Processing inside the camera's internal processor
I want to do some custom pre-processing on the RGB images before they are used to generate the depth frames. This pre-processing includes cropping and an algorithm that is specific to my use case. How can I do this?
I have already read on the portal that post-processing cannot be done in the camera's internal VPU, but I am asking again: is it possible to change the code within the API so that some post-processing runs inside the camera's internal VPU?
I found this code on the following link:
#include <librealsense2/rs.hpp>
#include <thread>

int main()
{
    rs2::pipeline pipe;
    pipe.start();

    const auto CAPACITY = 5; // allow max latency of 5 frames
    rs2::frame_queue queue(CAPACITY);

    // Consumer thread: takes frames off the queue and processes them
    std::thread t([&]() {
        while (true)
        {
            rs2::frame frame;
            if (queue.poll_for_frame(&frame))
            {
                frame.get_data();
                // Do processing on the frame
            }
        }
    });
    t.detach();

    // Producer loop: hands each new depth frame to the queue
    while (true)
    {
        auto frames = pipe.wait_for_frames();
        queue.enqueue(frames.get_depth_frame());
    }
}
Now, can I not just add my code in the body of the loop where it says "// Do processing on the frame"?
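For example, something along these lines (just a sketch; the crop bounds are arbitrary and my_algorithm() is only a placeholder for my actual processing code):

// Sketch of what could replace "// Do processing on the frame" above.
// 'frame' is the rs2::frame polled from the queue; my_algorithm() is a placeholder.
if (auto depth = frame.as<rs2::depth_frame>())
{
    const int w = depth.get_width();
    const int h = depth.get_height();

    // Crop to a centered region of interest and run the custom algorithm on it
    for (int y = h / 4; y < 3 * h / 4; y++)
    {
        for (int x = w / 4; x < 3 * w / 4; x++)
        {
            float dist = depth.get_distance(x, y); // depth in meters at (x, y)
            // my_algorithm(dist, x, y);           // placeholder for my processing
        }
    }
}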
-
Hi Dhruvdarda2001. It is not possible to perform post-processing on the camera hardware, unfortunately. The camera does not have a CPU, and it only has 16 MB of storage space in its EEPROM flash memory component for storing the firmware driver.
RealSense cameras do not obtain the depth frame from the RGB image. Depth is instead calculated in the camera hardware from the raw left and right infrared images (not the Infrared and Infrared 2 streams). Because left and right infrared frames are used, the camera's technology is called stereo depth.
As you are using C++ code, you could potentially improve performance by using the SDK's GLSL Processing Blocks system to offload work from the CPU to the GPU of the computer. GLSL is 'vendor neutral', meaning that it should work with any GPU brand, including integrated graphics, though the difference in performance may not be noticeable on low-end computers / computing devices.
The link below has a very good pros and cons analysis of GLSL processing blocks and when it can be used.
https://github.com/IntelRealSense/librealsense/pull/3654
There is also a C++ rs-gl example program.
https://github.com/IntelRealSense/librealsense/tree/master/examples/gl
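To give a rough idea (this is only a sketch based on that example, assuming the librealsense2-gl header and the rs2::gl:: classes it declares; the OpenGL context initialisation shown in rs-gl is omitted here), the GLSL blocks are used as drop-in replacements for the standard processing blocks:

#include <librealsense2/rs.hpp>
#include <librealsense2-gl/rs_processing_gl.hpp> // header for the rs2::gl:: processing blocks

// NOTE: before using the rs2::gl:: blocks, GL processing has to be initialised
// against an OpenGL context (rs2::gl::init_processing), as shown in the rs-gl example.
// That setup step is omitted here to keep the sketch short.
void process_frames(rs2::pipeline& pipe)
{
    // Drop-in GPU replacements for the standard CPU processing blocks:
    rs2::gl::pointcloud pc;       // instead of rs2::pointcloud
    rs2::gl::colorizer colorizer; // instead of rs2::colorizer

    while (true)
    {
        rs2::frameset frames = pipe.wait_for_frames();
        rs2::frame depth = frames.get_depth_frame();
        rs2::frame colorized = colorizer.colorize(depth); // colorization on the GPU
        rs2::points points = pc.calculate(depth);         // pointcloud generation on the GPU
    }
}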
-
GLSL would be problematic for Python, though, because implementing it involves changing C++ rs2:: instructions to rs2::gl:: ones, and pyrealsense2 does not use rs2:: commands.
The other way to offload processing from the CPU to the GPU is to use an Nvidia Jetson board as the computer, as the RealSense SDK's CUDA support can then be enabled (with the BUILD_WITH_CUDA flag when building the SDK) to automatically GPU-accelerate alignment and pointclouds without having to edit any code.