Converting Depth Frame to Color Frame With RealSense Depth Cameras
Once the depth frame is read from the RealSense depth camera, is there any API to convert the depth frame to a color frame, or to obtain the color frame from the depth frame?
-
Hi Sandesh Kumar S What you are describing sounds like colorization of the depth image so that it can be treated as an RGB image, followed by a 'recovery' process to restore the depth image from the RGB image. Intel has a white-paper document on depth image compression by colorization that covers this subject.
-
Hi MartyG,
Thanks for providing the information.
But what I was looking for is whether we can obtain the color frame from a colorized depth frame.
If you look at the example image above, is it possible to obtain the right image from the left image?
After calling the get_depth_frame() function, instead of calling get_color_frame(), can we convert the depth frame to a regular RGB frame by eliminating the depth field?
Thanks,
Sandesh
-
The camera's RGB stream comes from an RGB sensor that is a separate component from the depth sensor. So the RGB and depth streams come from two different sources. The depth image is constructed from the left and right infrared sensors.
You can get an RGB-like color image from the left infrared sensor on the D415 and D455 camera models by setting the left infrared stream to the RGB8 format instead of Y8. The depth and the RGB8 streams would then both come from the same sensor, though its colors are not quite the same as those of a full RGB image.
https://github.com/IntelRealSense/librealsense/issues/7897#issuecomment-736370516
https://github.com/IntelRealSense/librealsense/issues/7870
I do not know of any way, though, to change the frame from one format to another in real time so that it is retrieved by get_color_frame() instead of get_depth_frame().
You may gain more control over frames by manually setting up your own custom frameset.
-
Yes, there are GStreamer plugins that are compatible with the 400 Series cameras. The links are provided below.
https://github.com/WKDSMRT/realsense-gstreamer
https://www.aivero.com/2020/03/gstreamer-elements-realsense-open-sourced/
The documentation for the aivero plugin states: "We have also made available the `rgbdmux` and `rgbddemux` elements, respectively muxing and demuxing our `video/rgbd` to the contained elementary streams, i.e. on the D400 series these would be `depth, infra1, infra2, colour`. These demuxed video streams can now be used like any other video stream inside GStreamer, giving developers access to all the powerful tools GStreamer provides".
A RealSense user also wrote a script for streaming both depth and RGB data simultaneously over a network.
-
There have been past cases of RealSense users making use of the v4l2src GStreamer plugin with the 400 Series cameras. One of them found that their stream would only work if a full size USB 3 port was used instead of a micro-size USB 3 OTG port.
https://github.com/IntelRealSense/librealsense/issues/4170
I didn't see confirmation in my research of your question that someone had succeeded in streaming depth. Dorodnic, the RealSense SDK Manager, has said in the past though that standard Linux tools can access raw camera data if librealsense is built with the V4L2 backend:
https://github.com/IntelRealSense/librealsense/issues/6841#issuecomment-660859774
-
I was unable to find a straightforward example of depth being captured with OpenCV's VideoCapture(), though there was a tutorial from someone who solved the problem by creating a VideoCapture()-like class.
https://titanwolf.org/Network/Articles/Article?AID=6d47b992-6d96-4e42-9393-bd7b50c3836c#gsc.tab=0
The same website also published an article about getting depth via VideoCapture using the RealSense implementation of Kinfu (KinectFusion):
https://titanwolf.org/Network/Articles/Article?AID=185dd26d-d5f3-45ad-87b3-ceef9c3020c3#gsc.tab=0
-
The float rs2::depth_frame::get_units() const function (http://docs.ros.org/en/kinetic/api/librealsense2/html/classrs2_1_1depth__frame.html#a4a71c62f44d2554c9a7359169c0c744e) returns the depth scale value. Is it possible to calculate the depth value at a given pixel using the depth scale value?
-
You can either use get_distance() or multiply the 16-bit pixel depth value (uint16_t) by the depth unit scale to obtain the distance in meters. These two methods are compared in the discussion in the link below.
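As an illustration of the second method, here is a minimal sketch, assuming made-up pixel values and the 0.001 depth scale that is typical for the 400 Series (check get_units() on your own camera rather than hard-coding it):

```python
import numpy as np

# Value that rs2::depth_frame::get_units() would typically return:
# one raw Z16 unit equals 1 mm on most 400 Series cameras.
depth_scale = 0.001

# Stand-in for the raw 16-bit (Z16) depth image that get_data() returns;
# the values here are invented for this sketch.
depth_raw = np.array([[1500, 2000],
                      [0,    4000]], dtype=np.uint16)

# Raw units * depth scale = distance in meters. This is the same
# arithmetic that get_distance(x, y) performs internally.
depth_m = depth_raw.astype(np.float64) * depth_scale
```

A raw value of 1500 therefore corresponds to 1.5 m, and a raw value of 0 means no valid depth was measured at that pixel.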
-
Can a cv::Mat be converted back to an rs2::frame?
-
I do not have information that precisely describes the differences between rs2::frame and cv::Mat. Typically, the conversion process involves storing the height (h) and width (w) of the rs2 frame and using those values in a Mat image(Size(w, h), ...) constructor call, like the RealSense SDK OpenCV code snippet in the link below.
BTW, my research found an alternative approach for converting cv::Mat to rs2::frame.
-
I am not certain of what you are describing. Would the Keep() method, which stores frames in the computer's memory until the pipeline closes and then performs batch processing operations on all of the stored frames (e.g. post-processing, alignment, saving to file), meet your needs?
-
Consider the following line of code:
rs2::video_frame color = data.get_color_frame();
Here the first color frame is returned.
Is it possible to create a buffer using malloc and pass the buffer to get_color_frame()?
Something like this:
// create buffer
rs2::video_frame color = data.get_color_frame(buffer);
and then pass the same buffer to get_depth_frame():
rs2::video_frame depth = data.get_depth_frame(buffer);
My goal is to have a single buffer contain both the color and depth frames.
-
This is a difficult question. I suspect that the solution may lie in defining your own custom frameset using software_device or a custom processing block, as described in the link below.
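As a rough sketch of the single-buffer idea on the application side: the SDK's getters do not accept a caller-supplied buffer, but the bytes that get_data() exposes can be copied into one allocation that you manage yourself. This sketch uses NumPy arrays with made-up sizes as stand-ins for the color (RGB8) and depth (Z16) frame data:

```python
import numpy as np

# Invented dimensions; real frames would be e.g. 640x480.
w, h = 4, 3
color = np.arange(h * w * 3, dtype=np.uint8).reshape(h, w, 3)   # stand-in RGB8
depth = (np.arange(h * w, dtype=np.uint16) * 100).reshape(h, w)  # stand-in Z16

color_bytes = color.tobytes()
depth_bytes = depth.tobytes()

# One contiguous buffer: color data first, depth data after it.
buffer = bytearray(len(color_bytes) + len(depth_bytes))
buffer[:len(color_bytes)] = color_bytes
buffer[len(color_bytes):] = depth_bytes

# The consumer slices the two images back out using the known sizes.
color_out = np.frombuffer(bytes(buffer[:len(color_bytes)]),
                          dtype=np.uint8).reshape(h, w, 3)
depth_out = np.frombuffer(bytes(buffer[len(color_bytes):]),
                          dtype=np.uint16).reshape(h, w)
```

The same layout works with a malloc'd buffer and two memcpy calls in C++; the key point is that the width, height and pixel formats must be known to both sides so the offsets can be computed.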
-
I have read a depth frame from the RealSense camera and converted it to cv::Mat format. Now I am trying to encode this depth frame by applying JPEG compression with cv::imencode, and later decode it with cv::imdecode. But I am noticing that the depth frame gets malformed after the cv::imencode and decode steps. Is there any reference for applying OpenCV imencode and imdecode to a RealSense depth frame?
-
My research did not find much information about the use of imencode with RealSense specifically, though it did locate an imencode-using RealSense program for pyrealsense2 (Python) that sends and receives data with OpenCV.
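One likely cause of the malformed frames is worth noting: the Z16 depth format is 16 bits per pixel, while JPEG is a lossy codec designed for 8-bit channels, so a JPEG round-trip cannot preserve the depth values. The sketch below simulates just the 8-bit truncation step with NumPy and made-up values (real cv::imencode additionally applies lossy DCT compression on top of this):

```python
import numpy as np

# Stand-in depth pixels in Z16 (16-bit) format; the values are invented.
depth = np.array([[1500, 2000, 300]], dtype=np.uint16)

# Forcing the data through an 8-bit representation, as a JPEG pipeline
# does, keeps only the low byte of each 16-bit value.
as_8bit = depth.astype(np.uint8)
restored = as_8bit.astype(np.uint16)

print(np.array_equal(depth, restored))  # prints False: the high byte is gone
```

A lossless format that supports 16-bit images, such as PNG (imencode with a ".png" extension), should preserve the depth values exactly, at the cost of a lower compression ratio than JPEG.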
-
The get_data() member function of the rs2::frame class retrieves the data from a frame handle. In my application I want to send this data over a message bus, and at the other end of the application I want to re-create an rs2::frame from this frame data.
Is it possible to construct a new rs2::frame from the frame data? Is there an API for it?
-
Hi Sandesh Kumar S If you want to create your own custom frameset, you can do so with software_device or by creating a custom processing block.
-
I do not want to create a custom frameset. I want to re-create an rs2::depth_frame from the raw bytes of the frame.
I also wanted to understand whether there is any API to query the maximum width, height and framerate supported by a RealSense camera.
In the default configuration the camera ingests frames at the maximum resolution.
-
For the reference of other readers of this case, I will post a link to the GitHub version of the above questions.
-
Hi MartyG, with just the pointer to the depth frame data, how can I construct an rs2::depth_frame?
I cannot use a processing block or a software_device because in my application I do not have the source rs2::depth_frame; the get_data() function gives us the pointer to the frame data, which is what we pass to our application.
For instance, OpenCV provides Mat constructors that accept a pointer to user data to construct a cv::Mat frame (https://docs.opencv.org/master/d3/d63/classcv_1_1Mat.html#a51615ebf17a64c968df0bf49b4de6a3a).
Is there any similar approach in rs2?
-
If your aim is to convert a cv::Mat to an rs2::frame, does the cv::Mat conversion process referenced earlier in this discussion not provide this ability in a way that fits your project?
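For what it is worth, the receiving side of a message bus can at least rebuild the depth image as a plain array from the raw bytes, analogous to the cv::Mat(rows, cols, type, data) constructor. This does not produce a true rs2::depth_frame (software_device remains the supported route for that), and the width, height and pixel format must be transmitted alongside the bytes. A minimal NumPy sketch with made-up dimensions:

```python
import numpy as np

# Invented frame dimensions; in practice these must travel with the data.
w, h = 640, 480

# Stand-in for the Z16 depth image whose bytes get_data() would expose.
original = (np.arange(w * h, dtype=np.uint16) % 5000).reshape(h, w)

# Sender: the raw byte payload placed on the message bus.
payload = original.tobytes()

# Receiver: re-wrap the bytes as a depth image using the known
# width, height and 16-bit (Z16) element type, with no copy of meaning
# lost; this mirrors cv::Mat's user-data constructor.
rebuilt = np.frombuffer(payload, dtype=np.uint16).reshape(h, w)
```

From here the array can be processed with OpenCV or NumPy as usual; only operations that need librealsense frame metadata (timestamps, intrinsics, get_distance()) require reconstructing a real frame via software_device.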