How to get depth information (depth pixels) and extract the *.raw file?
Hello everyone, I am using the L515 to capture RGB, depth, and IR images of pavement surfaces. Currently I am working on RGB-D object detection and am acquiring a new dataset, so I need to label (annotate) the RGB images. The labels store the pixel coordinates in the RGB image. I have a few questions:
1. When I save depth data from the Intel RealSense Viewer, it generates a *.raw file. How can I read this raw file to get the depth information?
2. I have noticed that the depth stream seems to have a larger field of view, and the resolutions of the RGB and depth streams are different. How can I find the depth value corresponding to a pixel in the RGB image, at the same resolution?
3. How do I fuse the point cloud coordinates with the RGB image?
For the implementation, MATLAB and Python are both fine for me. Thank you very much.
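Regarding question 1: the Viewer's *.raw depth export is commonly raw little-endian 16-bit depth values, one per pixel, at the stream resolution (an assumption worth verifying for your firmware and stream settings). A minimal NumPy sketch for loading such a file:

```python
import numpy as np

def load_raw_depth(path, width=640, height=480):
    """Load a Viewer depth export, assumed to be raw little-endian
    16-bit depth values (one per pixel) at the given stream resolution."""
    data = np.fromfile(path, dtype=np.uint16)
    return data.reshape(height, width)

# The values are in sensor units; multiply by the device's depth scale
# (queried via depth_sensor.get_depth_scale() in the SDK) to get metres.
```

If the reshape fails, the file is not a bare 16-bit buffer at that resolution, and the assumption above does not hold for your export.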
Read the Pointclouds and Frame Alignment documentation for information on creating point clouds and aligning depth to color using the RealSense SDK 2.0. You can also view the Python samples OpenCV Pointcloud Viewer and AlignDepth2Color.
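Conceptually, the alignment and point-cloud mapping the SDK performs (via `rs2_deproject_pixel_to_point` / `rs2_project_point_to_pixel`) is based on the pinhole camera model with each stream's intrinsics plus the depth-to-color extrinsics. A simplified sketch of the idea, ignoring lens distortion and the extrinsic transform:

```python
import numpy as np

def deproject(u, v, z, fx, fy, ppx, ppy):
    """Back-project pixel (u, v) with depth z (metres) into a 3-D
    point in the camera frame, using the pinhole model."""
    return np.array([(u - ppx) * z / fx, (v - ppy) * z / fy, z])

def project(point, fx, fy, ppx, ppy):
    """Project a 3-D point back into pixel coordinates."""
    x, y, z = point
    return np.array([x * fx / z + ppx, y * fy / z + ppy])
```

In practice you should let `rs.align` or `pointcloud.map_to` do this, since they also apply the distortion model and the depth-to-color extrinsic rotation/translation that this sketch omits.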
Intel Customer Support
Dear Andi-1,
I am new to this area. I am using the new version of the SDK Viewer. During streaming I display the RGB, depth, and IR streams. When I save the depth information, it automatically generates two files: a *.png and a *.raw. Is there another way to save the depth information? You mentioned saving a *.bag file. Does that file contain the depth information corresponding to the RGB image? Thank you.
Hi, I might have found a way you could get the *.bag file or the other formats:
Also checkout: https://intelrealsense.github.io/librealsense/python_docs/_generated/pyrealsense2.html
If you're using Python, try this:
import pyrealsense2 as rs

pipe = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipe.start(config)                    # start streaming before requesting frames
frames = pipe.wait_for_frames()
depth = frames.get_depth_frame()
print(depth.get_distance(320, 240))   # distance in metres at the image centre
pipe.stop()