Depth image generation and the depth ranging principle
Dear MartyG, hello! What kind of images are captured by the left and right infrared sensors of the D435i camera? How do these two sensors combine to produce a depth image? Finally, what is the depth ranging principle of the D435i?
-
Hello! The left and right sensors of the D435i capture monochrome infrared images. The camera hardware captures a left and a right infrared image and uses them to generate a depth frame. The data sheet for the RealSense 400 Series cameras provides the following explanation of how depth frames are produced:
"The left and right imagers capture the scene and send imager data to the depth imaging (vision) processor, which calculates depth values for each pixel in the image by correlating points on the left image to the right image and via the shift between a point on the Left image and the Right image. The depth pixel values are processed to generate a depth frame."
https://dev.intelrealsense.com/docs/intel-realsense-d400-series-product-family-datasheet
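In other words, the vision processor finds the same physical point in both infrared images and measures its horizontal shift (the disparity); depth then follows from stereo triangulation. Here is a minimal sketch of that relationship; the focal length and the roughly 50 mm baseline below are illustrative assumptions, and on a real camera both values come from the calibrated intrinsics and extrinsics:

```python
# Stereo triangulation sketch: depth from disparity.
# The constants are illustrative assumptions, not values read from a camera.
FOCAL_LENGTH_PX = 640.0   # focal length in pixels (assumption)
BASELINE_M = 0.050        # left/right imager spacing, ~50 mm (assumption)

def depth_from_disparity(disparity_px: float) -> float:
    """Return depth in meters for a horizontal pixel shift (disparity)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

# A point that shifts 16 pixels between the left and right images:
print(depth_from_disparity(16.0))  # -> 2.0 meters
```

Note how smaller disparities map to larger depths, which is why distant points are measured less precisely than near ones.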
The D435i camera model can depth-sense up to around 10 meters from the camera. Depth measurement error starts at around zero at the camera lenses and increases with distance; for stereo cameras the theoretical RMS depth error grows roughly with the square of the distance. So drift in accuracy on the D435i model will start to become noticeable after about 3 meters.
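Intel's tuning white paper for the D400 series gives a rule of thumb for this: theoretical RMS depth error = distance squared x subpixel accuracy / (focal length in pixels x baseline). A small sketch with illustrative values follows; the subpixel figure, focal length, and baseline are assumptions for demonstration, not specifications:

```python
# Rule-of-thumb stereo RMS depth error: grows with the square of distance.
# All constants are illustrative assumptions.
FOCAL_LENGTH_PX = 640.0   # focal length in pixels (assumption)
BASELINE_MM = 50.0        # D435i-class baseline, ~50 mm (assumption)
SUBPIXEL = 0.08           # assumed subpixel disparity accuracy

def rms_error_mm(distance_mm: float) -> float:
    """Theoretical RMS depth error in mm at a given distance in mm."""
    return (distance_mm ** 2) * SUBPIXEL / (FOCAL_LENGTH_PX * BASELINE_MM)

for meters in (1, 3, 6, 10):
    print(meters, "m ->", round(rms_error_mm(meters * 1000), 1), "mm")
# Error roughly quadruples each time the distance doubles, which is why
# accuracy drift becomes noticeable a few meters from the camera.
```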
-
The amount of difference between the depth and color fields of view depends on the RealSense camera model being used.
On the D435i camera model, where the RGB color sensor's field of view (FOV) is smaller than the left / right infrared sensors' FOV, aligning depth to color can cause the outer edges of the depth detail to be cut off in the aligned image.
When aligning color to depth, though, this outer edge detail is not lost, because the color image is stretched to fit the larger depth field of view.
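For completeness, the RealSense SDK's pyrealsense2 wrapper exposes both alignment directions through its align processing block. A minimal sketch is below; the stream resolutions and frame rates are example values, so adjust them to what your camera supports:

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
# Example resolutions and frame rates (assumptions for this sketch).
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

# Align depth to the color sensor's (smaller) field of view...
align_to_color = rs.align(rs.stream.color)
# ...or align color to the depth sensor's (larger) field of view.
align_to_depth = rs.align(rs.stream.depth)

try:
    frames = pipeline.wait_for_frames()
    aligned = align_to_color.process(frames)
    depth = aligned.get_depth_frame()
    color = aligned.get_color_frame()
    # Depth pixels now correspond 1:1 with color pixels, at the cost of
    # discarding depth detail outside the color FOV on a D435i.
finally:
    pipeline.stop()
```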