How to get the overlapping FOV of the left and right infrared sensors for D435i?

Comments

3 comments

  • MartyG

    Hi Yxliuwm. The diagram below, from page 81 of the current edition of the 400 Series data sheet document, may be what you are looking for.

    https://www.intelrealsense.com/wp-content/uploads/2022/11/Intel-RealSense-D400-Series-Datasheet-November-2022.pdf

    A RealSense user plotted their own D435i FOV diagrams here:

    https://support.intelrealsense.com/hc/en-us/community/posts/4405875311123-About-make-sure-FOV-specification-of-D435i

    A paper at the link below suggests a method for mathematically calculating the overlap of left and right sensors on a stereo depth camera such as the RealSense 400 Series.

    https://www.voptronix.com/papers/stereo3d/index.html

    I am not aware of a method for calculating the overlapping shaded area with RealSense SDK programming instructions, unfortunately.

  • Yxliuwm

    Thank you so much for these very helpful links!

    What I want to do is find accurate 3D coordinates for some special points, as shown in the picture.

    One way I tried is to detect this corner on the 2D left infrared image and then, based on the depth stream, get its depth as follows:

    float Z = dst.at<uint16_t>(i) * 0.001f * 1000.0f; // dst is the 16-bit depth frame; 0.001 converts raw units to meters, 1000.0 back to millimeters (the two factors cancel, so Z stays in raw depth units, i.e. millimeters at the default depth scale)

    Then calculate the 3D coordinate based on the detected 2D corner and the depth Z as follows.
    float X = (pt2D.x - cx) * Z / fx; // pt2D is the detected corner; fx, fy are the focal lengths and cx, cy the principal point of the left infrared intrinsics
    float Y = (pt2D.y - cy) * Z / fy;

    However, the 3D coordinate obtained this way is not accurate enough. The accuracy I want to reach is around 1.0 millimeter at a working distance of around 400 millimeters.

    Since the detected 2D corner has subpixel accuracy, I think most of the error comes from the depth map. I only care about the depth at a few points of interest rather than an entire dense map. I suspect the SDK does some postprocessing, such as smoothing or hole filling, at the cost of accuracy.

    So I plan to try another method: detect the corner on both the left and right infrared images and then use triangulation to get the 3D coordinate. In other words, calculate the depth on my own instead of relying on the provided dense depth map. To easily find the correspondence between the left and right infrared images, I need to know the shared FOV. That is why I posted this question.

    My final goal is to get accurate 3D coordinates for some special corners. Perhaps I need to try the D415, since it provides better depth accuracy. Any suggestions would be much appreciated.

  • MartyG

    Part of the problem may be that the area of the image being depth-sensed is dark gray or black. It is a general physics principle (not specific to RealSense) that dark gray or black surfaces absorb light, which makes it more difficult for depth cameras to read depth information from them. The darker the shade, the more light is absorbed and the less depth detail the camera can obtain.

    Also, the corner point that you have highlighted is very small and thin, and so may be difficult for the camera to capture on the depth image.

    You can obtain the 3D XYZ coordinate of a single point without using alignment or a pointcloud by using the RealSense SDK instruction rs2_project_color_pixel_to_depth_pixel, which converts a color pixel into a depth pixel.

