
What is the depth value generated from the L515?

Comments

9 comments

  • Aznie Syaarriehaah

    Hi Lyhour Newtechnology,
    The value represents the distance from the camera to the object. The L515 emits a laser beam and measures the time it takes for the light reflected off an object to return to the camera. The L515 accomplishes this using a continuous coded IR beam, and the return time is used to calculate the distance to the object. By repeating this process millions of times a second, the L515 is able to build a high-resolution depth map of the scene.

    Meanwhile, if you align the depth frame to the color frame, the depth value will represent the distance that corresponds to the RGB pixel. A depth image is an image channel in which each pixel relates to a distance between the image plane and the corresponding object in the RGB image.
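    For reference, a minimal sketch of this with pyrealsense2 could look like the code below (the default stream profiles and the pixel coordinates (320, 240) are just placeholders; adjust them to your setup):

        # Minimal sketch: align depth to color and read the distance at one pixel.
        # Assumes pyrealsense2 is installed and an L515 is connected.
        import pyrealsense2 as rs

        pipeline = rs.pipeline()
        config = rs.config()
        config.enable_stream(rs.stream.depth)   # default depth profile
        config.enable_stream(rs.stream.color)   # default color profile
        pipeline.start(config)

        align = rs.align(rs.stream.color)       # map depth pixels onto the color image
        try:
            frames = pipeline.wait_for_frames()
            aligned = align.process(frames)
            depth = aligned.get_depth_frame()
            x, y = 320, 240                     # placeholder color-pixel coordinates
            # get_distance() returns the depth value in meters for that RGB pixel
            print("Distance at (%d, %d): %.3f m" % (x, y, depth.get_distance(x, y)))
        finally:
            pipeline.stop()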

    You can find more information regarding these in the L515 User Guide documentation.

    Regards,
    Aznie
    Intel RealSense Customer Support

  • Lyhour Newtechnology

    Thank you very much for your kind answer. I understand the depth value now. However, you mentioned that when the depth frame is aligned to the color frame, the depth value represents the distance corresponding to the RGB pixel. When I tried it, I found that the coordinates of the aligned depth and RGB images do not quite match, as shown in the attached figure 1. I also tried aligning color to depth instead, and it produced a bad alignment, as shown in the attached figure 2. Is there any way to solve this problem? Thank you very much, sir.

    Best Regards,

    CHHAY LYHOUR

  • Aznie Syaarriehaah

    Hi Lyhour Newtechnology,
    Have you tried running the align-depth2color.py example? The bad depth image might be due to the lighting conditions of your space. Infrared light from the sun coming through windows can interfere with the device's performance and degrade the quality of the depth images when it is used outdoors. As a power-efficient LiDAR camera, the L515 performs best indoors or in controlled lighting conditions. Meanwhile, the depth accuracy of the device is as follows:

    Depth error average at 1m distance from the camera is <5mm*
    Depth error std deviation at 1m distance from the camera is 2.5mm*
    Depth error average at 9m distance from the camera is <14mm*
    Depth error std deviation at 9m distance from the camera is 15.5mm*
    *at 95% reflectivity

    The blurry depth image that you see might be caused by poor reflection of the laser beam off the object's surface.

    Regards,
    Aznie 

  • Lyhour Newtechnology

    Thank you very much for your reply. The figure I showed you was produced with align-depth2color.py from the Intel RealSense Python examples. My first thought was that the blurring or noise in the depth image when aligning depth to color is caused by the sensor not generating depth information there. However, I am really concerned that the coordinates of the aligned depth do not correspond well to the color coordinates. How can I get rid of these problems? Thank you very much, sir.

    Best Regards,
    LYHOUR

  • Aznie Syaarriehaah

    Hi Lyhour Newtechnology,

    The gaps in data between the aligned RGB and depth frames occur because the FOVs of the RGB and depth sensors do not completely overlap. By the way, in your previous reply, what do you mean by the coordinates of the aligned RGB and depth not matching, as shown in your figure 1? How did you measure it?
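
    If it helps, a rough sketch like the one below (assuming pyrealsense2 and numpy are installed and the camera uses its default stream profiles) can show how much of the aligned depth image has no data; the zero-valued pixels are the gaps where the two FOVs do not overlap or where no usable return came back:

        # Sketch: count the pixels with no depth data in the aligned depth image.
        import numpy as np
        import pyrealsense2 as rs

        pipeline = rs.pipeline()
        config = rs.config()
        config.enable_stream(rs.stream.depth)
        config.enable_stream(rs.stream.color)
        pipeline.start(config)

        align = rs.align(rs.stream.color)
        try:
            frames = align.process(pipeline.wait_for_frames())
            depth_image = np.asanyarray(frames.get_depth_frame().get_data())
            # 0 means "no depth": outside the depth FOV after alignment,
            # or no usable laser return from that surface.
            missing = depth_image == 0
            print("Pixels without depth: %.1f%%" % (100.0 * missing.mean()))
        finally:
            pipeline.stop()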

    Regards,

    Aznie

    Intel RealSense Customer Support

     

  • Lyhour Newtechnology

    Thank you for your reply, sir. In the attached figure 1, you can see that the region on the right side is not well aligned. Please see the green circle that I drew.

  • Aznie Syaarriehaah


    Hi Lyhour Newtechnology,
    That happens because the two FOVs are not the same, so the color and depth images cannot be aligned perfectly. When you align color to depth, some pixels cannot be mapped at all because there is no depth data for them, and the two images cannot completely overlap.
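
    To make the two directions explicit, here is a small illustrative snippet (nothing in it is L515-specific; only the target stream passed to rs.align differs):

        import pyrealsense2 as rs

        # Depth-to-color: depth is re-projected onto the color image. Gaps appear
        # where the color FOV is not covered by depth, but every color pixel keeps
        # its own coordinates.
        align_to_color = rs.align(rs.stream.color)

        # Color-to-depth: color is re-projected onto the depth image. Any pixel
        # without depth data cannot be mapped at all, which is why this direction
        # usually looks worse.
        align_to_depth = rs.align(rs.stream.depth)

    Either object is then applied with align.process(frames), as in the sketches above.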

    Regards,
    Aznie
    Intel RealSense Customer Support

  • Lyhour Newtechnology

    Thank you very much, sir. I understand the problem now. So how can I get rid of it? Is there any way to solve it?

  • Aznie Syaarriehaah
    Hi Lyhour Newtechnology,
    You can try upgrading to the latest version of librealsense, 2.44.0. Run the Python sample that is included in 2.44 as well as the rs-align C++ sample. You may get better results with the C++ sample, but there will always be a lack of data in the areas where the FOVs do not overlap.

    Regards,
    Aznie
    Intel RealSense Customer Support
