
Ground truth accuracy problem

Comments

29 comments

  • MartyG

    With RealSense 400 Series cameras, the error in depth measurements starts at around zero at the camera position and grows as the observed object or surface gets further from the camera (roughly with the square of the distance).  This is called RMS error.  It is discussed in Point 5 of the section of Intel's camera tuning paper linked below.

    https://dev.intelrealsense.com/docs/tuning-depth-cameras-for-best-performance#section-verify-performance-regularly-on-a-flat-wall-or-target
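
    As a rough illustration of that scaling, the tuning paper's stereo error model can be sketched in Python.  The baseline, focal length and subpixel values below are illustrative assumptions, not calibrated D435i figures.

```python
def depth_rms_error_mm(z_mm, baseline_mm=50.0, focal_px=640.0, subpixel=0.08):
    """Approximate stereo depth RMS error in mm, per the tuning paper's
    model: error grows with the square of the distance z."""
    return (z_mm ** 2) * subpixel / (baseline_mm * focal_px)

# Doubling the distance roughly quadruples the expected RMS error:
err_at_500mm = depth_rms_error_mm(500.0)    # ~0.6 mm with these assumptions
err_at_1000mm = depth_rms_error_mm(1000.0)  # ~2.5 mm
```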


    You may be able to reduce RMS error on your D435i if the scene the camera is observing is well illuminated, as you could then disable the camera's projector, which casts an infrared dot pattern onto the scene to aid depth analysis but can also add noise to the image.  If the lighting in the scene is good, the camera can use that light for depth analysis of objects and surfaces instead of the dot-pattern projection.
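
    Disabling the projector can also be done through the SDK rather than the Viewer.  Below is a minimal sketch around pyrealsense2's option interface; the helper name is mine, and the sensor object is assumed to come from a started pipeline.

```python
def set_emitter(depth_sensor, emitter_option, enabled):
    """Toggle the IR dot projector.  `depth_sensor` is expected to be a
    pyrealsense2 depth sensor and `emitter_option` the
    rs.option.emitter_enabled constant; returns True if applied."""
    value = 1.0 if enabled else 0.0
    if depth_sensor.supports(emitter_option):
        depth_sensor.set_option(emitter_option, value)
        return True
    return False
```

    With a live camera this would be called as `set_emitter(profile.get_device().first_depth_sensor(), rs.option.emitter_enabled, False)` after starting an `rs.pipeline()`.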

    If switching to the RealSense D455 camera model is an option, it has twice the accuracy over distance of the D435i, meaning that at 6 meters distance the D455 has the same accuracy that the D435i does at 3 meters.

  • 241220

    Thank you for the quick answer.  Maybe I can explain the problem better: I don't have problems with camera noise, but the estimated mean plane is in a different position than the measured one.

    For example, I have points estimated from the camera that measure between 197.4 mm and 197.8 mm, but their real value is 200 mm.

    I'm looking to fix the difference between the estimated mean plane and the plane that I measured in the experimental setup.
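
    One way to quantify that offset is to fit the mean plane yourself and compare its distance against the tape measurement.  A least-squares sketch with NumPy (the point coordinates below are illustrative, not from the thread's data):

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to an (N, 3) point array;
    returns the coefficients (a, b, c)."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs

# For a fronto-parallel wall, c is the fitted plane's distance at the
# optical axis; comparing it with the measured distance gives the bias.
a, b, c = fit_plane([[0, 0, 197.4], [1, 0, 197.8],
                     [0, 1, 197.6], [1, 1, 197.6]])
```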

  • MartyG

    A RealSense depth measurement will not be exactly the same as a real-world measurement taken with a tape measure.  Aside from RMS error factoring into the measurement, there may also be additional error caused by environmental and lighting factors in the location where the camera is used, or by elements in the scene that confuse the depth algorithm or make a surface more difficult to read accurately.


    RealSense camera models with front glass, such as the D435i, also take their depth measurement from the front glass instead of from the camera lenses inside the housing.  On the D435i, -4.2 mm should be added to depth measurements (that is, 4.2 mm subtracted) to find the ground-truth measurement at the lenses.
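
    If you are comparing against a tape measurement taken at the lenses, the correction is simple arithmetic.  The helper name below is mine; the 4.2 mm figure is the D435i front-glass offset mentioned above.

```python
D435I_GLASS_OFFSET_MM = 4.2  # front-glass-to-lens offset for the D435i

def depth_at_lenses_mm(measured_depth_mm):
    """Reference a D435i depth reading (taken at the front glass)
    to the lenses by applying the -4.2 mm offset."""
    return measured_depth_mm - D435I_GLASS_OFFSET_MM
```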

  • 241220

    Thank you, I didn't know about that. This worsens my estimated error: 

    Based on what you've written my error is:

    513 - (499.76 - 4.2) = 17.44 mm

    Could that be right, or is it too much?

  • MartyG

    17 mm of error is possible if the camera is having difficulty reading depth detail from the scene.  Examples of disruptive elements could be reflections, dark grey / black surface colors, or confusion in the depth algorithm caused by horizontal or vertical arrangements of similar-looking objects (such as a horizontal row of fence posts, a vertical stack of window-blind slats or a tiled floor / ceiling), known as a repetitive pattern.  Fluorescent light sources such as ceiling strip lights can also cause noise.  There are methods to reduce the negative effects of these disruptive elements, though.

    It would be helpful if you could provide an RGB image of the scene as this will help to diagnose potential elements in the scene that could be causing error in the depth image. 

  • 241220

    [image]

    That's what I see in the depth quality tool. I have these results:

    [image]

    And this is what I see from the rgb:

    [image]

    I'm using the white wall with the projected pattern for testing.

  • MartyG

    The depth image looks excellent.  The RGB image of the white wall looks more grey though, suggesting to me that the light illumination in that wall area might be dim.  Low lighting can change how visible the dot pattern is to the camera on some materials during different phases of the day.  For example, the pattern may be clearly visible to the camera in the morning but almost disappear later in the day as the natural lighting changes.

    The camera needs to be able to see the dots in order to use them as a texture source on a surface to analyze that surface for depth information.  If the pattern is not clearly visible then the camera may need to rely on strong ambient lighting instead to perform depth analysis of the surface. 

  • 241220

    I tried with a different lighting condition.  Now I see the wall like this:

    [image]

    It doesn't significantly improve the results of the test in the Depth Quality Tool:

    [image]

    Do I have to increase the light further, or is there something else I can do to solve the problem?

  • MartyG

    The increased lighting on the wall in the above image is likely to be sufficient for good quality depth analysis. 

    I went back to the pair of images at the top of this case and note that you achieve significantly better accuracy results when the camera is at 200 mm / 20 cm from the wall than when it is at around 500 mm / 50 cm. 

    Could you please test whether increasing the Laser Power option from its default value of '156' to its maximum of '360' improves accuracy at 500 mm distance?  Maximum laser power should increase the visibility to the camera of the projected dot pattern on the wall.
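
    Outside the Viewer, the same option can be set through pyrealsense2.  A sketch that requires a connected camera; the clamp helper simply keeps the request inside the option's reported valid range.

```python
def clamp(value, lo, hi):
    """Limit a requested option value to the device's supported range."""
    return max(lo, min(value, hi))

def set_max_laser_power():
    # Requires pyrealsense2 and a connected RealSense camera; untested sketch.
    import pyrealsense2 as rs
    pipeline = rs.pipeline()
    profile = pipeline.start()
    depth_sensor = profile.get_device().first_depth_sensor()
    rng = depth_sensor.get_option_range(rs.option.laser_power)
    depth_sensor.set_option(rs.option.laser_power,
                            clamp(360.0, rng.min, rng.max))
    pipeline.stop()
```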


  • 241220

    I have these Control Parameters (Laser Power changed as you said):

    [image]

    I got these results

    [image]

    A little bit better, but it isn't changing enough.

    I'm sharing the other parameters so you can check them:

  • MartyG

    How does it perform if you set Second Peak Threshold to '0' instead of the default '325'?

  • 241220

    I made the changes and nothing happened.

  • MartyG

    Technically, the optimal depth-accuracy resolution on the D435i is 848x480, though depth precision increases as resolution increases.  So is there any positive change if you increase the resolution to 1280x720?
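
    For reference, the stream resolution is chosen when the pipeline is configured.  A pyrealsense2 sketch; the mode list and helper are illustrative, and a connected camera is required to actually start the stream.

```python
# Two depth modes relevant to this comparison (width, height, fps) - illustrative.
D435I_DEPTH_MODES = [(848, 480, 30), (1280, 720, 30)]

def pick_mode(modes, width, height):
    """Return the first (w, h, fps) mode matching the requested resolution."""
    return next((m for m in modes if m[:2] == (width, height)), None)

def start_depth(width=1280, height=720):
    # Requires pyrealsense2 and a connected camera; untested sketch.
    import pyrealsense2 as rs
    w, h, fps = pick_mode(D435I_DEPTH_MODES, width, height)
    cfg = rs.config()
    cfg.enable_stream(rs.stream.depth, w, h, rs.format.z16, fps)
    pipe = rs.pipeline()
    pipe.start(cfg)
    return pipe
```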

  • 241220

    [image]

    I tried, but the results don't show any change.

  • 241220

    Could it be an error in the focal length calibration?

    [image]

  • MartyG

    Please provide an infrared image of the wall so that I can see how visible the dot pattern projection is on the wall surface at 500 mm range.

  • 241220

    That's what I see in the infrared image:

  • MartyG

    It looks as though the dot pattern is sufficiently visible to provide a texture source for the camera to analyze for depth information.

    At 500 mm range, you may get more accurate results if you set the ROI drop-down to 20% instead of the default 40%.  Does 20% work better for you?


  • MartyG

    A RealSense calibration specialist from Intel has read this case and will be posting expert advice for you on this discussion.

  • Sweetser, John N

    Hello,

    I can try to help with the accuracy issue you are reporting. Based on your observations, I suspect the underlying issue is related to the calibration of the camera. There are a few methods for resolving this but first a few comments/questions:

    - The various settings that you have been adjusting will affect different aspects of the depth quality but will have very little impact on the average Z values (or absolute accuracy).

    - Before attempting to make calibration adjustments it's important to confirm the magnitude of the errors you're observing. The ~2% error at ~500mm is based on a ground truth reference value (e.g., 513mm). Of course, the validity of the Z accuracy you determine will be limited by the accuracy of your GT. How was this GT measured and how confident are you that it's correct (to within 1-2mm)?
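
    As a quick sanity check on the magnitude, the relative error can be computed from the numbers already posted in this thread (with the front-glass offset subtracted first):

```python
def relative_error(measured_mm, ground_truth_mm):
    """Signed relative depth error against a ground-truth reference."""
    return (measured_mm - ground_truth_mm) / ground_truth_mm

# Figures from earlier in this thread: 499.76 mm measured, -4.2 mm
# glass offset, 513 mm ground truth.
err = relative_error(499.76 - 4.2, 513.0)  # about -3.4% at ~500 mm
```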

    - Assuming that the GT is sufficiently accurate and there is a >2% error at ~500mm (which is beyond expectation), there are a few approaches to improving this.

    1. The ideal method is to perform a complete re-calibration, equivalent to the original factory calibration. This "OEM" calibration requires a large target. If you have this, then that is the recommended approach since it should optimize the overall performance, including accuracy.
    2. Assuming you do not have OEM calibration capability, you can run a version of self-calibration that specifically addresses Z accuracy. It's called "Tare" and we can provide information on this if you need it.
    3. You may also run dynamic calibration using a GT target, which is another user calibration method.

    - You mention that you need ~0.2mm accuracy. This will be very difficult or impossible to achieve in practice for a few reasons (true GT is very difficult to obtain to better than ~1mm, and camera precision is generally no better than a few tenths of a mm depending on operating conditions). If your requirement is for this level of "relative" accuracy (depth differences between objects), this could be feasible.

    We will wait for your response and proceed from there.


  • 241220

    Hello,

    Sorry for the late reply.  I can use a relative measurement, but I have a ground-truth error that increases with distance, so I think I will have the same problem with relative accuracy unless I have two targets at the same distance.

    I don't have a target for an OEM calibration, can I have more information about Tare calibration?

    Could a new calibration be decisive in reducing the ground truth error in distance?

  • MartyG

    Further information about Tare can be found in Intel's self calibration white-paper guide at the links below.

    https://dev.intelrealsense.com/docs/self-calibration-for-depth-cameras#31-running-the-tare-routine


    https://dev.intelrealsense.com/docs/self-calibration-for-depth-cameras#addendum-a-march-2022-tare-calibration-with-ground-truth-target


    Within that guide, a printable target for Tare ground-truth calibration and focal-length calibration can be found here:

    https://dev.intelrealsense.com/docs/self-calibration-for-depth-cameras#ground-truth-targets-and-printing--mounting-instructions


    Yes, a ground truth calibration can make a positive difference.  A RealSense team member with specialist knowledge of calibration provides ground truth advice at the following links:

    https://github.com/IntelRealSense/librealsense/issues/10213#issuecomment-1030589898


    https://support.intelrealsense.com/hc/en-us/community/posts/6285037530259/comments/6381786011667

  • 241220

    I'm trying to perform a Tare calibration, but I got an error in the target search for the GT self-estimation.

    This is what the camera is seeing:

    Why does the camera not find the target?

  • 241220

    Another question: in the Tare calibration, do I have to add the -4.2 mm for the GT at the lenses?

  • MartyG

    You should not have to manually add a GT value when performing Tare calibration in the RealSense Viewer.  Point the camera at the printed target and left-click on the Get button to generate your own ground truth value for the scene that the camera is in instead of using the default '1200'.


    In the RealSense calibration specialist's advice, they state: "The default value of 1200 is arbitrary.  In normal usage, you should use a value that you know is correct (or as close as possible to true distance).  The Tare process will then attempt to modify the calibration so that the measured Z equals the entered GT Z.  If these values are very different, it may fail to converge".


    On the next panel that appears after clicking Get, center the yellow box on the target and click the Calibrate button.


  • 241220

    Even though I pointed the camera at the target and kept it inside the yellow rectangle, the distance-recognition process failed.

    I think that the algorithm didn't recognize the target.

  • MartyG

    I ran extensive tests with the target image.  I found that the detection was most likely to fail if the target was not absolutely centered in the yellow box right from the start of when the calibration scan begins, or if the camera is moved slightly after the scan begins so that the dots move towards the edge of the box.


    If the camera is being held in the hand, I found that the best technique was to center the box on the dots after clicking the 'Get' button.  This gives you the opportunity to center the target in the box before scanning begins.  When the dots are centered, hold the camera absolutely still whilst clicking the 'Calculate' button with your other hand to start the scan and do not move the hand that is holding the camera at all until the scan is completed.


    As testing progressed during late afternoon, I found that it was harder to achieve a successful result as the natural lighting level in the room fell over time.

  • Sweetser, John N

    A few suggestions on running the tare function.

    There are really two separate functions involved, and for simplicity it's best to separate them:

    1. Determining proper GT,

    2. Performing the tare calibration procedure.

    I suspect the problem you're having with 1 is related, at least in part, to the projector pattern overlaid onto the GT target. This prevents the reference marks from being detected. The projector should turn off when running the Get GT function, but there may be a bug. I recommend simply manually turning it off for now. In addition, the target does need to be well aligned with the predefined box. The WP shows a few examples of acceptable arrangements. Lighting is another potential problem and this can be addressed with some additional exposure control if needed. 

    To avoid the above potential problems, at least initially, you can Tare in the traditional manner by entering an independently determined GT value. You can use the same method you used to determine the error with the DQT. This will test the Tare process itself. After that, we can make sure the target-based GT part is working properly.

    The standard tare operation is described in the referenced WP, but let us know if you have more questions or problems with it.


  • Sweetser, John N

    A note regarding the ~4mm offset:

    If you are independently determining GT and entering it manually, then you need to include this offset. Once the target-based GT function is working, that offset is automatically accounted for and you do not need to adjust for it.

