Ground truth accuracy problem
Good morning,
I'm developing a high-accuracy application with the Intel RealSense D435i. In the system (camera + ICP algorithm), I need to reach an accuracy of 0.2–0.3 mm.
The system shows very good repeatability, but I notice a problem with the ground truth estimated by the camera.
As shown in the images, the disparity between the ground truth and the estimated distance increases with distance from the wall.
Do you have any idea how to fix this problem, or where the error comes from?
-
With RealSense 400 Series cameras, the error in depth measurements starts at around zero at the camera position and grows as the observed object / surface becomes farther from the camera (the tuning paper models it as growing with the square of the distance). This is called RMS error. It is discussed in Point 5 of the section of Intel's camera tuning paper linked to below.
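As a rough illustration, the tuning paper's RMS error model can be sketched in a few lines of Python. The baseline, focal length and subpixel values below are nominal D435i assumptions for illustration, not values read from your particular camera:

```python
# Sketch of the RMS depth error model from Intel's tuning paper:
#   rms(z) ~ z^2 * subpixel / (focal_px * baseline_mm)
# Nominal D435i assumptions: 50 mm baseline, ~421 px focal length at
# 848x480, 0.08 subpixel. Real units may differ slightly per device.

def rms_error_mm(z_mm, baseline_mm=50.0, focal_px=421.0, subpixel=0.08):
    """Theoretical RMS depth error (mm) at distance z_mm."""
    return (z_mm ** 2) * subpixel / (focal_px * baseline_mm)

for z in (200.0, 500.0, 1000.0):
    print(f"z = {z:6.0f} mm -> expected RMS error ~ {rms_error_mm(z):.2f} mm")
```

Under these assumptions the model predicts well under 1 mm of RMS error at 500 mm, so a multi-millimeter offset at that range points to something other than ordinary depth noise.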
You may be able to reduce RMS error on your D435i if the scene that the camera is observing is well illuminated, as you could then disable the camera's projector. The projector casts an infrared dot pattern onto the scene to aid depth analysis, but it can also increase noise in the image. If the lighting in the scene is good, the camera can use that light for depth analysis of objects / surfaces instead of the dot pattern.
If switching to the RealSense D455 camera model is an option, it has twice the accuracy over distance of the D435i, meaning that at 6 meters distance the D455 has the same accuracy that the D435i does at 3 meters.
-
Thank you for the quick answer. Maybe I can explain the problem better: I don't have problems with camera noise, but the estimated mean plane is in a different position than the measured one.
For example, I have points, estimated by the camera, that measure between 197.4 mm and 197.8 mm, but their real value is 200 mm.
I'm looking to fix the difference between the estimated mean plane and the plane that I measured in the experimental setup.
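As a sketch of how that offset could be quantified, here is a least-squares mean-plane fit over a synthetic point cloud chosen to mimic the numbers above (NumPy assumed; the points are invented for illustration):

```python
import numpy as np

# Least-squares fit of a plane z = a*x + b*y + c to camera points, then the
# bias of the fitted mean plane against the measured ground-truth plane.
# Synthetic data mimicking the post: camera estimates spread around
# 197.4-197.8 mm against a wall measured at 200 mm.

rng = np.random.default_rng(0)
x = rng.uniform(-50, 50, 500)
y = rng.uniform(-50, 50, 500)
z = 197.6 + rng.uniform(-0.2, 0.2, 500)   # camera-estimated depths (mm)

A = np.column_stack([x, y, np.ones_like(x)])
(a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)

ground_truth_mm = 200.0
bias_mm = ground_truth_mm - c             # offset of fitted plane at x = y = 0
print(f"fitted mean plane z ~ {c:.2f} mm, bias vs ground truth: {bias_mm:.2f} mm")
```

Separating the fit residuals (noise) from the constant `bias_mm` term matches the observation in the post: repeatability is good, but the whole fitted plane sits a couple of millimeters from the measured one.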
-
A RealSense depth measurement will not be exactly the same as a real-world measurement taken with a tape measure. Aside from RMS error factoring into the measurement, there may also be additional error caused by environmental and lighting factors in the location where the camera is used, or by elements in the scene that confuse the depth algorithm or make a surface more difficult to read accurately.
RealSense camera models with front glass, such as the D435i, also take their depth measurement from the front glass instead of from the camera lenses inside the camera. On the D435i, -4.2 mm should be added to depth measurements to find the ground truth measurement at the lenses.
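A minimal sketch of applying that correction, using the -4.2 mm figure quoted above (sign convention follows the reply: the offset is added to the reported depth):

```python
# D435i reports depth from the front glass; per the note above, adding
# -4.2 mm refers the measurement to the lenses inside the camera.

GLASS_OFFSET_MM = -4.2

def depth_at_lenses(depth_from_glass_mm):
    """Shift a glass-referenced depth reading to the lens plane."""
    return depth_from_glass_mm + GLASS_OFFSET_MM

print(f"{depth_at_lenses(197.6):.1f} mm")
```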
-
A 17 mm error is possible if the camera is having difficulty reading depth detail from the scene. Examples of disruptive elements could be reflections, dark grey / black surface colors, or confusion in the depth algorithm caused by horizontal or vertical arrangements of similar-looking objects (such as a horizontal row of fence posts, a vertical stack of window-blind slats or a tiled floor / ceiling), known as a repetitive pattern. Fluorescent light sources such as ceiling strip lights could also cause noise. There are methods to reduce the negative effects of these disruptive elements, though.
It would be helpful if you could provide an RGB image of the scene as this will help to diagnose potential elements in the scene that could be causing error in the depth image.
-
The depth image looks excellent. The RGB image of the white wall looks more grey though, suggesting to me that the illumination in that wall area might be dim. Low lighting can change how visible the dot pattern is to the camera on some materials during different phases of the day. For example, the pattern may be clearly visible to the camera in the morning but almost disappear later in the day as the natural lighting changes.
The camera needs to be able to see the dots in order to use them as a texture source on a surface to analyze that surface for depth information. If the pattern is not clearly visible then the camera may need to rely on strong ambient lighting instead to perform depth analysis of the surface.
-
The increased lighting on the wall in the above image is likely to be sufficient for good quality depth analysis.
I went back to the pair of images at the top of this case and note that you achieve significantly better accuracy results when the camera is at 200 mm / 20 cm from the wall than when it is at around 500 mm / 50 cm.
Could you test whether increasing the Laser Power option from its default value of '156' to its maximum of '360' improves accuracy at 500 mm distance please, as maximum laser power should increase the visibility to the camera of the projected dot pattern on the wall.
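If you are scripting the test rather than using the Viewer, a hedged pyrealsense2 sketch might look like the following. It assumes the pyrealsense2 package and a connected D400-series camera; the valid option range is read back from the sensor rather than hard-coded, with 360 used as the requested maximum per the Viewer's range:

```python
# Hedged sketch: raise the projector's Laser Power option to its maximum.
# Assumes pyrealsense2 and a connected D400-series camera. The option
# range (0-360, default 156 on D435i per the Viewer) is queried from the
# sensor so an out-of-range request cannot be sent.

def clamp(value, lo, hi):
    """Keep a requested option value inside the sensor's reported range."""
    return max(lo, min(value, hi))

try:
    import pyrealsense2 as rs
except ImportError:
    rs = None  # SDK not installed here; the clamp helper is still usable

if rs is not None:
    try:
        pipe = rs.pipeline()
        profile = pipe.start()
        sensor = profile.get_device().first_depth_sensor()
        rng = sensor.get_option_range(rs.option.laser_power)
        sensor.set_option(rs.option.laser_power, clamp(360.0, rng.min, rng.max))
        pipe.stop()
    except RuntimeError:
        pass  # no camera connected
```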
-
Hello,
I can try to help with the accuracy issue you are reporting. Based on your observations, I suspect the underlying issue is related to the calibration of the camera. There are a few methods for resolving this but first a few comments/questions:
- The various settings that you have been adjusting will affect different aspects of the depth quality but will have very little impact on the average Z values (or absolute accuracy).
- Before attempting to make calibration adjustments it's important to confirm the magnitude of the errors you're observing. The ~2% error at ~500mm is based on a ground truth reference value (e.g., 513mm). Of course, the validity of the Z accuracy you determine will be limited by the accuracy of your GT. How was this GT measured and how confident are you that it's correct (to within 1-2mm)?
- Assuming that the GT is sufficiently accurate and there is a >2% error at ~500mm (which is beyond expectation), there are a few approaches to improving this.
- The ideal method is to perform a complete re-calibration, equivalent to the original factory calibration. This "OEM" calibration requires a large target. If you have this, then that is the recommended approach since it should optimize the overall performance, including accuracy.
- Assuming you do not have OEM calibration capability, you can run a version of self-calibration that specifically addresses Z accuracy. It's called "Tare" and we can provide information on this if you need it.
- You may also run dynamic calibration using a GT target, which is another user calibration method.
- You mention that you need ~0.2mm accuracy. This will be very difficult or impossible to achieve in practice for a few reasons (true GT is very difficult to obtain to better than ~1mm, and the camera precision is generally no better than a few tenths of a mm depending on operating conditions). If your requirement is for this level of "relative" accuracy (depth differences between objects), it could be feasible.
We will wait for your response and proceed from there.
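The "relative" accuracy point above can be sketched with invented numbers: a systematic bias that is shared by two measurements at similar distances cancels when you take their difference, even though it limits absolute accuracy:

```python
# Illustration of relative vs absolute accuracy. A systematic depth bias
# (e.g., from miscalibration) affects two nearby surfaces roughly equally,
# so it cancels in the difference. All numbers are invented.

bias_mm = 2.4                      # shared systematic error
true_a, true_b = 500.0, 490.0      # true distances of two surfaces (mm)

meas_a = true_a - bias_mm          # both measurements carry the same bias
meas_b = true_b - bias_mm

abs_error = abs(meas_a - true_a)                         # limited by the bias
rel_error = abs((meas_a - meas_b) - (true_a - true_b))   # bias cancels

print(f"absolute error: {abs_error:.1f} mm, relative error: {rel_error:.1f} mm")
```

This cancellation only holds while the bias is (approximately) the same for both measurements, which is why the distance-dependent error discussed in this thread still matters for relative measurements taken at different ranges.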
-
Hello,
Sorry for the late reply. I can use a relative measurement, but I have a ground truth error that increases with distance, so I think I will have the same problem with relative accuracy unless I have two targets at the same distance.
I don't have a target for an OEM calibration; can I have more information about Tare calibration?
Could a new calibration be decisive in reducing the ground truth error over distance?
-
Further information about Tare can be found in Intel's self calibration white-paper guide at the links below.
https://dev.intelrealsense.com/docs/self-calibration-for-depth-cameras#31-running-the-tare-routine
Within that guide, a printable target for Tare ground-truth calibration and focal length calibration can be found here:
Yes, a ground truth calibration can make a positive difference. A RealSense team member with specialist knowledge of calibration provides ground truth advice at the following links:
https://github.com/IntelRealSense/librealsense/issues/10213#issuecomment-1030589898
https://support.intelrealsense.com/hc/en-us/community/posts/6285037530259/comments/6381786011667
-
You should not have to manually add a GT value when performing Tare calibration in the RealSense Viewer. Point the camera at the printed target and left-click on the Get button to generate your own ground truth value for the scene that the camera is in instead of using the default '1200'.
In the RealSense calibration specialist's advice, they state: "The default value of 1200 is arbitrary. In normal usage, you should use a value that you know is correct (or as close as possible to true distance). The Tare process will then attempt to modify the calibration so that the measured Z equals the entered GT Z. If these values are very different, it may fail to converge".
On the next panel that appears after clicking Get, center the yellow box on the target and click the Calibrate button.
-
I ran extensive tests with the target image. I found that the detection was most likely to fail if the target was not absolutely centered in the yellow box right from the start of when the calibration scan begins, or if the camera is moved slightly after the scan begins so that the dots move towards the edge of the box.
If the camera is being held in the hand, I found that the best technique was to center the box on the dots after clicking the 'Get' button. This gives you the opportunity to center the target in the box before scanning begins. When the dots are centered, hold the camera absolutely still whilst clicking the 'Calculate' button with your other hand to start the scan and do not move the hand that is holding the camera at all until the scan is completed.
As testing progressed during late afternoon, I found that it was harder to achieve a successful result as the natural lighting level in the room fell over time.
-
A few suggestions on running the tare function.
There are really two separate functions involved and for simplicity, it's best to separate them:
1. Determining proper GT,
2. Performing the tare calibration procedure.
I suspect the problem you're having with step 1 is related, at least in part, to the projector pattern being overlaid onto the GT target, which prevents the reference marks from being detected. The projector should turn off when running the Get GT function, but there may be a bug. I recommend simply turning it off manually for now. In addition, the target does need to be well aligned with the predefined box. The WP shows a few examples of acceptable arrangements. Lighting is another potential problem, and this can be addressed with some additional exposure control if needed.
To avoid the above potential problems, at least initially, you can Tare in the traditional manner by entering an independently determined GT value. You can use the same method you used to determine the error with the DQT. This will test the Tare process itself. After that, we can make sure the target-based GT part is working properly.
The standard tare operation is described in the referenced WP, but let us know if you have more questions or problems with it.