RealSense D435i incorrect distance measurement for a certain object
Hello,
I am using an RS-D435i as the main V-SLAM sensor on an AGV that operates in a factory environment.
The problem is that the D435i cannot correctly measure the distance to the crane rail in our factory, and the issue appears only on this particular object.
The crane rail, which is actually quite far away, is reported by the sensor as a nearby object, so the traverse-discrimination check in our program is triggered.
You can see the phenomenon in the following pictures, which show the measurement result from realsense-viewer under Ubuntu 16.04 and the real crane; the mistaken parts are marked with red boxes.
I tried the "High Accuracy" preset, and I also tried setting a fairly high value (over 800) for "Advanced Controls -> Depth Control -> DS Second Peak Threshold" while in the "Default" / "High Density" presets. Both approaches reduce the area of the mistaken region, but the D435i still randomly returns wrong distance values for the crane rail in certain poses.
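For reference, my understanding is that the same settings can also be applied through the Python wrapper's advanced-mode interface instead of the Viewer. The snippet below is only a rough sketch of that approach (I actually changed the values in realsense-viewer; the preset enum and the deepSeaSecondPeakThreshold field are as I understand the librealsense advanced-mode API, and advanced mode must already be enabled on the device):

```python
# Rough sketch: applying the "High Accuracy" preset and raising the
# DS Second Peak Threshold through the Python wrapper (pyrealsense2).
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()
device = profile.get_device()

# Visual preset (equivalent to selecting "High Accuracy" in realsense-viewer).
depth_sensor = device.first_depth_sensor()
depth_sensor.set_option(rs.option.visual_preset,
                        int(rs.rs400_visual_preset.high_accuracy))

# Raise the second-peak threshold via advanced mode
# (equivalent to Advanced Controls -> Depth Control -> DS Second Peak Threshold).
# Advanced mode is assumed to be already enabled on the device.
advanced = rs.rs400_advanced_mode(device)
depth_control = advanced.get_depth_control()
depth_control.deepSeaSecondPeakThreshold = 800
advanced.set_depth_control(depth_control)

pipeline.stop()
```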
So my questions are:
a. Is this a common problem for the D435i (not being able to get correct depth values for surfaces with periodic textures)?
b. Is there any parameter I can use to fix this problem?
c. Is there any way to filter or preprocess the IR frames before they are used for the stereo processing? (Though I do not think this part is exposed to the user.)
d. Perhaps some suggestions for modifying my traverse discrimination, which currently checks whether a large object is present in a volume 2 meters long, 1.4 meters wide, and 1.6 meters high in front of the camera continuously for 100 milliseconds (a rough sketch of this check follows below).
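For clarity, the check works roughly like this (a simplified sketch, not my actual code; the box dimensions and the 100 ms persistence come from the description above, while the point-count threshold and everything else are only illustrative):

```python
# Simplified sketch of the traverse-discrimination check described above.
import time
import numpy as np
import pyrealsense2 as rs

BOX_DEPTH, BOX_WIDTH, BOX_HEIGHT = 2.0, 1.4, 1.6   # meters, camera frame
PERSISTENCE_S = 0.1                                 # 100 ms
MIN_OBSTACLE_POINTS = 500                           # illustrative threshold

pipeline = rs.pipeline()
pipeline.start()
pc = rs.pointcloud()

first_hit_time = None
try:
    while True:
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        if not depth:
            continue

        # Deproject the depth frame into an Nx3 array of camera-space points (meters).
        points = pc.calculate(depth)
        xyz = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)

        # Count points inside the box in front of the camera
        # (camera frame: +x right, +y down, +z forward).
        inside = (
            (xyz[:, 2] > 0.0) & (xyz[:, 2] < BOX_DEPTH) &
            (np.abs(xyz[:, 0]) < BOX_WIDTH / 2.0) &
            (np.abs(xyz[:, 1]) < BOX_HEIGHT / 2.0)
        )
        blocked = np.count_nonzero(inside) > MIN_OBSTACLE_POINTS

        # Trigger only if the box stays occupied for 100 ms continuously.
        now = time.monotonic()
        if blocked:
            first_hit_time = first_hit_time or now
            if now - first_hit_time >= PERSISTENCE_S:
                print("traverse discrimination triggered")
        else:
            first_hit_time = None
finally:
    pipeline.stop()
```

The false nearby readings on the crane rail put phantom points inside this box, which is why the check gets triggered.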
Thanks in advance for any answers; I would be glad to hear from anyone who has experienced a similar problem.
with best regards,
Changchen XIANG
-
The reading of depth from repetitive patterns is a problem common to all the 400 Series stereo depth cameras. The subject was discussed in a different case yesterday, where the RealSense SDK Manager offers advice about compensating for it (please read downward through the comments from the point that I linked to).
https://github.com/IntelRealSense/librealsense/issues/6713#issuecomment-651114720
I hope that the comments, and the papers that the SDK Manager provides links to, will cover all of the questions A-D that you asked above.
-
Hi Marty,
I placed two objects of different sizes at different known distances, facing the center of the camera, and compared them with the depth distances obtained from the Python wrapper.
My problem is how to measure the distance of objects that are placed away from the camera axis.
I am attaching an image showing the method I used to measure the distance manually.
Can you please tell me whether I am doing the right thing?
-
This seems like the kind of use case that would suit a Deep Neural Network (DNN) object recognition application, which can identify multiple objects simultaneously even if they are not at the center of the camera's view.
The link below leads to a list of such programs for Python:
https://github.com/IntelRealSense/librealsense/issues/3086#issuecomment-455181310
The programs in that list seem to return the % confidence (0 to 100) of correct recognition of an object, though, rather than its distance. One of the RealSense-compatible DNN examples in the RealSense SDK that returns the detected depth of multiple objects may be of more use.
This one is based on C++ and the OpenVINO Toolkit:
https://github.com/IntelRealSense/librealsense/tree/master/wrappers/openvino/dnn
Whilst this one is based on C++ and OpenCV:
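As a side note, if the goal is simply to measure the true distance to a point that is away from the camera axis (rather than to recognize objects), one possible approach is to deproject the pixel of interest into a 3D point with rs2_deproject_pixel_to_point and take the Euclidean norm of that point. The sketch below illustrates this with the Python wrapper; the pixel coordinates are only placeholders for wherever the object appears in the image:

```python
# Rough sketch: straight-line distance to an off-axis pixel using pyrealsense2.
import math
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()

try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    intrinsics = depth.profile.as_video_stream_profile().get_intrinsics()

    u, v = 100, 200                       # placeholder pixel of the off-axis object
    z = depth.get_distance(u, v)          # depth along the camera's Z axis (meters)

    # Deproject the pixel into a 3D point in the camera frame and take its norm,
    # which is the straight-line distance even when the point is off-axis.
    point = rs.rs2_deproject_pixel_to_point(intrinsics, [u, v], z)
    distance = math.sqrt(point[0] ** 2 + point[1] ** 2 + point[2] ** 2)
    print("straight-line distance: %.3f m" % distance)
finally:
    pipeline.stop()
```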