Can I assemble a better stereo camera than the D455?
Hello,
We have been successfully using the D455 camera for long-range (7 meter) applications and it performs really well.
Nonetheless, I have been wondering for a while whether it would be possible to connect (sync) two D455 cameras mounted on a rigid support about 1 meter apart, effectively increasing the distance between the sensors to about 1 meter, in order to reduce the depth error even further. Moreover, these two D455s might also produce a more complete 3D representation of objects.
As a reference, the D455 has a baseline (distance between its depth sensors) of 95 mm and a depth error of less than 2% at 4 m.
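For context, here is the quick back-of-the-envelope sketch I used to reason about this, based on the standard stereo error model (depth error ≈ Z² × disparity error / (focal length in pixels × baseline)). The focal length and disparity error values below are placeholders, not D455 calibration data:

```python
# Rough sketch of the standard stereo error model, comparing the D455's
# 95 mm baseline against a hypothetical 1 m baseline.
# focal_px and disparity_error_px are illustrative placeholder values.
def depth_error(z_m, baseline_m, focal_px=640.0, disparity_error_px=0.08):
    """Approximate depth error (m) at range z_m for a given baseline (m)."""
    return (z_m ** 2) * disparity_error_px / (focal_px * baseline_m)

for baseline in (0.095, 1.0):
    err = depth_error(4.0, baseline)
    print(f"baseline {baseline * 100:.0f} cm -> ~{err * 100:.1f} cm error at 4 m")
```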
Any thoughts are welcome!
Thank you!
Mauricio
-
Hi Mauricio,
If your intention is to capture data from two cameras and combine their individual viewpoints from different locations and perspectives into a single one, then that can certainly be achieved. For example, if you generate a point cloud from each camera, the two clouds can be 'stitched' together and an affine transform applied to position and rotate them into a common 3D space. In this way, a 360 degree scan can be produced.
The RealSense volumetric capture article linked below provides an excellent introduction to the concept of merging multiple cameras into a combined final image.
https://www.intelrealsense.com/intel-realsense-volumetric-capture/
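As a very rough sketch of that stitching process, assuming pyrealsense2 and Open3D are installed: the serial numbers below are placeholders, and the 4x4 transform (here just a 1 m translation) must come from your own extrinsic calibration of the rig, not from these example values.

```python
# Sketch: capture a point cloud from each of two D455s and merge them
# into one 3D space with a rigid (affine) transform.
import numpy as np
import pyrealsense2 as rs
import open3d as o3d

def capture_cloud(serial):
    """Grab one depth frame from the camera with this serial and return Nx3 vertices."""
    pipe = rs.pipeline()
    cfg = rs.config()
    cfg.enable_device(serial)
    cfg.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
    pipe.start(cfg)
    try:
        frames = pipe.wait_for_frames()
        depth = frames.get_depth_frame()
        pc = rs.pointcloud()
        points = pc.calculate(depth)
        return np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)
    finally:
        pipe.stop()

# Placeholder serial numbers - replace with your cameras' serials.
verts1 = capture_cloud("CAM1_SERIAL")
verts2 = capture_cloud("CAM2_SERIAL")

# Placeholder transform: camera 2 mounted ~1 m to the right of camera 1,
# no rotation. Replace with your calibrated extrinsics.
T_cam2_to_cam1 = np.eye(4)
T_cam2_to_cam1[0, 3] = 1.0

cloud1 = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(verts1))
cloud2 = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(verts2))
cloud2.transform(T_cam2_to_cam1)   # bring cloud 2 into camera 1's frame

merged = cloud1 + cloud2           # the 'stitched' combined cloud
o3d.visualization.draw_geometries([merged])
```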
-
Thank you for the prompt response Marty!
Also thank you for the article, it is definitely useful.
I am also interested in increasing the distance between the depth sensors to more than 95 mm.
Do you know if it is viable to use two D455 cameras as a single stereo camera with increased depth accuracy?
I was thinking of mounting them 1 meter apart at least.
-
The baseline distance of RealSense cameras is fixed. You can, though, place two or more cameras in a horizontal row or a vertical stack and combine their individual overlapping fields of view (FOV) into a single larger one.
This may improve the depth image by increasing the density of analyzable dots cast onto a particular area by the cameras' projectors. It also adds redundancy to the depth data, since more than one camera observes the same area where the FOVs overlap.
Furthermore, the more cameras you use, the fewer blind spots there are in the observed scene. For example, you may be able to create 360 degree coverage of a scene with a circle of six inward-facing cameras, though more would be better.
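If you do build a multi-camera rig, a minimal sketch of enumerating the attached cameras and enabling hardware sync could look like the following. The master/slave sync-mode values (1 and 2) follow Intel's multi-camera guidance and should be checked against your firmware version:

```python
# Sketch: enumerate attached RealSense cameras and configure hardware sync
# so that overlapping cameras capture on a shared trigger.
import pyrealsense2 as rs

ctx = rs.context()
devices = list(ctx.query_devices())
print(f"Found {len(devices)} RealSense device(s)")

for i, dev in enumerate(devices):
    serial = dev.get_info(rs.camera_info.serial_number)
    depth_sensor = dev.first_depth_sensor()
    if depth_sensor.supports(rs.option.inter_cam_sync_mode):
        mode = 1 if i == 0 else 2   # first camera as master, the rest as slaves
        depth_sensor.set_option(rs.option.inter_cam_sync_mode, mode)
        print(f"{serial}: inter-camera sync mode set to {mode}")
```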
You can also add physical optical filter products over the lenses on the outside of the camera to further customize the camera to your project's particular needs. Intel have published a white-paper document on this subject.
https://dev.intelrealsense.com/docs/optical-filters-for-intel-realsense-depth-cameras-d400
There are ways in which the depth map can be enhanced through software settings too. An example of this is Intel's white-paper about improving depth on drones.
https://dev.intelrealsense.com/docs/depth-map-improvements-for-stereo-based-depth-cameras-on-drones
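As one illustrative software-side example (not the drone whitepaper's exact settings), librealsense's standard post-processing filters can be chained to clean up the depth map. The filter order and defaults below are illustrative:

```python
# Sketch: apply librealsense's built-in post-processing filters
# (decimation, disparity-domain spatial/temporal smoothing, hole filling).
import pyrealsense2 as rs

pipe = rs.pipeline()
pipe.start()

decimation = rs.decimation_filter()            # downsample to reduce noise
depth_to_disparity = rs.disparity_transform(True)
spatial = rs.spatial_filter()                  # edge-preserving smoothing
temporal = rs.temporal_filter()                # smooth across frames
disparity_to_depth = rs.disparity_transform(False)
hole_filling = rs.hole_filling_filter()

try:
    for _ in range(30):
        frames = pipe.wait_for_frames()
        depth = frames.get_depth_frame()
        depth = decimation.process(depth)
        depth = depth_to_disparity.process(depth)
        depth = spatial.process(depth)
        depth = temporal.process(depth)
        depth = disparity_to_depth.process(depth)
        depth = hole_filling.process(depth)
finally:
    pipe.stop()
```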