Merging two point clouds taken from two D435 cameras
In the program, I was trying to merge point clouds taken from two D435 cameras.
But there is misalignment in the merged point cloud.
In the code,
the program uses a ChArUco board to find a transform matrix from each camera to the world coordinate system.
cv::aruco::detectMarkers() is called first to detect corners and markers.
cv::aruco::interpolateCornersCharuco() is called to interpolate the ChArUco corner positions from the detected markers.
cv::aruco::estimatePoseCharucoBoard() is called to obtain the rotation vector and translation vector, which are both 3x1 vectors.
cv::Rodrigues() is used to convert the rotation vector into a 3x3 rotation matrix, which is then combined with the translation vector to form a 4x4 transform matrix.
RS2_STREAM_DEPTH is aligned to RS2_STREAM_COLOR for each camera using rs2::align's process() function.
At first, I thought it was a viewpoint issue, since the RGB image and the depth image are captured from different viewpoints.
So I used the function get_extrinsics_to() to obtain the extrinsics from RS2_STREAM_DEPTH to RS2_STREAM_COLOR (or vice versa),
and converted extrinsics.rotation and extrinsics.translation into a 4x4 matrix.
I multiplied the transform matrix by this extrinsic matrix to test,
but the misalignment still occurs.
Can someone help me, please?
-
Hi Amosz. When trying to combine RealSense point clouds into a single cloud, a common recommendation is to perform an affine transform: rotate and translate the point clouds in 3D space, and once that is done, append the point clouds together.
In the RealSense SDK, the function rs2_transform_point_to_point performs an affine transform. The link below provides information about implementing it.
https://github.com/IntelRealSense/librealsense/issues/5583#issuecomment-570805578
-
Hello MartyG,
Thanks for the reply.
I did an affine transform in the program.
As you describe, I rotate and translate each point cloud and append them together at the end.
But I didn't use the rs2_transform_point_to_point() function; I used a function from PCL, pcl::transformPointCloud(),
which takes an input point cloud, an output transformed point cloud, and a 4x4 transform matrix.
-
Before proceeding further, may I ask whether PCL and ChArUco are vital to your project? I ask because markerless point cloud stitching with 2 cameras observing from different angles can be achieved using ROS, as described in the Intel guide in the link below.
https://github.com/IntelRealSense/realsense-ros/wiki/Showcase-of-using-2-cameras
-
PCL has the Normal Distributions Transform method of point cloud stitching.
https://pointclouds.org/documentation/tutorials/normal_distributions_transform.html
At a more advanced level, the CONIX Research Center at Carnegie Mellon developed a point cloud stitching system that could stitch clouds from up to 20 RealSense 400 Series cameras over an Ethernet network.