How to keep the pixel size of the target unchanged?
I used a RealSense D435 camera to capture an object while moving from far to near and obtained a 30-second bag recording. I have exported the bag data as RGB images and depth images in a local folder. However, because the camera-to-object distance changes, the object's size in pixels is not uniform across these images.
What I want to do now is use the existing RGB images, depth images, and camera intrinsics to map the RGB images, in which the object appears at different pixel sizes, to a common size.
One idea is to generate a point cloud from the RGB and depth images, fix the depth distance, and project the points back onto the RGB pixel coordinate system. However, when I implemented this with Open3D, the generated point cloud was a bit rough, so the remapped RGB image contained many stripe artifacts.
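Roughly, the approach I tried looks like the sketch below (a simplified NumPy version rather than my actual Open3D code; the intrinsics FX/FY/CX/CY and the reference distance Z_REF are placeholder values). It deprojects every pixel using its depth, forces the point onto a fixed reference depth along the same viewing ray, and projects it back:

```python
import numpy as np

# Placeholder color-stream intrinsics; replace with the values from your camera parameters
FX, FY, CX, CY = 615.0, 615.0, 320.0, 240.0
Z_REF = 0.8  # hypothetical reference distance in meters

def remap_to_reference_depth(rgb, depth_m, z_ref=Z_REF):
    """Move every pixel to a fixed reference depth along its viewing ray.

    rgb:     HxWx3 uint8 color image
    depth_m: HxW float depth in meters (0 = invalid)
    """
    h, w = depth_m.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth_m > 0

    # Pinhole model: a point at depth Z projects at u = cx + fx * X / Z, so forcing the
    # point onto z_ref simply rescales its offset from the principal point by Z / z_ref.
    scale = np.where(valid, depth_m / z_ref, 0.0)
    u_new = np.round(CX + (us - CX) * scale).astype(int)
    v_new = np.round(CY + (vs - CY) * scale).astype(int)

    out = np.zeros_like(rgb)
    keep = valid & (u_new >= 0) & (u_new < w) & (v_new >= 0) & (v_new < h)
    out[v_new[keep], u_new[keep]] = rgb[keep]

    # Forward splatting like this leaves unfilled pixels wherever the object is magnified,
    # which is what shows up as the stripe artifacts; some hole filling is still needed.
    return out
```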
I also tried the RealSense library code for this. However, I am not familiar with the mechanism of self.pc.calculate(depth_frame): self.pc.calculate() only seems to work inside a pipeline, and I only have the RGB images, the depth images, and the camera parameters.
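From the documentation, it looks like rs2_deproject_pixel_to_point() and rs2_project_point_to_pixel() only need an intrinsics object, so something like the following sketch might work with just saved images and parameters (all numeric values are placeholders), but I am not sure whether this is equivalent to pc.calculate():

```python
import pyrealsense2 as rs

# Fill an intrinsics object from the saved camera parameters (placeholder values)
intr = rs.intrinsics()
intr.width, intr.height = 640, 480
intr.fx, intr.fy = 615.0, 615.0
intr.ppx, intr.ppy = 320.0, 240.0
intr.model = rs.distortion.brown_conrady
intr.coeffs = [0.0, 0.0, 0.0, 0.0, 0.0]

# Deproject one pixel with its depth (in meters) into a 3D point in camera coordinates...
point = rs.rs2_deproject_pixel_to_point(intr, [100.0, 200.0], 0.75)

# ...move it to a fixed reference depth along the same ray...
z_ref = 0.8  # hypothetical reference distance
scaled = [c * z_ref / point[2] for c in point]

# ...and project it back to pixel coordinates.
pixel = rs.rs2_project_point_to_pixel(intr, scaled)
print(point, pixel)
```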
Any assistance would be greatly appreciated.
-
Hi 3526810272,
Thanks for reaching out to Intel® RealSense™ Technical Support.
Here are the references for you to implement the self.pc.calculate() function in different languages:
- C++: rs-pointcloud Sample
- Point Cloud Library (PCL): rs-pcl-color Sample
- Python: opencv_pointcloud_viewer.py
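As a minimal sketch (the bag file name and stream settings below are placeholders), pc.calculate() can also be fed from your recorded .bag file by replaying it through a playback pipeline instead of a live camera:

```python
import pyrealsense2 as rs

# Replay the recorded bag instead of streaming from a live camera
pipeline = rs.pipeline()
config = rs.config()
rs.config.enable_device_from_file(config, "recording.bag", repeat_playback=False)
pipeline.start(config)

pc = rs.pointcloud()
align = rs.align(rs.stream.color)

try:
    while True:
        frames = pipeline.wait_for_frames()
        frames = align.process(frames)
        depth_frame = frames.get_depth_frame()
        color_frame = frames.get_color_frame()
        if not depth_frame or not color_frame:
            continue

        # Same call used in the samples: depth frame -> point cloud, textured with color
        pc.map_to(color_frame)
        points = pc.calculate(depth_frame)
        vertices = points.get_vertices()             # one xyz vertex per depth pixel
        tex_coords = points.get_texture_coordinates()  # matching color texture coordinates
except RuntimeError:
    # wait_for_frames() raises once playback reaches the end of the bag
    pass
finally:
    pipeline.stop()
```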
Regards,
Wai Fook
Intel Customer Support