I am part of a team developing an IoT device with an Intel RealSense D410 (sometimes a D415) connected to an SBC running Yocto/Linux. In our setup we would like to calibrate the device once everything is assembled, so we are currently developing a Python script that calibrates the D410 using OpenCV. As a result, we have been looking into how we can transfer the calibration values into the D410, preferably without using the librscalibrationapi, since that would be an additional package we would have to include and maintain in our Yocto build.
1. Will the tm2-class set_extrinsics and set_intrinsics functions (or similar) be available for the D410 in the near future?
2. Is there a way for us to translate OpenCV calibration values into the D410 calibration table?
3. Are we missing something and/or overthinking this?
What we have found so far is that if it were a "tm2" device (tm2 refers to the pyrealsense2 class https://intelrealsense.github.io/librealsense/python_docs/_generated/pyrealsense2.tm2.html#pyrealsense2.tm2) we could use the set_extrinsics and set_intrinsics functions. Our first question is therefore: will similar functions be available for the D410 (which we believe is an "auto_calibrated_device") at some point in the near future?
A second observation is that it is possible to read out and write back a "calibration table" (using get_calibration_table, set_calibration_table, and write_calibration - https://intelrealsense.github.io/librealsense/python_docs/_generated/pyrealsense2.auto_calibrated_device.html#pyrealsense2.auto_calibrated_device). This table is also accessible from the Intel.Realsense.CustomRW tool and seems intended for use with the auto-calibration features of the D410. Our second question is therefore: is there a way for us to translate the calibration values from OpenCV (focal lengths, principal points, and distortions) into the D410 calibration table?
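To make the question concrete, here is a minimal sketch of the value-level translation we have in mind. It maps the outputs of OpenCV's calibrateCamera onto the intrinsic fields librealsense uses (fx, fy, ppx, ppy, and Brown-Conrady distortion coefficients); the exact byte layout of the D410 calibration table is firmware-defined and not reproduced here, so the dict below and the function name are our own illustration, not an official format:

```python
# Sketch: translate OpenCV calibration values into the fields librealsense
# exposes via rs2_intrinsics. This does NOT produce the raw D410 table bytes;
# it only shows the value-level mapping we are asking about.

def opencv_to_rs_intrinsics(camera_matrix, dist_coeffs, width, height):
    """camera_matrix: 3x3 from cv2.calibrateCamera; dist_coeffs: (k1, k2, p1, p2, k3)."""
    return {
        "width": width,
        "height": height,
        "fx": camera_matrix[0][0],   # OpenCV stores fx at K[0][0]
        "fy": camera_matrix[1][1],   # fy at K[1][1]
        "ppx": camera_matrix[0][2],  # principal point cx at K[0][2]
        "ppy": camera_matrix[1][2],  # cy at K[1][2]
        # OpenCV's (k1, k2, p1, p2, k3) order appears to match the order
        # librealsense documents for its Brown-Conrady model.
        "coeffs": list(dist_coeffs[:5]),
    }

# With a device attached, the table round-trip would presumably look like the
# following (untested here; names taken from the pyrealsense2 docs linked above):
#
#   import pyrealsense2 as rs
#   dev = rs.context().query_devices()[0]
#   adev = rs.auto_calibrated_device(dev)
#   table = adev.get_calibration_table()   # raw calibration table bytes
#   # ...patch the intrinsic fields in `table` per the (undocumented?) layout...
#   adev.set_calibration_table(table)
#   adev.write_calibration()               # persist to the device's flash
```

The open part for us is precisely the "patch the intrinsic fields" step: without a documented table layout we cannot place these values into the blob returned by get_calibration_table.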
The third point is simply: are we missing something? We have been looking into this to simplify the assembly/production process of the device. We are still working out the final process (how many images/positions, the physical setup, etc.), but maybe we are approaching this in completely the wrong way?
Hopefully you can help us.