I am trying to build a 3D point cloud using the L515 camera.
Is there a way to get the accelerometer data directly, so that I can tell the position and rotation of the camera between two frames? That way I could register all the frames in one coordinate system and build a 3D model.
I didn't find anything for that in librealsense ...
I think there is an example for extracting the IMU data that you can refer to.
Here is the examples page for the depth cameras provided by the RealSense SDK.
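A minimal sketch of reading raw IMU samples with pyrealsense2, the SDK's Python binding (this assumes a connected L515; the stream format below is a common choice, not something confirmed in this thread, so treat it as an outline rather than a verified recipe):

```python
import pyrealsense2 as rs

# Sketch: stream raw accelerometer and gyro samples from the device.
# Requires a connected L515; the format chosen below is an assumption.
pipe = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.accel, rs.format.motion_xyz32f)
cfg.enable_stream(rs.stream.gyro, rs.format.motion_xyz32f)
pipe.start(cfg)

try:
    for _ in range(50):
        frames = pipe.wait_for_frames()
        accel = frames.first_or_default(rs.stream.accel)
        gyro = frames.first_or_default(rs.stream.gyro)
        if accel:
            a = accel.as_motion_frame().get_motion_data()
            print("accel (m/s^2):", a.x, a.y, a.z)
        if gyro:
            g = gyro.as_motion_frame().get_motion_data()
            print("gyro (rad/s):", g.x, g.y, g.z)
finally:
    pipe.stop()
```

Note that these are raw sensor readings only; as discussed below, the SDK does not turn them into a pose for you.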
Sorry about that.
You can get the accelerometer data directly, but there is no position data, as that would have to be derived from the raw data.
The IMU in the L515 provides gyro and accelerometer data but not pose data. The only camera that provides pose data is the T265. If you want pose data from the L515, you will have to calculate it on the host, for example by double integrating the acceleration, though there is a gravitational component in the acceleration that you may need to subtract or filter out.
Thank you for your answer.
How accurate will the calculation of the pose from the raw data be?
There is a small inaccuracy in the accelerometer readings, and double integrating will make that inaccuracy bigger.
Also, all the data available from the component is a function of time, so the longer the time span, the bigger the inaccuracy.
Am I missing something?
I could not estimate the accuracy.
Yes, you are correct: the inaccuracy would be problematic.
There is an example of a similar implementation of your goal, but it requires the addition of a T265 camera.
There is no readily available feature on the L515 to tell the position of the camera between 2 frames.