The T265 and depth mapping
Hi,
I'm looking at SLAM methods for ascertaining position and at the same time want to create a world depth view. It seems to me that a combination of visual odometry (offered by the T265) and a depth sensor should be sufficient to do this. However, I am led to believe that the T265 cannot produce a depth map and the suggestion is to use a D400 series device in tandem to get depth. This seems a little odd to me because I assume the T265 picks points to match between the stereo cameras and must therefore know the depth of at least those points. Also, other devices on the market (like the StereoLabs ZED Mini) appear to be able to do both with just two cameras (and no further sensors).
Are you able to discuss why the T265 cannot provide a depth map for the purpose of generating a world map?
Incidentally, I note that the T265 documentation suggests the T265 provides SLAM functionality. In my mind this is not correct: although it provides 6-DoF position, it does not have the capability to map and store the environment (unless I am wrong?).
Thanks for any comments and/or pointers,
Dave
-
Hi Dave,
T265 is not a depth camera and the quality of passive-only depth options will always be limited compared to (e.g.) the D4XX series cameras. However, T265 does have two global shutter cameras in a stereo configuration.
There is an example script (t265_stereo.py) that shows how to use the T265 intrinsics and extrinsics in OpenCV to asynchronously compute depth maps from the T265 fisheye images on the host.
The Intel RealSense Tracking Camera performs inside-out tracking, meaning that it does not depend on external sensors for its understanding of the environment. The tracking is based primarily on information gathered from two onboard fish-eye cameras, each with approximately a 163-degree range of view (±5 degrees) and performing image capture at 30 frames per second. The wide field of view from each camera sensor helps keep points of reference visible to the system for a relatively long time, even if the platform is moving quickly through space.
A key strength of visual-inertial odometry is that the various sensors available complement each other. The images from the visual sensors are supplemented by data from an onboard inertial measurement unit (IMU), which includes a gyroscope and accelerometer. The aggregated data from these sensors is fed into simultaneous localization and mapping (SLAM) algorithms running on the Intel Movidius Myriad 2 VPU for visual-inertial odometry.
Regards,
Yu-Chern