On-Chip Un-distortion of RGB or On-PC?
Hi,
To better understand the computational demand of the RealSense Dxx series, I need to know whether RGB un-distortion (warping of the RGB image) happens on the device or on the receiver end (i.e., the RealSense SDK performing the un-distortion on the host CPU/GPU, using the calibration data stored in the device)?
For depth, I believe it is done on-chip, but it would be nice if someone could confirm that too (are depth/IR frames undistorted on-chip or on the PC)?
[Edit: I am specifically using Realsense D455]
Thank you.
-
Hi Pouryah. Most of the streams on RealSense 400 Series cameras are rectified and have a distortion model applied to them. The 'Y16' infrared stream format is unrectified, though.
The rectification is calculated and applied in the camera hardware by a chip called the Vision Processor D4. The link below leads to a PDF document describing the features of the Vision Processor D4.
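For reference, the distortion model reported in a stream's intrinsics on these cameras is a Brown-Conrady polynomial. Below is a minimal pure-Python sketch of the forward model; the function name is illustrative, and the coefficient order `(k1, k2, p1, p2, k3)` follows how the RealSense SDK documents `rs2_intrinsics.coeffs`. Rectified streams typically report all-zero coefficients, meaning no further warping is needed on the host.

```python
# Illustrative sketch of the Brown-Conrady distortion polynomial used in
# RealSense stream intrinsics. Rectified streams typically report all-zero
# coefficients, so applying the model to them is a no-op.

def brown_conrady_distort(x, y, coeffs):
    """Map an undistorted normalized point (x, y) to its distorted position.

    coeffs = (k1, k2, p1, p2, k3): radial (k*) and tangential (p*) terms,
    in the order the RealSense SDK stores them in rs2_intrinsics.coeffs.
    """
    k1, k2, p1, p2, k3 = coeffs
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d

# All-zero coefficients (as on a rectified stream) leave the point unchanged.
print(brown_conrady_distort(0.25, -0.1, (0.0, 0.0, 0.0, 0.0, 0.0)))
# -> (0.25, -0.1)

# Non-zero coefficients shift the point; if the device had not already
# rectified the stream, the host would have to undo this warp per pixel.
print(brown_conrady_distort(0.25, -0.1, (0.1, 0.0, 0.0, 0.0, 0.0)))
```

This is what "applying a distortion model" costs per pixel, which is the work the Vision Processor D4 saves the host from doing.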
Because the Vision Processor D4 hardware inside the camera performs processing that would otherwise have to be done on the computer, RealSense 400 Series cameras can work on low-power computing devices with modest specifications and a low-end GPU.
When post-processing filters are applied to camera data, these are calculated on the computer's CPU and so can impose a processing burden.
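To make the CPU-side cost concrete, here is a simplified sketch of a decimation-style depth filter running on the host. The function name and the 2x2 non-zero-median rule are illustrative simplifications, not the SDK's exact algorithm, but the actual SDK decimation filter does similar per-pixel work on the CPU.

```python
# Simplified host-side sketch of a decimation-style post-processing filter:
# downsample a depth image by taking the median of the non-zero values in
# each 2x2 block (zero means "no depth reading"). The real SDK filter is
# more elaborate, but this shows the kind of work the CPU takes on.
from statistics import median

def decimate_2x2(depth):
    """Downsample a list-of-lists depth image by a factor of 2."""
    out = []
    for r in range(0, len(depth) - 1, 2):
        row = []
        for c in range(0, len(depth[0]) - 1, 2):
            block = [depth[r][c], depth[r][c + 1],
                     depth[r + 1][c], depth[r + 1][c + 1]]
            valid = [v for v in block if v > 0]  # ignore invalid (zero) depth
            row.append(int(median(valid)) if valid else 0)
        out.append(row)
    return out

frame = [[0, 500, 502, 0],
         [498, 0, 0, 504],
         [600, 602, 0, 0],
         [604, 0, 0, 0]]
print(decimate_2x2(frame))  # -> [[499, 503], [602, 0]]
```

Every filter in a chain like this touches every pixel of every frame, which is why stacking several post-processing filters can noticeably load a weak host CPU.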
It is possible to offload some of the processing burden from the CPU onto the GPU, either with the SDK's GLSL processing blocks, which work with any brand of GPU, or with CUDA, which works only with Nvidia GPUs such as the one on Nvidia Jetson computing boards.