Nvidia Jetson: Writing RGB data directly to GPU for neural network processing while keeping depth data on CPU
Hello everyone.
I am currently working with an Nvidia Jetson TX2 together with a RealSense D435i. I want to use the RealSense SDK to process the RGB frames on the GPU while processing the corresponding depth frames on the CPU.
Is there a suggested way of doing this? I thought about reading frames with the pipeline object and then obtaining a pointer with get_data(), but I would like to avoid copying frames around if possible.
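To illustrate, this is roughly what I had in mind, as a minimal sketch using the librealsense2 C++ API (the stream settings are just examples). Note that it still performs the host-to-device copy of the RGB frame that I am trying to avoid:

    #include <librealsense2/rs.hpp>
    #include <cuda_runtime.h>
    #include <cstdint>

    int main()
    {
        rs2::pipeline pipe;
        rs2::config cfg;
        cfg.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_RGB8, 30);
        cfg.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 30);
        pipe.start(cfg);

        while (true)
        {
            rs2::frameset frames = pipe.wait_for_frames();
            rs2::video_frame color = frames.get_color_frame();
            rs2::depth_frame depth = frames.get_depth_frame();

            // get_data() returns a pointer into host memory, so the RGB
            // frame still has to be copied to the GPU before inference.
            size_t rgb_size = color.get_width() * color.get_height()
                            * color.get_bytes_per_pixel();
            void* rgb_device = nullptr;
            cudaMalloc(&rgb_device, rgb_size);
            cudaMemcpy(rgb_device, color.get_data(), rgb_size,
                       cudaMemcpyHostToDevice);

            // ... run the neural network on rgb_device ...

            // The depth frame stays in host memory and is processed on the CPU.
            const uint16_t* depth_data = (const uint16_t*)depth.get_data();
            // ... CPU-side depth processing ...

            cudaFree(rgb_device);
        }
    }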
The jetson-inference package uses a GStreamer pipeline to read frames from the UVC interface, which works for GPU processing in my setup, but then I don't know how I could retrieve the corresponding depth frames.
Did anyone experiment with a similar setup?
Regards
-
Hi Privatefebo98, there is a RealSense plugin for GStreamer at the link below that provides both depth and color frames.
https://github.com/WKDSMRT/realsense-gstreamer
If CUDA support in the RealSense SDK is enabled, the GPU can be used to perform the YUY2-to-RGB color conversion instead of doing it on the CPU. CUDA support also accelerates pointcloud generation and depth-to-color alignment by offloading that work from the CPU to the GPU.
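As a rough sketch (assuming a CUDA-enabled build of the SDK), application code like the following benefits from the acceleration automatically, because the align and pointcloud processing blocks use CUDA internally when it is available:

    #include <librealsense2/rs.hpp>

    int main()
    {
        rs2::pipeline pipe;
        pipe.start();

        // In a CUDA-enabled build of librealsense these processing blocks
        // offload their work to the GPU; the application code is unchanged.
        rs2::align align_to_color(RS2_STREAM_COLOR);
        rs2::pointcloud pc;

        while (true)
        {
            rs2::frameset frames = pipe.wait_for_frames();

            // Depth-to-color alignment (GPU-accelerated when available)
            rs2::frameset aligned = align_to_color.process(frames);
            rs2::depth_frame depth = aligned.get_depth_frame();

            // Pointcloud generation (GPU-accelerated when available)
            rs2::points points = pc.calculate(depth);
            // ... use the aligned frames or the pointcloud ...
        }
    }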
CUDA support is enabled automatically if the SDK is installed from the packages on Jetson, or if the CMake build flag -DBUILD_WITH_CUDA=true is included in the CMake build instruction when compiling the SDK from source code.
https://github.com/IntelRealSense/librealsense/blob/master/doc/installation_jetson.md