I am currently working with an Nvidia Jetson TX2 together with a RealSense D435i. I want to use the RealSense SDK to process RGB frames on GPU, while processing the corresponding depth frames on CPU.
Is there a suggested way of doing this? I thought about reading frames with the pipeline object and then obtaining a pointer via get_data(), but I would like to avoid copying frames around if possible.
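Concretely, what I had in mind looks roughly like this (an untested sketch assuming librealsense2 and the CUDA runtime are available; the stream resolutions, formats, and frame count are just placeholders):

```cpp
#include <librealsense2/rs.hpp>
#include <cuda_runtime.h>
#include <cstdint>

int main() {
    // Configure both streams so color and depth frames arrive together.
    rs2::pipeline pipe;
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_RGB8, 30);
    cfg.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 30);
    pipe.start(cfg);

    for (int i = 0; i < 100; ++i) {
        rs2::frameset frames = pipe.wait_for_frames();
        rs2::video_frame color = frames.get_color_frame();
        rs2::depth_frame depth = frames.get_depth_frame();

        // RGB frame: copy the host buffer to the GPU for processing.
        // Even though the TX2 has physically unified memory, get_data()
        // returns a pageable host pointer, so one copy is still needed
        // unless zero-copy / pinned memory is arranged separately.
        size_t color_bytes = static_cast<size_t>(color.get_width()) *
                             color.get_height() * color.get_bytes_per_pixel();
        void* d_color = nullptr;
        cudaMalloc(&d_color, color_bytes);
        cudaMemcpy(d_color, color.get_data(), color_bytes,
                   cudaMemcpyHostToDevice);
        // ... launch CUDA kernels on d_color here ...
        cudaFree(d_color);

        // Depth frame: process directly on the CPU, no copy required.
        const uint16_t* depth_data =
            static_cast<const uint16_t*>(depth.get_data());
        // ... CPU-side depth processing on depth_data ...
        (void)depth_data;
    }
    return 0;
}
```

This is exactly where I would like to avoid the cudaMemcpy for the RGB frame, ideally by having the SDK (or GStreamer) deliver the color buffer somewhere GPU-accessible directly.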
The jetson-inference package uses a GStreamer pipeline to read frames from the UVC interface, which works for GPU processing in my setup, but then I don't know how to retrieve the corresponding depth frames.
Did anyone experiment with a similar setup?