Can the depth images of the D435 be aligned with the RGBA images created by DeepStream?
Dear all, I am using a RealSense D435 camera as the input source for DeepStream. The RGBA images are stored in NvBufSurface in DeepStream, but I cannot obtain the depth images through DeepStream code.
So I chose to use the RealSense SDK, like this:
#include <librealsense2/rs.hpp> // RealSense SDK

rs2::pipeline p;
p.start();
while (true) {
    rs2::frameset frames = p.wait_for_frames();
    rs2::frame depth = frames.get_depth_frame();
    const uint16_t* depth_data = (const uint16_t*)depth.get_data();
}
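(For reference, the raw values from get_data() are in device depth units rather than metres. Continuing from the snippet above, a rough sketch of converting them, assuming the SDK's depth-scale lookup, which is typically 0.001 m per unit on the D400 series:)

rs2::pipeline p;
rs2::pipeline_profile profile = p.start();
// Scale factor that converts the raw 16-bit units to metres
float depth_scale = profile.get_device().first<rs2::depth_sensor>().get_depth_scale();

rs2::frameset frames = p.wait_for_frames();
rs2::depth_frame depth = frames.get_depth_frame();
// get_distance() applies the scale for a single pixel (centre pixel shown)
float dist_m = depth.get_distance(depth.get_width() / 2, depth.get_height() / 2);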
However, the depth images are created by the SDK while the RGBA images are created by DeepStream. My question is: is there any way to align the color and depth frames?
-
Hello, at the link below Nvidia provides a guide to setting up a DeepStream application for capturing the depth and color streams of a RealSense D435 camera.
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_3D_Depth_Camera.html
Alternatively, you may be able to stream depth in a way that DeepStream can capture without using RealSense SDK code if you build librealsense from source with CMake, including the flag -DFORCE_LIBUVC=OFF in the CMake build instruction, as advised here:
https://github.com/IntelRealSense/librealsense/issues/6841#issuecomment-660859774
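A minimal build sketch (the -j value is an example; other CMake flags and dependency setup depend on your platform):

git clone https://github.com/IntelRealSense/librealsense.git
cd librealsense
mkdir build && cd build
cmake .. -DFORCE_LIBUVC=OFF
make -j4
sudo make install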
-
Hello, I wonder: if I insist on using the SDK, can I use the DeepStream timestamp to create the corresponding depth frame?
And if I use the SDK directly inside the DeepStream code to obtain depth images, will there be a conflict? As I understand it, the camera is the input source of DeepStream, and DeepStream depends on V4L2 to open the camera and display video.
-
When the librealsense SDK is built in -DFORCE_LIBUVC=OFF mode (also known as a 'native backend' or 'V4L2 backend' build), V4L2 is the default backend of the SDK.
Depth frames are generated by the RealSense camera hardware from raw left and right infrared frames, so it would not be practical to create your own depth frame in DeepStream.
-
As long as you are not running a RealSense program that accesses the streams at the same time as DeepStream, DeepStream should be able to make use of them. Once a program starts accessing a particular stream, that stream cannot be used by a second application until the first application releases its claim on it (e.g. by stopping the stream or closing the application). In other words, access to streams is on a 'first come, first served' basis.
The rules about which streams can be accessed, and how, are governed by the RealSense SDK's Multi-Streaming Model.
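As a sketch, an SDK program can claim only the depth stream and leave the RGB stream free for DeepStream; the resolution and frame rate below are example values:

#include <librealsense2/rs.hpp>

rs2::pipeline p;
rs2::config cfg;
// Enable only depth so that the RGB stream stays free for DeepStream
cfg.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 30);
p.start(cfg);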
By default, RealSense cameras use the /video0 channel for depth, /video1 for infrared and /video2 for RGB. The /video numbers may change though depending on what other devices are attached to the computer. It is possible to fix the /video addresses using symbolic links with the udev device handling rules.
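For example, a hypothetical udev rule could pin the RGB node to a stable name; the idProduct and index values below are assumptions that you would adapt to your own device:

# /etc/udev/rules.d/99-realsense-video.rules (hypothetical)
SUBSYSTEM=="video4linux", ATTRS{idVendor}=="8086", ATTRS{idProduct}=="0b07", ATTR{index}=="2", SYMLINK+="video-realsense-rgb"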
-
Thanks for the reply! I want to describe my situation clearly.
To begin with, I use /video12, which is the RGB stream on my device, as the input source of DeepStream.
At the same time, I use the RealSense SDK like this to access depth images in the same DeepStream code:

rs2::frameset frames = p.wait_for_frames();
rs2::frame depth = frames.get_depth_frame();

So these code snippets also count as a running program, right? So is it impractical to do this while I am running a DeepStream program using the RGB stream?
Sorry, I have checked your comment again. So the RGB stream and the depth stream are different streams, and it is OK for me to access the depth stream at the same time as I am running DeepStream on the RGB stream? Does that mean that if I use the code I mentioned at first to obtain depth images, there will be no conflict for the camera itself?
-
Thank you. There is still the problem, though: how can I align them? The RGB stream is handled in DeepStream, but the depth images are created by the RealSense SDK.
Can I just use the timestamp to align them? If so, is it practical for the SDK to produce a depth frame corresponding to a given timestamp?
-
It is possible to synchronize a RealSense depth stream with an RGB stream from an external source using the RealSense SDK's TIME_OF_ARRIVAL metadata timestamp.
https://github.com/IntelRealSense/librealsense/issues/2186#issuecomment-412785130
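As a rough sketch, reading the time-of-arrival metadata from a depth frame looks like this; the pairing against DeepStream's buffer timestamps is left to the application and is an assumption here:

#include <librealsense2/rs.hpp>

rs2::pipeline p;
rs2::config cfg;
cfg.enable_stream(RS2_STREAM_DEPTH);
p.start(cfg);

rs2::frameset frames = p.wait_for_frames();
rs2::depth_frame depth = frames.get_depth_frame();

if (depth.supports_frame_metadata(RS2_FRAME_METADATA_TIME_OF_ARRIVAL))
{
    // Milliseconds since epoch at which the frame reached the host
    long long arrival_ms = depth.get_frame_metadata(RS2_FRAME_METADATA_TIME_OF_ARRIVAL);
    // Compare arrival_ms with the GstBuffer PTS on the DeepStream side
    // and pair depth and RGB frames by nearest timestamp
}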
-
A RealSense user of DeepStream at the link below was facing a similar situation where they started RGB in DeepStream and depth in the RealSense SDK and wanted to know how to activate them together. A solution was not found in that particular case though.
https://forums.developer.nvidia.com/t/deepstream-yolo-with-realsense/
I thought carefully about your situation. I could not think of a solution that would be guaranteed to work each time though, only possibilities that would be vulnerable to failure.
-
A simpler approach is to use pre-made build scripts provided by the JetsonHacks website. You can install using the same Jetson packages as the ones on the installation_jetson.md page by running their installLibrealsense.sh script or install from source code with the buildLibrealsense.sh script.
https://github.com/JetsonHacksNano/installLibrealsense
-
Sorry to bother you again; I have encountered a problem with the camera when using DeepStream.
As you know, the camera's RGB stream is the input of DeepStream for object detection.
However, I find that the camera's images are in auto-exposure mode. Is there any way to change this, since there seems to be no code in DeepStream to change the camera's parameters?
-
DeepStream apparently has an aelock auto-exposure locking mechanism that needs to be disabled.
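If that does not help, below is a sketch of disabling auto-exposure through the RealSense SDK before DeepStream opens the stream; whether the setting persists after the SDK releases the device is an assumption to verify, and the exposure value is only an example:

#include <librealsense2/rs.hpp>

rs2::context ctx;
for (rs2::device dev : ctx.query_devices())
{
    for (rs2::sensor sensor : dev.query_sensors())
    {
        // Turn off auto-exposure on any sensor that supports it
        if (sensor.supports(RS2_OPTION_ENABLE_AUTO_EXPOSURE))
            sensor.set_option(RS2_OPTION_ENABLE_AUTO_EXPOSURE, 0.f);
        // Set a fixed manual exposure (example value, sensor-specific units)
        if (sensor.supports(RS2_OPTION_EXPOSURE))
            sensor.set_option(RS2_OPTION_EXPOSURE, 156.f);
    }
}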