
Can the depth images of the D435 be aligned with the RGBA images created by DeepStream?

Comments

22 comments

  • MartyG

    Hello, at the link below NVIDIA provides a guide to setting up a DeepStream application for capturing the depth and color of a RealSense D435 camera.

    https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_3D_Depth_Camera.html

    Alternatively, you may be able to stream depth in a way that DeepStream can capture without using RealSense SDK code if you build librealsense from source with CMake, with the flag -DFORCE_LIBUVC=OFF included in the CMake build instruction, as advised here:

    https://github.com/IntelRealSense/librealsense/issues/6841#issuecomment-660859774
  • 1229304629

    Hello, I wonder: if I insist on using the SDK, can I use DeepStream's timestamp to create the corresponding depth frame?

    And if I use the SDK directly inside the DeepStream code to obtain depth images, will there be a conflict? As I understand it, the camera is the input source of DeepStream, and DeepStream depends on V4L2 to open the camera and display video.
  • MartyG

    When the librealsense SDK is built in -DFORCE_LIBUVC=OFF mode (also known as a 'native backend' or 'V4L2 backend' build), V4L2 is the default backend of the SDK.

    Depth frames are generated by the RealSense camera hardware from raw left and right infrared frames, so it would not be practical to create your own depth frame in DeepStream.
  • 1229304629

    So you mean that when I use the SDK to create depth images inside DeepStream, there must be a conflict over which source V4L2 opens, leading to a crash?
  • MartyG

    As long as you are not running a RealSense program that accesses the streams at the same time as DeepStream, DeepStream should be able to make use of them. Once a program starts accessing a particular stream, that stream cannot be used by a second application until the first application releases its claim on it (e.g. by stopping the stream or closing the application). In other words, access to streams is on a 'first come, first served' basis.

    The rules about which streams can be accessed, and how, are governed by the RealSense SDK's Multi-Streaming Model.

    https://github.com/IntelRealSense/librealsense/blob/master/doc/rs400_support.md#multi-streaming-model

    By default, RealSense cameras use the /video0 channel for depth, /video1 for infrared and /video2 for RGB. The /video numbers may change, though, depending on what other devices are attached to the computer. It is possible to fix the /video addresses using symbolic links with udev device-handling rules.

    https://github.com/IntelRealSense/librealsense/issues/11553
  • 1229304629

    Thanks for the reply! Let me describe my situation clearly.

    To begin with, I use /video12, which is the RGB stream on my device, as the input source of DeepStream.

    At the same time, I use the RealSense SDK inside the same DeepStream code to access depth images, like:

    rs2::frameset frames = p.wait_for_frames();  // p is an rs2::pipeline that has been started
    rs2::frame depth = frames.get_depth_frame();

    These code snippets are also a running program, right? So is it impractical to do this while I am running a DeepStream program that uses the RGB stream?

    Sorry, I have checked your comment again. The RGB stream and the depth stream are different streams, so is it OK for me to access the depth stream while I am running DeepStream on the RGB stream? Does that mean that if I use the code above to obtain depth images, there will be no conflict for the camera itself?
  • MartyG

    Access claims are specific to a stream type. So if DeepStream accesses only RGB and a RealSense script accesses only the depth stream, there should not be an access conflict.
  • 1229304629

    Thank you. There is still the problem of how to align them: the RGB stream is handled by DeepStream, but the depth images are created by the RealSense SDK.

    Can I just use the timestamp to align them? If so, is it practical for the SDK to retrieve the depth frame that corresponds to a given timestamp?
  • MartyG

    It is possible to synchronize a RealSense depth stream with an RGB stream from an external source using the RealSense SDK's TIME_OF_ARRIVAL metadata timestamp.

    https://github.com/IntelRealSense/librealsense/issues/2186#issuecomment-412785130
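To make the comparison concrete, here is a minimal sketch (my own, not from the linked issue) of pairing one RGB timestamp from DeepStream with the nearest buffered depth timestamp. The function name, the millisecond units and the tolerance parameter are all assumptions for illustration:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical helper: given depth-frame arrival times in milliseconds
// (e.g. read from the SDK's TIME_OF_ARRIVAL metadata) and one RGB
// timestamp from the external pipeline, return the index of the closest
// depth frame, or -1 if no frame falls within max_gap_ms.
int closest_depth_index(const std::vector<double>& depth_times_ms,
                        double rgb_time_ms, double max_gap_ms)
{
    int best = -1;
    double best_gap = max_gap_ms;
    for (std::size_t i = 0; i < depth_times_ms.size(); ++i) {
        double gap = std::fabs(depth_times_ms[i] - rgb_time_ms);
        if (gap <= best_gap) {
            best = static_cast<int>(i);
            best_gap = gap;
        }
    }
    return best;
}
```

In practice the depth timestamps would come from rs2::frame::get_frame_metadata() with RS2_FRAME_METADATA_TIME_OF_ARRIVAL, and both sides need to be expressed against the same clock for the comparison to be meaningful.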

  • 1229304629

    Thanks. Do you mean I can use the TIME_OF_ARRIVAL metadata to get the timestamp of the depth frame, and then compare it with the timestamp of DeepStream's RGB frame?
  • MartyG

    Yes, I believe that is what the advice is referring to.

  • 1229304629

    Thanks. So the RGB stream and depth stream had better be opened at the same time, right? Otherwise there will be a large latency between the two streams' timestamps.
  • MartyG

    Yes, ideally the two streams should be started at the same time if possible, so that there is not a significant gap between their frame numbers.

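As a back-of-envelope illustration (my own numbers, not from the RealSense documentation), a start-time offset between two streams running at the same frame rate translates into a frame-number gap roughly like this:

```cpp
#include <cmath>

// Hypothetical helper: estimate how many frames apart two streams are
// if one started start_offset_ms later than the other, both running at fps.
int frame_gap(double start_offset_ms, double fps)
{
    return static_cast<int>(std::lround(start_offset_ms * fps / 1000.0));
}
```

For example, a 500 ms start offset at 30 FPS already corresponds to about 15 frames, which is why starting the streams as close together as possible keeps the later timestamp matching simple.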
  • 1229304629

    Thanks, but how can I guarantee this, given that they are opened differently: one by DeepStream, the other by the SDK?
  • MartyG

    A RealSense user of DeepStream at the link below faced a similar situation: they started RGB in DeepStream and depth in the RealSense SDK and wanted to know how to activate them together. A solution was not found in that particular case, though.

    https://forums.developer.nvidia.com/t/deepstream-yolo-with-realsense/

    I have thought carefully about your situation, but I could not think of a solution that would be guaranteed to work every time, only possibilities that would be vulnerable to failure.
  • 1229304629

    Thanks anyway. If you have any ideas in the future, please feel free to tell me.
  • 1229304629

    Hi, sorry to bother you after so long. I am trying to install the SDK on my Jetson platform, which runs Ubuntu 20.04 with an NVIDIA Tegra Orin inside. Which instructions should I refer to: distribution_linux.md or installation_jetson.md?
  • MartyG

    Hi, it's no trouble at all. Of those two instruction pages, installation_jetson.md should be used with a Jetson. If you install from packages, that page contains details of how to install the special version of the packages designed for the Jetson.
  • 1229304629

    Thanks. What if I follow distribution_linux.md to install the SDK; is that OK? installation_jetson.md seems a little complicated compared with the former. Or is it a must for me to follow the latter?
  • MartyG

    A simpler approach is to use the pre-made build scripts provided by the JetsonHacks website. You can install the same Jetson packages as the ones on the installation_jetson.md page by running their installLibrealsense.sh script, or install from source code with the buildLibrealsense.sh script.

    https://github.com/JetsonHacksNano/installLibrealsense
  • 1229304629

    Sorry to bother you again. I have encountered a problem with the camera when using DeepStream.

    As you know, the camera's RGB source is the input stream of DeepStream for object detection.

    However, I find that the camera's images are in auto-exposure mode. Is there any way to change this, since there seems to be no code in DeepStream to change the camera's parameters?
  • MartyG

    DeepStream apparently has an aelock auto-exposure locking mechanism that needs to be disabled.

    https://forums.developer.nvidia.com/t/changing-csi-camera-properties-while-running-deepstream-pipline/174904/3

