
Offset between RGB and Depth images (d435i)

Comments

5 comments

  • MartyG

    Hi Fpelegri. If the boxes on the conveyor are being viewed by the camera from overhead, then I would recommend first checking whether shadows around the boxes are creating an incorrectly sized depth outline. Examples of this overhead phenomenon can be found at the links below.

    https://github.com/IntelRealSense/librealsense/issues/7021

    https://github.com/IntelRealSense/librealsense/issues/9094#issuecomment-847985952

  • Fpelegri

    Hi MartyG, thanks for that info, it was valuable. But the issue I found involves moving objects: there is a small offset between the depth and RGB frames. Here is a real package on a conveyor belt: in the first frame the package is stopped, and in the second frame it starts moving, so you can see a displacement between the depth mask and the color image.

    #1

    #2


    Is there any camera parameter or configuration to improve synchronization between the depth and color frames?


  • MartyG

    Sync between depth and RGB on the D435 / D435i camera models can be a little more complex than on the D415, because the RGB sensor is not on the same PCB as the depth sensor; it is mounted separately and attached via a cable.


    It may help if you force depth and RGB to have the same FPS instead of one stream being allowed to vary its FPS.  This can be done by having auto-exposure enabled and disabling an RGB option called Auto-Exposure Priority.  If auto-exposure is enabled and auto-exposure priority is disabled then the RealSense SDK should attempt to enforce a constant FPS for both streams.
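    As a minimal sketch of the above (assuming pyrealsense2 is installed and a D435i is connected; the stream resolutions are just example values), forcing both streams to the same FPS and disabling Auto-Exposure Priority on the RGB sensor might look like this:

    ```python
    import pyrealsense2 as rs

    pipeline = rs.pipeline()
    config = rs.config()
    # Request the same fixed FPS (30) for both depth and color.
    config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
    config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
    profile = pipeline.start(config)

    color_sensor = profile.get_device().first_color_sensor()
    # Keep auto-exposure enabled, but stop the RGB sensor from dropping
    # its frame rate to reach the exposure target.
    color_sensor.set_option(rs.option.enable_auto_exposure, 1)
    color_sensor.set_option(rs.option.auto_exposure_priority, 0)
    ```
    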


    Also, if your script uses the wait_for_frames() call, then the SDK should attempt to find the best timestamp match between depth and RGB frames.
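    A sketch of that pattern (assuming a running camera; this mirrors the structure of the SDK's align-depth2color.py example, with an added timestamp comparison to quantify the residual skew):

    ```python
    import pyrealsense2 as rs

    pipeline = rs.pipeline()
    pipeline.start()

    # Map depth pixels into the color viewport.
    align = rs.align(rs.stream.color)

    try:
        for _ in range(30):
            # wait_for_frames() delivers a timestamp-matched frameset.
            frames = pipeline.wait_for_frames()
            aligned = align.process(frames)
            depth = aligned.get_depth_frame()
            color = aligned.get_color_frame()
            if not depth or not color:
                continue
            # Residual depth/color timestamp skew in milliseconds.
            skew_ms = abs(depth.get_timestamp() - color.get_timestamp())
            print(f"depth/color timestamp skew: {skew_ms:.2f} ms")
    finally:
        pipeline.stop()
    ```
    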

  • Fpelegri

    Thanks for the feedback, that is the detail I needed. I am actually doing what the pyrealsense2 SDK align-depth2color.py example does, which uses wait_for_frames(). The issue now is that I have a low exposure time set to reduce motion blur, so I guess enabling auto-exposure is not an option here. I also apply the Medium Density visual_preset, as recommended, to get an accurate shape without noise, and I use hole_filling post-processing.
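    For reference, the preset and post-processing setup described above could be sketched like this with pyrealsense2 (assuming a connected D400-series camera):

    ```python
    import pyrealsense2 as rs

    pipeline = rs.pipeline()
    profile = pipeline.start()

    depth_sensor = profile.get_device().first_depth_sensor()
    # Medium Density preset: a balance between fill rate and accuracy.
    depth_sensor.set_option(rs.option.visual_preset,
                            int(rs.rs400_visual_preset.medium_density))

    # Hole-filling post-processing on the depth stream.
    hole_filling = rs.hole_filling_filter()
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    filtered_depth = hole_filling.process(depth)
    ```
    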

    If you have any other suggestion to improve this it will be much appreciated; if not, I will go to plan B and do some post-processing.

    Thanks!

  • MartyG

    If you are using manual RGB exposure, then setting RGB exposure to 78 and FPS to 6 (yes, 6, not 60) should reduce RGB blurring from fast motion on a D435i camera.
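    A minimal sketch of those settings (assuming pyrealsense2 and a connected D435i; the 640x480 resolution is just an example):

    ```python
    import pyrealsense2 as rs

    pipeline = rs.pipeline()
    config = rs.config()
    # 6 FPS gives the RGB sensor a long frame interval at low exposure.
    config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 6)
    profile = pipeline.start(config)

    color_sensor = profile.get_device().first_color_sensor()
    color_sensor.set_option(rs.option.enable_auto_exposure, 0)  # manual mode
    color_sensor.set_option(rs.option.exposure, 78)
    ```
    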

