Offset between RGB and Depth images (D435i)
I am segmenting objects on a conveyor belt that moves at 1 m/s. I am getting frames from an Intel RealSense D435i camera at 30 fps, and I aligned the color and depth frames as in the examples (a minimal sketch of my loop is at the end of this post). Unfortunately, I see a small offset between the positions of the RGB and depth masks:
In the next picture I show, from left to right: RGB mask, depth mask, and merged RGB+depth masks.
The conveyor in this image is moving from left to right. It seems that the depth is adding a little trail, and the images don't match in the merged masks. I am not sure if the depth mask is a little bit bigger, but the RGB mask is perfectly segmented.
Do you know what could be the root cause of this, so I can continue investigating?
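For reference, a minimal sketch of my capture-and-align loop, following the SDK alignment examples (the 640x480 resolution is illustrative; I capture at 30 fps, and the segmentation step is omitted):

```python
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
profile = pipeline.start(config)

# Map each depth frame into the color camera's viewpoint
align = rs.align(rs.stream.color)

while True:
    frames = pipeline.wait_for_frames()
    aligned = align.process(frames)
    depth_frame = aligned.get_depth_frame()
    color_frame = aligned.get_color_frame()
    if not depth_frame or not color_frame:
        continue
    depth_image = np.asanyarray(depth_frame.get_data())
    color_image = np.asanyarray(color_frame.get_data())
    # ... segmentation and mask comparison happen here ...
```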
Hi Fpelegri, if the boxes on the conveyor are being viewed by the camera from an overhead position, then I would recommend first checking whether shadows around the boxes are creating an incorrectly sized depth outline. Examples of this overhead phenomenon can be found at the links below.
Hi MartyG, thanks for that info, it was valuable. However, the issue I found is with moving objects: there is a little offset between the depth and RGB frames. I show here a real package on a conveyor belt; in the first frame the package is stopped, and in the second frame the package starts moving, so you can see a displacement between the depth mask and the color image.
Is there any camera parameter or configuration to improve the sync between the depth and color frames?
Sync between depth and RGB on the D435 / D435i camera models can be a little more complex than on the D415 because the RGB sensor is not on the same PCB as the depth sensors; instead, it is mounted separately and attached via a cable.
It may help to force depth and RGB to the same FPS instead of allowing one stream to vary its FPS. This can be done by enabling auto-exposure and disabling an RGB option called Auto-Exposure Priority. If auto-exposure is enabled and Auto-Exposure Priority is disabled, the RealSense SDK should attempt to enforce a constant FPS for both streams.
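A minimal pyrealsense2 sketch of those two settings, assuming a pipeline has already been configured and started (the option names are the ones exposed by the SDK):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()  # substitute your own rs.config() here

color_sensor = profile.get_device().first_color_sensor()
# Enable RGB auto-exposure...
color_sensor.set_option(rs.option.enable_auto_exposure, 1)
# ...but disable Auto-Exposure Priority so the stream cannot
# drop its FPS to gain exposure time
color_sensor.set_option(rs.option.auto_exposure_priority, 0)
```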
Also, if your script uses the wait_for_frames() instruction then the SDK should attempt to find the best timestamp match between depth and RGB frames.
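If you want to check how well the frames are being matched, you can compare the per-frame timestamps; a small sketch, assuming the pipeline from above:

```python
# Compare depth and color timestamps for one matched frameset
frames = pipeline.wait_for_frames()
depth_ts = frames.get_depth_frame().get_timestamp()  # milliseconds
color_ts = frames.get_color_frame().get_timestamp()  # milliseconds
print("depth/color timestamp gap: {:.2f} ms".format(abs(depth_ts - color_ts)))
```

At a belt speed of 1 m/s, every millisecond of timestamp mismatch corresponds to 1 mm of displacement on the belt, so even a small gap becomes visible in the merged masks.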
Thanks for the feedback, I needed to know more details. I actually do what is in the pyrealsense2 SDK align-depth2color.py example, which uses wait_for_frames(). The issue now is that I have a low exposure time set to reduce blur from the movement, so I guess enabling auto-exposure won't be an option here. I also set a visual_preset (Medium Density), as recommended, to get an accurate shape without noise, and I have hole_filling post-processing. My depth configuration looks roughly like the sketch below.
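A minimal sketch of that depth configuration, assuming the same pipeline setup as in my first post (the exposure value is illustrative, not my exact setting):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()  # substitute your own rs.config() here

depth_sensor = profile.get_device().first_depth_sensor()
# Apply the preset first, since presets overwrite individual option values
depth_sensor.set_option(rs.option.visual_preset,
                        int(rs.rs400_visual_preset.medium_density))
# Fixed low exposure to reduce motion blur (microseconds; value illustrative)
depth_sensor.set_option(rs.option.enable_auto_exposure, 0)
depth_sensor.set_option(rs.option.exposure, 3000)

# Hole-filling post-processing applied to each depth frame
hole_filling = rs.hole_filling_filter()
frames = pipeline.wait_for_frames()
filled_depth = hole_filling.process(frames.get_depth_frame())
```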
If you have another suggestion to improve my solution, it will be much appreciated; if not, I will go to plan B and do some post-processing.