
Outdoor 3D Reconstruction with D435

Comments

16 comments

  • MartyG

Hi Jomanaashraf8.  The RealSense Viewer tool applies a range of depth colorization settings and image-enhancing post-processing filters by default.  In a script that you create yourself, these colorization settings and filters are not applied by default and must be deliberately programmed into the script.

     

You can see which filters the Viewer is applying by expanding open the Post Processing section of the Stereo Module category and looking at the filters that have a blue icon (meaning On) beside them.

     

    The depth colorization settings can be found in the Depth Visualization section of Stereo Module.

     

Intel provides a RealSense post-processing filter tutorial for Python at the link below.

    https://github.com/IntelRealSense/librealsense/blob/jupyter/notebooks/depth_filters.ipynb

     

    Also, the links below are useful Python post-processing references:

    https://github.com/IntelRealSense/librealsense/issues/1672#issuecomment-387438447

    https://github.com/IntelRealSense/librealsense/issues/1672

     

    In regard to the depth colorization settings, the links below provide Python scripting resources for configuring the color scheme and histogram on / off status.

    https://github.com/IntelRealSense/librealsense/issues/7767#issuecomment-726704671

    https://github.com/IntelRealSense/librealsense/issues/7089#issuecomment-672105974
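As a hedged illustration of those colorizer settings in Python (a sketch assuming the standard pyrealsense2 options and a depth_frame already fetched from the pipeline; scheme value 0 is typically the Jet palette):

import pyrealsense2 as rs

colorizer = rs.colorizer()
# Select the depth color scheme (0 is usually Jet: blue = near, red = far)
colorizer.set_option(rs.option.color_scheme, 0)
# Toggle histogram equalization of depth values before colorizing
colorizer.set_option(rs.option.histogram_equalization_enabled, 1)
colorized_depth = colorizer.colorize(depth_frame)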

  • Jomanaashraf8

Thanks a lot, MartyG, for your response. I need to ask: I want my point cloud visualization to be colored from the color frame stream, as I did in my Python code.
Should I apply the filters that you mentioned and then follow the same procedure as in my code?

  • MartyG

Depth colorization does not use the RGB color stream.  Instead, it color-shades the depth points according to their distance from the camera.  The typical default color scheme is that depth points nearest to the camera are shaded blue and those that are furthest from the camera are shaded red, with the color palette shifting progressively from blue towards the red range as distance increases.

     

    I note that your script is using align_to and then later using map_to when generating the pointcloud.  The map_to instruction maps RGB onto the depth points and so it is not necessary to also align depth to color with align_to.  Using both alignment methods in the same script can actually make the alignment more inaccurate.  So I would recommend removing most of the alignment code and only keeping pc.map_to.

     

I also recommend placing and applying the post-processing filters in the script before pc.calculate and pc.map_to.
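For example, a minimal sketch of that ordering, assuming the decimation, spatial and temporal filters and the depth and color frames have already been created as in the tutorial linked earlier:

# Filter the depth frame first...
filtered = decimation.process(depth_frame)
filtered = spatial.process(filtered)
filtered = temporal.process(filtered)

# ...then texture and calculate the point cloud from the filtered result
pc = rs.pointcloud()
pc.map_to(color_frame)           # maps RGB onto depth, so align_to is not needed
points = pc.calculate(filtered)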

     

    The line below may not do much as the D435's default depth unit scale is 0.001 and so this line will not change the scale by much.  So I recommend commenting it out to see what difference it makes.

    depthsensor.set_option(rs.option.depth_units,0.000755)

  • Jomanaashraf8

    Hello,

I tried what you recommended: I applied the filters and then pc.map_to and pc.calculate, but it is giving me an error.

My code is here:

import numpy as np                # fundamental package for scientific computing
import matplotlib.pyplot as plt   # 2D plotting library producing publication quality figures
import pyrealsense2 as rs         # Intel RealSense cross-platform open-source API
print("Environment Ready")

pipe = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth)
config.enable_stream(rs.stream.color)
# align_to = rs.stream.color
# align = rs.align(align_to)
profile = pipe.start(config)
device = profile.get_device()
# depthsensor = device.first_depth_sensor()
# if depthsensor.supports(rs.option.depth_units):
#     # This value is in micrometers in the RealSense Viewer and in meters here
#     depthsensor.set_option(rs.option.depth_units, 0.000755)

# Skip 5 first frames to give the Auto-Exposure time to adjust
for x in range(5):
    pipe.wait_for_frames()

frames = pipe.wait_for_frames()
depth_frame = frames.get_depth_frame()
color_frame = frames.get_color_frame()

colorizer = rs.colorizer()
colorized_depth = np.asanyarray(colorizer.colorize(depth_frame).get_data())
plt.imshow(colorized_depth)
plt.show()

# Filters applied to depth frames
decimation = rs.decimation_filter()
decimation.set_option(rs.option.filter_magnitude, 4)
filtered_depth = decimation.process(depth_frame)
colorized_depth = np.asanyarray(colorizer.colorize(filtered_depth).get_data())
print("Decimation filter")
plt.imshow(colorized_depth)
plt.show()

depth_to_disparity = rs.disparity_transform(True)
filtered_depth = depth_to_disparity.process(filtered_depth)

spatial = rs.spatial_filter()
spatial.set_option(rs.option.filter_magnitude, 2)
spatial.set_option(rs.option.filter_smooth_alpha, 0.5)
spatial.set_option(rs.option.filter_smooth_delta, 20)
filtered_depth = spatial.process(filtered_depth)
colorized_depth = np.asanyarray(colorizer.colorize(filtered_depth).get_data())
print("spatial filter")
plt.imshow(colorized_depth)
plt.show()

temporal = rs.temporal_filter()
temporal.set_option(rs.option.filter_smooth_alpha, 0.4)
temporal.set_option(rs.option.filter_smooth_delta, 20)
filtered_depth = temporal.process(filtered_depth)
colorized_depth = np.asanyarray(colorizer.colorize(filtered_depth).get_data())
print("temporal filter")
plt.imshow(colorized_depth)
plt.show()

disparity_to_depth = rs.disparity_transform(True)
filtered_depth = depth_to_disparity.process(filtered_depth)

hole_filling = rs.hole_filling_filter()
filtered_depth = hole_filling.process(filtered_depth)
colorized_depth = np.asanyarray(colorizer.colorize(filtered_depth).get_data())
plt.imshow(colorized_depth)
plt.show()

pc = rs.pointcloud()
points = rs.points()
points = pc.calculate(filtered_depth)
pc.map_to(color_frame)
print("saving...")
points.export_to_ply("../pcd.ply", color_frame)

It works when I remove the disparity-to-depth and depth-to-disparity transforms, but from what I have read the post-processing filters are meant to be applied in this sequence, so that's why I used them.

Here is the error:

points = pc.calculate(filtered_depth)
RuntimeError: Error occured during execution of the processing block! See the log for more info

Also, when I save the .ply it gives me a green color instead of the colors from the RGB stream, even though I am using pc.map_to(color_frame).
     
  • MartyG

    In these two lines:

     

pipe.wait_for_frames()
frames = pipe.wait_for_frames()

     

    I recommend commenting out the line pipe.wait_for_frames() as only frames = pipe.wait_for_frames() should be needed.

     

When using depth_to_disparity and disparity_to_depth, both cannot be set to the same value: one must be true and the other false.  For example:

     

depth_to_disparity = rs.disparity_transform(True)
disparity_to_depth = rs.disparity_transform(False)

     

    I also recommend moving all of your initial filter definition lines above the pipe.wait_for_frames() line so that they are not within a loop.

decimation = rs.decimation_filter()
spatial = rs.spatial_filter()
temporal = rs.temporal_filter()
hole_filling = rs.hole_filling_filter()

frames = pipe.wait_for_frames()
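After that wait_for_frames() call, a sketch of the per-frame application order using the settings from your script (depth_to_disparity created with True and disparity_to_depth with False, as above) would be:

depth_frame = frames.get_depth_frame()
filtered = decimation.process(depth_frame)
filtered = depth_to_disparity.process(filtered)    # the True transform
filtered = spatial.process(filtered)
filtered = temporal.process(filtered)
filtered = disparity_to_depth.process(filtered)    # the False transform
filtered = hole_filling.process(filtered)
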
  • Jomanaashraf8

Thanks for your response, it is working now, but the problem is that the saved .ply does not have the colors from the RGB image; it gives me a .ply file without any colors. What should I do?

  • MartyG

Pyrealsense2 historically has problems with exporting color to .ply.  The method in the script at the link below is the only one that has been confirmed to work.

    https://github.com/IntelRealSense/librealsense/issues/6194#issuecomment-608371293
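In outline, the method in that script uses the SDK's rs.save_to_ply processing block on a colorized frameset instead of points.export_to_ply. A sketch under those assumptions (the filename is illustrative):

colorizer = rs.colorizer()
frames = pipe.wait_for_frames()
colorized = colorizer.process(frames)                      # colorize the whole frameset
ply = rs.save_to_ply("out.ply")
ply.set_option(rs.save_to_ply.option_ply_binary, False)    # human-readable ASCII output
ply.set_option(rs.save_to_ply.option_ply_normals, True)    # include per-vertex normals
ply.process(colorized)                                     # writes the textured ply to disk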

  • Jomanaashraf8

    Thanks a lot

  • Jomanaashraf8

I have another question regarding the filters. I applied post-processing filters to the depth frame and stored the result in "filtered_depth". Then I need to convert this filtered depth to Open3D, so I convert it to a numpy array, but it gives me an error.

import numpy as np
import cv2
import pyrealsense2 as rs
import open3d as o3d
import rospy

pipe = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth)
config.enable_stream(rs.stream.color)
align_to = rs.stream.color
align = rs.align(align_to)
profile = pipe.start(config)

# Skip 5 first frames to give the Auto-Exposure time to adjust
for x in range(5):
    pipe.wait_for_frames()

# Define filters for the depth frames
decimation = rs.decimation_filter()
decimation.set_option(rs.option.filter_magnitude, 4)
depth_to_disparity = rs.disparity_transform(True)
spatial = rs.spatial_filter()
spatial.set_option(rs.option.filter_magnitude, 2)
spatial.set_option(rs.option.filter_smooth_alpha, 0.5)
spatial.set_option(rs.option.filter_smooth_delta, 20)
temporal = rs.temporal_filter()
temporal.set_option(rs.option.filter_smooth_alpha, 0.4)
temporal.set_option(rs.option.filter_smooth_delta, 20)
disparity_to_depth = rs.disparity_transform(False)
hole_filling = rs.hole_filling_filter()

vis = o3d.visualization.Visualizer()
vis.create_window('Point cloud')
pointcloud = o3d.geometry.PointCloud()
geom_added = False
counter = 0

while not rospy.is_shutdown():
    frames = pipe.wait_for_frames()
    prof = frames.get_profile()
    depth_frame = frames.get_depth_frame()
    color_frame = frames.get_color_frame()

    # Apply the filters to the depth frames
    filtered_depth = decimation.process(depth_frame)
    filtered_depth = depth_to_disparity.process(filtered_depth)
    filtered_depth = spatial.process(filtered_depth)
    filtered_depth = temporal.process(filtered_depth)
    filtered_depth = disparity_to_depth.process(filtered_depth)
    filtered_depth = hole_filling.process(filtered_depth)

    pc = rs.pointcloud()
    points = rs.points()

    depth_image = np.asanyarray(depth_frame.get_data())
    color_image = np.asanyarray(color_frame.get_data())

    pc.map_to(color_frame)
    points = pc.calculate(filtered_depth)
    colorizer = rs.colorizer()
    colorized_depth = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)

    # Changing the RealSense point cloud to Open3D (my own helper function)
    pointcloud = open3d_visualization(depth_image, color_image, geom_added)
    # Convert from Open3D to a ROS message (my own helper; pcd_pub is a ROS publisher defined elsewhere)
    rospc = open3d_to_rospc(pointcloud)
    pcd_pub.publish(rospc)

    # Saving the point cloud when "s" is pressed
    cv2.imshow("image", colorized_depth)
    key = cv2.waitKey(1)
    if key == ord("s"):
        print("Saving pcd_{}.ply".format(counter))
        points.export_to_ply("/home/jomana/masters_ws/src/Studying/Thesis/robot_version/img/point_cloud/pcd_{}.ply".format(counter), color_frame)
        counter += 1

What I am asking is: if I pass depth_image taken from depth_frame, will it contain the depth after applying the filters, or should I use "filtered_depth.get_data()" instead?

    Here is the error:

    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(img_color, img_depth, convert_rgb_to_intensity=False)
    RuntimeError: [Open3D ERROR] [CreateFromColorAndDepth] Unsupported image format.

  • MartyG

Research into this Open3D error suggests that it can occur if depth and color have not been aligned so that they are the same size.

     

    I recommend placing the instructions below after the pipe.start instruction instead of before it, otherwise the depth-color alignment is likely not being carried out.

align_to = rs.stream.color
align = rs.align(align_to)

So it should look like this:

profile = pipe.start(config)
align_to = rs.stream.color
align = rs.align(align_to)
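For reference, a sketch of how the aligned frames could then be handed to Open3D, reading the intrinsics from the color stream profile (numpy as np and open3d as o3d are assumed to be imported):

frames = pipe.wait_for_frames()
aligned = align.process(frames)            # resample depth to the color viewpoint and size
depth_frame = aligned.get_depth_frame()
color_frame = aligned.get_color_frame()

img_depth = o3d.geometry.Image(np.asanyarray(depth_frame.get_data()))
img_color = o3d.geometry.Image(np.asanyarray(color_frame.get_data()))
rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(img_color, img_depth, convert_rgb_to_intensity=False)

intr = color_frame.profile.as_video_stream_profile().intrinsics
pinhole = o3d.camera.PinholeCameraIntrinsic(intr.width, intr.height, intr.fx, intr.fy, intr.ppx, intr.ppy)
pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, pinhole)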

     

     

  • Jomanaashraf8

But when I use the align method together with pc.map_to() and pc.calculate(), the saved point cloud is not colored, and you told me before that I shouldn't use the two methods together because it will give me inaccurate results.

  • MartyG

Because the alignment lines were not commented out, I thought that your intention was to still use both align_to and pc.calculate.  I apologize for misunderstanding.  If you want to keep lines in a script as a reminder but not use them, please place a # in front of them so that the script ignores them and people reading the script can see that those lines are intended to be ignored.  Thanks very much!

     

    align_to makes automatic adjustments for differences in resolution and field of view size via the RealSense SDK's align processing block.  The pc.calculate method of publishing a point cloud is likely not making these automatic adjustments and so Open3D may be erroring because it sees depth and color as being different sizes.  On a D435 camera, depth will be 848x480 resolution by default and color will be 1280x720 by default.

     

    Does the error still occur if you force depth and color to be the same size in your config instructions?

     

config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.rgb8, 30)
  • Jomanaashraf8

It gives me this error and I don't know why:

profile = pipe.start(config)
RuntimeError: Couldn't resolve requests

  • MartyG

That should be a valid configuration for a D435 on a USB 3 connection, though this error would occur if the camera was being detected as being on a USB 2 connection.  If that is occurring then changing the resolution to 640x480 should enable the configuration to be accepted on USB 2.

     

config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.rgb8, 30)
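If you want to check which connection type is being detected, a short hedged sketch using the SDK's device info query:

device = profile.get_device()
if device.supports(rs.camera_info.usb_type_descriptor):
    # Prints e.g. "3.2" or "2.1" depending on the detected link
    print("USB type:", device.get_info(rs.camera_info.usb_type_descriptor))
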
  • Jomanaashraf8

Thanks a lot, it works now.

I have a question regarding saving the point cloud in a numpy array. I see that get_vertices() will give me the position (x, y, z), but I also need the same array to hold the point cloud's RGB color.

Here is the code:

pc.map_to(color_frame)
points = pc.calculate(filtered_depth)
point_arr = np.asanyarray(points.get_vertices())
point_arr = point_arr.reshape(len(point_arr), 1)

I know that get_texture_coordinates() will give me the texture coordinates, but that is not the real RGB.
How can I find this?
     
     
  • MartyG

If you want the color of an RGB pixel and not its position coordinate, there are few resources that I could find about this, and the ones I did find were in C++ rather than Python.

    https://github.com/IntelRealSense/librealsense/issues/3364

     

    https://stackoverflow.com/questions/61921324/pointer-exception-while-getting-rgb-values-from-video-frame-intel-realsense
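That said, a hedged numpy sketch of the same idea in Python: scale each point's normalized texture coordinate to a pixel index in the color image, read the RGB value there, and stack positions and colors into one array (points and color_frame as in your snippet above):

verts = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)
tex = np.asanyarray(points.get_texture_coordinates()).view(np.float32).reshape(-1, 2)
color_image = np.asanyarray(color_frame.get_data())
h, w, _ = color_image.shape
# u/v are normalized to [0, 1]; convert to pixel indices and clamp to image bounds
u = np.clip((tex[:, 0] * w).astype(int), 0, w - 1)
v = np.clip((tex[:, 1] * h).astype(int), 0, h - 1)
rgb = color_image[v, u]
# One (N, 6) array holding x, y, z, r, g, b per point
xyzrgb = np.hstack((verts, rgb.astype(np.float32)))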

