
D455 Gets the point cloud

Comments (11)

  • MartyG

    Hello, could you provide an RGB color image of the scene please so that I can check it for possible elements that may be confusing the depth sensing?

  • 1095425988

Okay, here's the RGB image along with the depth image that I saved via the RealSense Viewer.

  • MartyG

    Thanks very much for the RGB image. It looks as though that image and the 2D mode's depth image are okay, but the problem is occurring in 3D mode when depth and RGB are mapped together to produce a point cloud. It appears that the curtains behind the screen are being drawn in front of the screen.

    Does the problem still occur if you disable the Viewer's GLSL settings using the instructions at the link below, please?

    https://github.com/IntelRealSense/librealsense/issues/8110#issuecomment-754705023
  • 1095425988

    I followed the instructions in the link to disable GLSL, but the problem persists.

    The code I'm using is based on the Open3D tutorial example, and the camera intrinsics come from a CSV file saved from the Viewer.

    import open3d as o3d

    # Load the color and depth snapshots exported from the RealSense Viewer
    color_raw = o3d.io.read_image('./test_Color.png')
    depth_raw = o3d.io.read_image('./test_Depth.png')
    rgbd_image = o3d.geometry.RGBDImage.create_from_color_and_depth(
        color_raw, depth_raw)
    # Intrinsics taken from the CSV file exported by the Viewer
    pcd = o3d.geometry.PointCloud.create_from_rgbd_image(
        rgbd_image,
        o3d.camera.PinholeCameraIntrinsic(
            width=848, height=480,
            fx=386.578705, fy=386.578705,
            cx=318.531158, cy=244.414246))
    # Flip the cloud so it is not upside down in the Open3D viewer
    pcd.transform([[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]])
    o3d.visualization.draw_geometries([pcd])
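    One detail worth checking in the script above: Open3D's create_from_color_and_depth defaults to depth_scale=1000.0 (depth stored in millimetres), depth_trunc=3.0 (depth beyond 3 m discarded) and convert_rgb_to_intensity=True (color converted to grayscale), so a depth PNG saved in other units will come out at the wrong scale. Independently of Open3D, the pinhole back-projection it performs can be sanity-checked with a few lines of numpy. This is only a sketch reusing the intrinsics from the script, not Open3D's actual implementation:

    ```python
    import numpy as np

    def depth_to_points(depth_m, fx, fy, cx, cy):
        """Back-project a metric depth map to 3D points with the pinhole model
        (the same geometry a point-cloud conversion applies)."""
        h, w = depth_m.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth_m
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        return np.stack([x, y, z], axis=-1)

    # A flat wall 1 m away should back-project to a plane with z == 1
    # everywhere; if it bows or folds, the intrinsics or depth scale are off.
    depth = np.ones((480, 848), dtype=np.float32)
    pts = depth_to_points(depth, fx=386.578705, fy=386.578705,
                          cx=318.531158, cy=244.414246)
    print(pts.shape)  # (480, 848, 3)
    ```

    If a known flat surface comes out curved or folded in such a check, the intrinsics or the depth scale are the likely culprits rather than the RGB mapping.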
  • MartyG

    So the incorrect point cloud at the top of this case is not from the RealSense Viewer, but was instead created by exporting depth and color PNG images from the Viewer and importing them into Open3D to generate a point cloud?

  • 1095425988

    yes

  • MartyG

    Are the fx, fy, cx and cy values in the script taken from a CSV file exported from the Viewer using the Snapshot button on the depth stream? The values inside the Viewer's CSV file are unique to each individual RealSense camera.

    It may be helpful to look at a RealSense Open3D script by another RealSense user at the link below, which also generates a point cloud from depth and color PNGs.

    https://github.com/IntelRealSense/librealsense/issues/11960#issuecomment-1763474993
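    As a side note on the CSV step: if the intrinsics are copied out of the exported file by hand, a typo in a single value will warp the whole cloud, so it can be safer to parse them directly. The snippet below is only a sketch, and the row labels it searches for (Fx, Fy, PPX, PPY) are an assumption about the export format; adjust them to whatever the actual file contains.

    ```python
    import csv
    import io

    def read_intrinsics(csv_text):
        """Pull fx, fy, ppx, ppy out of a Viewer-style intrinsics CSV.

        The row labels in `wanted` are an assumption about the export
        format; edit them to match the labels in your own file.
        """
        wanted = {"fx", "fy", "ppx", "ppy"}
        values = {}
        for row in csv.reader(io.StringIO(csv_text)):
            if len(row) >= 2 and row[0].strip().lower() in wanted:
                values[row[0].strip().lower()] = float(row[1])
        return values

    # Hypothetical file contents, reusing the values from the script above
    sample = "Fx,386.578705\nFy,386.578705\nPPX,318.531158\nPPY,244.414246\n"
    intrinsics = read_intrinsics(sample)
    ```

    The parsed values can then be passed straight into o3d.camera.PinholeCameraIntrinsic instead of being typed in by hand.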

  • 1095425988

    Yes, my parameters are all taken from the CSV file exported by the Viewer, and I did notice that the values differ between cameras.
    At the moment I'm wondering whether there's something wrong with the way I'm saving the images from the RealSense Viewer. The depth and RGB images I get from pyrealsense2 produce a perfectly fine point cloud, while the ones I save from the Viewer still have the same problem, even though they use the same camera matrix, the same 640x480 camera, and the same code.

    The way I get the depth and RGB images from the Viewer is to pause the stream in 3D mode, go back to 2D, and click the snapshot button on the RGB and depth windows.
    I've pretty much solved my problem; it was a minor query I had. Thank you very much for your help.

  • MartyG

    You are very welcome!

    The way that you are saving the images in the RealSense Viewer is fine. The Viewer applies a range of post-processing filters and depth colorization settings by default, whilst a depth image generated by a Python script will not have these settings unless they have been deliberately programmed into the script. So there can be noticeable differences between a Viewer image and a Python one unless filters and colorization are incorporated into the script to bring the image closer to the Viewer's.
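    To make the raw-versus-processed distinction concrete: a raw depth frame is an array of 16-bit units, and metric depth is obtained by multiplying by the camera's depth scale. The 0.001 m-per-unit value below is the usual D400-series default but is an assumption here; in real code it should be queried from the device (e.g. via the depth sensor's get_depth_scale()) rather than hard-coded:

    ```python
    import numpy as np

    # Assumed depth scale (metres per 16-bit unit); query the device's
    # actual value instead of hard-coding it in real use.
    DEPTH_SCALE = 0.001

    raw = np.array([[0, 500, 2000]], dtype=np.uint16)  # raw depth units
    metres = raw.astype(np.float64) * DEPTH_SCALE      # metric depth map
    valid = raw > 0  # zero means "no depth measured", not zero distance
    ```

    A point cloud built from raw depth therefore needs this scaling (and the zero-value mask) applied before back-projection, whereas the Viewer's display has its own colorization layered on top.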

  • 1095425988

    Thanks, I see. So are the depth and RGB images in the bag files I recorded with the SDK also post-processed in this way, or are they raw?

    And how should the SDK be used to get the raw depth map?

  • MartyG

    The RealSense SDK's bag files store raw data, and they do not store aligned data, only the individual streams.

    As a bag file does not store filtered data, a depth stream stored in it will be the raw depth map.

