I get a large error when trying to convert a pixel point to a 3D point with the D435.



  • MartyG

    Hi Sangngvt  Usually when using depth and color with rs2_deproject_pixel_to_point() and get_distance(), depth-to-color alignment should be performed first and deprojection applied afterwards.  An example of such a script can be found at the link below.


    Depth to color alignment is especially important with the D435 camera model because its depth and RGB sensors have different field of view (FOV) sizes, with the RGB sensor having a smaller FOV than the depth sensor.  When depth to color alignment is used, the depth FOV is resized to match the size of the color FOV.


    When aligning depth to color, the color intrinsics should be used instead of the depth intrinsics, otherwise the XY values will be inaccurate. If depth intrinsics are used with depth-to-color alignment then the XY coordinates will be accurate at the center of the image but will have progressively increasing measurement error towards the outer regions of the image, the further a coordinate is from the center.


    Also bear in mind that different resolutions have different intrinsic values, so please confirm that you have the correct intrinsic values for 640x480 resolution.
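    For reference, deprojection itself is just the inverse pinhole projection. A minimal sketch of what rs2_deproject_pixel_to_point() computes for an undistorted (or already-rectified) stream, using made-up placeholder intrinsic values, is:

```python
# Inverse pinhole projection: recover a 3D point (in the same units as
# `depth`) from a pixel coordinate and the stream's intrinsics.
# The intrinsic values used below (fx = fy = 615.0, ppx = 320.0,
# ppy = 240.0) are placeholders for illustration; query the real ones
# from the aligned color stream profile.
def deproject_pixel_to_point(u, v, depth, fx, fy, ppx, ppy):
    x = (u - ppx) / fx * depth
    y = (v - ppy) / fy * depth
    return (x, y, depth)

# At the principal point the ray points straight ahead, so X = Y = 0.
print(deproject_pixel_to_point(320, 240, 1.0, 615.0, 615.0, 320.0, 240.0))
```

    This also shows why using the wrong intrinsics hurts most at the image edges: the error in X and Y scales with (u - ppx) and (v - ppy), so it grows as you move away from the center.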


  • Sangngvt

    Thank you very much for the useful information. I tried it and found that the Z error is good and the Y error is only about 0.6 mm. For X, however, after trying some sample points I noticed that the error gradually increases from 1 cm to 3.5 cm when moving from near the center of the color image towards the left edge; on the right half of the image the X value matches reality quite well.

    Another problem is that after I use this method, the color image is black in some places and a bit difficult to see. Is there any way to make the color image look like a normal image again?

  • Sangngvt

    And this is my code after fixing:

    import pyrealsense2 as rs
    import numpy as np
    import math
    import cv2

    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
    config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
    pipeline.start(config)

    align_to = rs.stream.depth
    align = rs.align(align_to)

    target_x = 320
    target_y = 240

    try:
        while True:
            # This call waits until a new coherent set of frames is available on a device
            frames = pipeline.wait_for_frames()

            # Aligning color frame to depth frame
            aligned_frames = align.process(frames)
            depth_frame = aligned_frames.get_depth_frame()
            aligned_color_frame = aligned_frames.get_color_frame()

            if not depth_frame or not aligned_color_frame:
                continue

            color_intrin = aligned_color_frame.profile.as_video_stream_profile().intrinsics
            color_image = np.asanyarray(aligned_color_frame.get_data())

            # Draw two perpendicular lines
            cv2.line(color_image, (target_x, 0), (target_x, 480), (0, 0, 255), 2)  # Vertical line
            cv2.line(color_image, (0, target_y), (640, target_y), (0, 0, 255), 2)  # Horizontal line

            # Display the image with the changes
            cv2.imshow("Color Image", color_image)

            # Blur the image to reduce noise
            blurred_image = cv2.GaussianBlur(color_image, (25, 25), 0)

            # Convert the image to grayscale for better processing performance
            gray_image = cv2.cvtColor(blurred_image, cv2.COLOR_BGR2GRAY)

            # Find circles in the image (parameter values may need tuning for your scene)
            circles = cv2.HoughCircles(gray_image, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                                       param1=100, param2=30, minRadius=10, maxRadius=100)

            if circles is not None:
                circles = np.uint16(np.around(circles))

                for circle in circles[0, :]:
                    # Get the coordinates of the circle's center and its radius
                    x, y, radius = int(circle[0]), int(circle[1]), int(circle[2])

                    depth_width = depth_frame.get_width()
                    depth_height = depth_frame.get_height()

                    if not (0 <= x < depth_width and 0 <= y < depth_height):
                        print("Coordinates (x, y) are outside the valid range of the depth image.")
                        continue

                    depth_at_center = depth_frame.get_distance(x, y)
                    depth_at_center_cm = depth_at_center * 100  # Convert to centimeters

                    # Display depth on the color image
                    depth_text = "     {:.2f} cm".format(depth_at_center_cm)
                    cv2.putText(color_image, depth_text, (x, y), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)

                    # Depth at the center of the circle
                    depth_value = depth_at_center_cm

                    # Calculate 3D coordinates (in centimeters, since depth_value is in cm)
                    dx, dy, dz = rs.rs2_deproject_pixel_to_point(color_intrin, [x, y], depth_value)

                    distance = math.sqrt((dx ** 2) + (dy ** 2) + (dz ** 2))

                    # Print the manually calculated 3D coordinates of the circle center
                    print("Manually calculated 3D coordinates of the circle center (X, Y, Z):", (dx, dy, dz))

                    # Draw the circle and center on the color image
                    cv2.circle(color_image, (x, y), radius, (0, 255, 0), 2)
                    cv2.circle(color_image, (x, y), 2, (0, 0, 255), 3)

                    # Display the resulting image
                    cv2.imshow('Detected Circles with Depth', color_image)

            # Wait for the 'q' key to exit the loop
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
    finally:
        # Close the connection and clean up
        pipeline.stop()
        cv2.destroyAllWindows()

  • Sangngvt

    Hey, I'm leaving a comment here in case your notification gets lost.

  • MartyG

    Are you capturing only the first frame?  If you are then the RGB might be dark because the auto-exposure needs several frames to settle down after the pipeline is started.  Skipping the first several frames before taking a capture can help to avoid this.
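    A minimal sketch of that warm-up, assuming a started pyrealsense2 pipeline (the frame count of 30 is an arbitrary choice, not an official figure):

```python
def skip_warmup_frames(pipeline, n=30):
    """Discard the first n frames so auto-exposure can settle.

    `pipeline` is expected to be a started rs.pipeline() (or anything
    else exposing a wait_for_frames() method). The default of 30 frames
    (about one second at 30 FPS) is a rough guess; adjust as needed.
    """
    for _ in range(n):
        pipeline.wait_for_frames()

# The real capture and processing loop would start after this call.
```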

  • Sangngvt

    Hmm, it looks like this is not my situation. The RGB image is not dark, but it has some long black streaks appearing in it, and this affects my Hough circle function. How can I make the RGB look normal again? The second thing is that when I align, the image size becomes smaller. How can I make it bigger?

    Here is a video showing the long black streaks:

    As you can see, the size of the image is smaller, and the black areas make it more difficult to find circles.

    And I want to get back to normal like this picture:



  • MartyG

    The black streaks are not on the RGB image.  They are on the depth image, and are appearing because performing depth to color alignment is combining the depth and color images together.
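    To illustrate with a toy numpy sketch (not RealSense code): when color is mapped onto the depth image, any pixel with no valid depth reading has no color to map, so it is left black:

```python
import numpy as np

# Toy 4x4 "aligned" image: a solid gray color image, and a depth map
# where 0 means "no valid depth reading" (e.g. occlusion or shadow).
color = np.full((4, 4, 3), 200, dtype=np.uint8)
depth = np.array([[500, 500,   0, 500],
                  [500,   0,   0, 500],
                  [500, 500, 500, 500],
                  [500, 500, 500,   0]], dtype=np.uint16)

# Color-to-depth alignment only keeps color where depth is valid;
# everywhere else the aligned image stays at zero (black).
aligned = np.where((depth > 0)[..., None], color, 0).astype(np.uint8)

print((aligned == 0).all(axis=2).sum())  # number of black pixels
```

    Those zero-depth holes are what show up as the black streaks in the aligned view.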


    I note in your script above that you are aligning color to depth instead of the usual 'depth to color' alignment, because you are using the instruction align_to = rs.stream.depth.  Whilst it is not wrong to align color to depth instead of depth to color, using color intrinsics with this instruction could make the aligned image inaccurate.


    Please try instead to align depth to color using the instruction below.  Then the color intrinsics that you are using will be correct.

    align_to = rs.stream.color

