RealSense D405 Depth Map Fidelity

Comments

6 comments

  • Kvuong

For a 1:1 comparison, here is the head in the same position, sensed through my code via the API.

    (image attachment omitted)

    And here is my code:

    import pyrealsense2 as rs
    import cv2
    import numpy as np
    import library as lib
    import matplotlib.pyplot as plt
    import os, psutil
    import time

    #--------------------- FILE PATHS ---------------------#
    output_file = '405test.txt' #your txt filename
    if os.path.exists(output_file):
        open(output_file, 'w').close()  #truncate data from any previous run
    calibration_file = 'color_calibration/color_calibration_values.txt'  #forward slashes avoid backslash-escape issues

    #--------------------- DEFINING SETUP PARAMETERS ---------------------#
    color_res = [1280, 720] ; depth_res = [1280, 720] ; fps = 30
    pipe = rs.pipeline() ; cfg = rs.config()
    pc = rs.pointcloud()
    df = rs.decimation_filter()
    colorizer = rs.colorizer()
    cfg.enable_stream(rs.stream.color, color_res[0], color_res[1], rs.format.bgr8, fps)
    cfg.enable_stream(rs.stream.depth, depth_res[0], depth_res[1], rs.format.z16, fps)
    color_path = '405test_color.avi'
    depth_path = '405test_depth.avi'  #must differ from color_path, or the two writers overwrite each other
    colorwriter = cv2.VideoWriter(color_path, cv2.VideoWriter_fourcc(*'XVID'), fps, (color_res[0], color_res[1]), True)
    depthwriter = cv2.VideoWriter(depth_path, cv2.VideoWriter_fourcc(*'XVID'), fps, (depth_res[0], depth_res[1]), True)
    profile = pipe.start(cfg)
    depth_sensor = profile.get_device().first_depth_sensor()

    #Controlling depth sensor options
    if depth_sensor.supports(rs.option.depth_units):
        depth_sensor.set_option(rs.option.depth_units, 0.0001)
    # depth_sensor.set_option(rs.option.laser_power, 360)
    align_to = rs.stream.color
    align = rs.align(align_to)

    #Initializing marker objects
    marker1 = lib.marker()
    marker2 = lib.marker()
    counter = 0
    start_time = time.time()

    while True:
        #Incrementing counter
        counter += 1
        print('counter: ' + str(counter))

        #Aligning depth and color frames
        frame = pipe.wait_for_frames()
        aligned_frames = align.process(frame)
        aligned_depth_frame = aligned_frames.get_depth_frame()
        aligned_color_frame = aligned_frames.get_color_frame()
        if not aligned_depth_frame or not aligned_color_frame:
            continue

        #Converting frames to numpy arrays
        depth_image = np.asanyarray(aligned_depth_frame.get_data())
        color_image = np.asanyarray(aligned_color_frame.get_data())
        depth_cm = np.asanyarray(colorizer.colorize(aligned_depth_frame).get_data())

        #Mapping point cloud to color frame and calculating the depth frame
        pts = pc.calculate(aligned_depth_frame)
        pc.map_to(aligned_color_frame)
        track = []
        track, detected = lib.detect_color_marker(color_image, calibration_file, visualize=True) #Set number to manipulate exposure
        print(detected)
        if detected:
            vtx = pts.get_vertices()
            vertices = np.asanyarray(vtx).view(np.float32).reshape(-1, 3)         #Reshaping for xyz
            length1 = lib.pass_coordinate(track[0][0], track[0][1], depth_res[0]) #Retrieving pixel number to pass to depth matrix
            position1 = vertices[length1]                                         #Retrieving XYZ position1
            print(position1)
            marker1.filter(position1)
            current_time = time.time()
            position1 = np.append(position1, format(current_time - start_time, '.6f'))

            length2 = lib.pass_coordinate(track[1][0], track[1][1], depth_res[0]) #Retrieving pixel number to pass to depth matrix
            position2 = vertices[length2]                                         #Retrieving XYZ position2
            marker2.filter(position2)
            current_time = time.time()
            position2 = np.append(position2, format(current_time - start_time, '.6f'))
            if marker1.state and marker2.state:
                with open(output_file, "a") as f:
                    f.write('A: ' + str(position1) + '    ' + 'B: ' + str(position2) + '\n')
        else:
            print('not detected')

        #Writing out to video
        colorwriter.write(color_image)
        depthwriter.write(depth_cm)
        cv2.circle(color_image, (int(color_res[0]/2), int(color_res[1]/2)), 5, (0, 0, 255), -1)
        cv2.circle(depth_cm, (int(depth_res[0]/2), int(depth_res[1]/2)), 5, (0, 0, 255), -1)

        ##Show stream
        cv2.imshow('rgb', color_image)
        cv2.imshow('depth', depth_cm)
        key = cv2.waitKey(1)    #Press Esc to end the program
        if key == 27:
            break

    colorwriter.release()
    depthwriter.release()
    pipe.stop()
    cv2.destroyAllWindows()
  • MartyG

Hi Kvuong. The RealSense Viewer tool applies a range of image-improving post-processing filters that are enabled by default in the Post Processing section of the Viewer's options side-panel. A script like the one you wrote has no filters enabled by default; they must be deliberately programmed into the script in order to obtain results closer to the depth image output by the Viewer.


    The link below has a simple Python example of using alignment with a Decimation post-processing filter.

    https://github.com/IntelRealSense/librealsense/issues/2356#issuecomment-465766323


    The D405 camera's ideal depth sensing range is between 7 cm (70 mm) and 50 cm (500 mm).  So 175 mm (17.5 cm) is well within the D405's ideal range for a good depth image.
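    For illustration, a decimation filter along the lines of that linked example might be set up as below. This is a minimal sketch: the option value is illustrative, and the frame-loop usage is shown in comments because running it needs a connected camera.

    ```python
    import pyrealsense2 as rs

    # Decimation filter: downsamples the depth frame, which reduces noise
    # at the cost of resolution. filter_magnitude is the downsampling factor.
    decimation = rs.decimation_filter()
    decimation.set_option(rs.option.filter_magnitude, 2)

    # Inside the capture loop (requires a connected camera):
    #   frames = pipe.wait_for_frames()
    #   depth_frame = frames.get_depth_frame()
    #   filtered = decimation.process(depth_frame)  # returns a new, filtered frame
    ```

    Note that the filter object should be created once, outside the loop, and only its .process call belongs inside the loop.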

  • Kvuong

Hi Marty, thank you for the response. I already use a decimation filter, but I didn't include the "decimated = df.process(frame).as_frameset()" line. Can you explain what this does? Are there any other filters that would give a better depth map than this?


    Thanks,

    Kenneth

  • MartyG

If a filter instruction contains .process, it is a command to apply that filter to the frames.


    After the decimation filter, the other filters that I would recommend enabling are the Spatial and Temporal filters.

  • Kvuong

    Can you provide an example implementation for the three filters please?

    Thanks,

    Kenneth

  • MartyG

    The link below has Python code for setting up decimation, spatial and temporal filters.

    https://github.com/IntelRealSense/librealsense/issues/10078#issuecomment-997292916
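    As a rough sketch of the kind of setup that link describes (the option values here are illustrative, not tuned recommendations, and the loop usage is commented out because it needs a live camera):

    ```python
    import pyrealsense2 as rs

    # Create the filters once, before the capture loop
    decimation = rs.decimation_filter()
    spatial = rs.spatial_filter()    # edge-preserving spatial smoothing
    temporal = rs.temporal_filter()  # smoothing using data from previous frames

    # Example tuning of the spatial filter
    spatial.set_option(rs.option.filter_magnitude, 2)
    spatial.set_option(rs.option.filter_smooth_alpha, 0.5)
    spatial.set_option(rs.option.filter_smooth_delta, 20)

    def postprocess(depth_frame):
        """Apply the filters in sequence: decimation -> spatial -> temporal."""
        frame = decimation.process(depth_frame)
        frame = spatial.process(frame)
        frame = temporal.process(frame)
        return frame

    # In the capture loop (requires a connected camera):
    #   aligned_depth_frame = aligned_frames.get_depth_frame()
    #   filtered_depth = postprocess(aligned_depth_frame)
    ```

    One caveat: decimation reduces the frame's resolution, so any code that indexes the depth image or point cloud by pixel position (as the script above does with pass_coordinate) has to use the filtered frame's width and height rather than the configured stream resolution.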

