
Gets the depth of the pixel

Comments

13 comments

  • Alexandra Ciuriuc

    Hi Carlsonito,

     

    Thank you for your interest in the Intel RealSense cameras.

    You can adjust the settings in order to improve the depth accuracy. You may find more information here:

    https://www.intel.com/content/dam/support/us/en/documents/emerging-technologies/intel-realsense-technology/BKMs_Tuning_RealSense_D4xx_Cam.pdf

    Applying post-processing filters may also help.

     

    Let me know if this helps.

     

    Regards,

    Alexandra

     

     

  • Jerry

    Hey Carlsonito - can I ask how you were able to get such a defined depth image? I'm using the D415 with the Unity wrapper for the SDK, and my output looks much muddier.

    I would like to be able to pick out objects like yours more clearly. Also, how were you able to use getDistance? I haven't found that anywhere in the Unity RealSense SDK 2.0.

    Any help would be much appreciated!

    -Jerry

  • Carlsonito

    Hey Jerry, what you see is not a depth image; it's a binary image. I convert my target to white pixels and then calculate the corresponding depth from the pixel positions of the white area (see the sketch below).
    About using getDistance, you can look at this: https://github.com/IntelRealSense/librealsense/wiki/API-How-To#get-video-stream-intrinsics
    I'm sorry, I'm not using Unity, so I don't know much about it.


    I hope this helps.
    -carlsonito
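    A minimal sketch of the kind of lookup described above, assuming pyrealsense2 and OpenCV are available, that the default pipeline streams depth and color at the same resolution (otherwise align the streams first), and that the threshold value is a placeholder:

        import pyrealsense2 as rs
        import numpy as np
        import cv2

        pipeline = rs.pipeline()
        pipeline.start()                                  # default configuration (placeholder setup)

        frames = pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame()
        color_frame = frames.get_color_frame()

        # Build a binary image of the target (threshold 200 is a placeholder;
        # adjust the color conversion if your stream format differs).
        gray = cv2.cvtColor(np.asanyarray(color_frame.get_data()), cv2.COLOR_RGB2GRAY)
        _, binary = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)

        # For every white pixel, read the depth at the same (x, y) position,
        # skipping pixels that return 0 (no depth data).
        ys, xs = np.nonzero(binary)
        distances = [depth_frame.get_distance(int(x), int(y)) for x, y in zip(xs, ys)]
        distances = [d for d in distances if d > 0]
        print("mean distance of target (m):", np.mean(distances))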

  • Alexandra Ciuriuc

    Hi Carlsonito,

     

    Did you manage to solve your issue?

    If not, did you take a look at the paper I sent you? 

    https://www.intel.com/content/dam/support/us/en/documents/emerging-technologies/intel-realsense-technology/BKMs_Tuning_RealSense_D4xx_Cam.pdf

    If you want us to reproduce your issue, please send us your code.

     

    Regards,

    Alexandra

  • Carlsonito

    Hi Alexandra,

    I'm sorry for replying to you so late. My problem has been solved, and your answer was a great help to me. Thank you for your help.

     

    Regards,

    Carlsonito

     

  • Sheethal Ng

    Hi,

    I am trying to get depth data using the Python wrapper; below is a code snippet that does this:

    import pyrealsense2 as rs
    import numpy as np

    pipeline = rs.pipeline()
    profile = pipeline.start()                       # start streaming with the default configuration
    sensor = profile.get_device().first_depth_sensor()
    sensor.set_option(rs.option.emitter_enabled, 0)  # turn the IR emitter off
    scale = sensor.get_depth_scale()                 # factor to convert raw units to meters
    frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()
    depth_data = np.asanyarray(depth_frame.get_data())  # 2D uint16 array of raw depth values

     

    I multiplied every pixel in depth_data by the depth scale, assuming the result is the distance from each pixel to the camera (a sketch of that conversion is shown below).

    The results after multiplying seem incorrect: some values are around 60, which surely cannot be meters.

     

    Can you please help me with this?
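    A minimal sketch of the conversion step, assuming depth_data and scale come from the snippet above; note that a raw value of 0 means "no depth measurement" and should be excluded before interpreting the results:

        import numpy as np

        depth_m = depth_data.astype(np.float32) * scale   # raw 16-bit units -> meters
        valid = depth_m[depth_data > 0]                    # raw 0 means no depth measurement
        if valid.size:
            print("valid pixels:", valid.size, "min (m):", valid.min(), "max (m):", valid.max())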

     

     

     

  • MartyG

    Hi Sheethal Ng. It is possible to get distance readings of up to 65 meters when calculating distance as the 16-bit depth value multiplied by the default 0.001 (meters) depth scale. That is a theoretical figure though, called the expressive depth range; in practice, the depth is limited by factors such as physics and how far the depth sensor components in the camera can see (up to about 10 meters on the 400 Series camera models).

    I hope that the link below will be a useful reference.

    https://github.com/IntelRealSense/librealsense/issues/6702 
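    For illustration, the 65-meter figure falls straight out of that arithmetic (a minimal sketch, assuming the default depth scale):

        max_raw = 2 ** 16 - 1         # largest value a 16-bit depth pixel can hold (65535)
        depth_scale = 0.001           # default depth unit in meters
        print(max_raw * depth_scale)  # 65.535 m expressive range, far beyond the ~10 m practical limit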

     

    If you would prefer to get distance readings in meters for objects that are actually within the depth sensing range, Intel has a tutorial for using Python to calculate an observed object's distance by aligning the depth frame with the RGB color frame using data from a pre-recorded bag file.

    https://github.com/IntelRealSense/librealsense/blob/jupyter/notebooks/distance_to_object.ipynb 
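    A minimal sketch of the align-then-measure idea from that tutorial, using a live pipeline instead of a pre-recorded bag file; the pixel coordinates are placeholders:

        import pyrealsense2 as rs

        pipeline = rs.pipeline()
        pipeline.start()

        align = rs.align(rs.stream.color)   # map depth pixels onto the color image
        frames = align.process(pipeline.wait_for_frames())
        depth_frame = frames.get_depth_frame()

        x, y = 640, 360                     # placeholder: a pixel of interest in the color image
        print("distance at (%d, %d): %.3f m" % (x, y, depth_frame.get_distance(x, y)))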

  • Sheethal Ng

    Suppose I dump the depth data for one frame using the above approach; how do I verify that the dumped data is correct?

  • MartyG

    Apologies for the delay in responding further. I could find very little useful information on this question of how to validate array values, unfortunately. If you plan to dump the array values to a CSV file, the link below may provide some guidance about the maths of CSV checks though.

    https://stackoverflow.com/questions/15406337/efficient-way-to-validate-array-values 
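    If it helps, a minimal sketch of dumping one depth frame to a CSV with NumPy and checking the round trip; the file name and the depth_data variable are assumptions based on the earlier snippet:

        import numpy as np

        # depth_data is the 2D uint16 array from the earlier snippet
        np.savetxt("depth_frame.csv", depth_data, fmt="%d", delimiter=",")

        # Re-load and confirm the round trip preserved every value
        reloaded = np.loadtxt("depth_frame.csv", dtype=np.uint16, delimiter=",")
        assert np.array_equal(reloaded, depth_data)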

     

     

     

  • Sheethal Ng

    My question is: after dumping the data, how do I know that the distance of each pixel is correct?

  • MartyG

    That question is outside of my Python programming knowledge, but the link below describes the method that a pyrealsense2 user applied to check pixel values.

    https://github.com/IntelRealSense/librealsense/issues/5878#issuecomment-588267233 
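    One common spot-check (a sketch, not an official validation method) is to compare the reported distance at a known pixel, such as the image centre, against a tape-measure reading of the same point:

        import pyrealsense2 as rs

        pipeline = rs.pipeline()
        pipeline.start()
        frames = pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame()

        # Query the centre pixel and compare with a physically measured distance
        cx = depth_frame.get_width() // 2
        cy = depth_frame.get_height() // 2
        print("centre distance (m):", depth_frame.get_distance(cx, cy))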

     

    If the above link does not help to answer your question, I recommend creating a case at the RealSense GitHub by visiting the link below and clicking on the New Issue button.  You should be able to find some expert pyrealsense2 programmers there.

    https://github.com/IntelRealSense/librealsense/issues 

  • Sheethal Ng

     

    The image attached above (an IR capture, not reproduced here) was taken with the following configuration:

    1. Resolution (1280x720)

    2. FPS 6

    3. IR emitter off

    4. Format y8

     

    Below is the result in the format (x, y, distance):

    4 273 11.035000524134375
    5 273 10.918000518577173
    6 273 10.918000518577173
    7 273 10.805000513209961
    8 273 10.477000497630797
    9 273 10.270000487798825
    10 273 9.97400047373958
    11 273 9.694000460440293
    12 273 9.516000451985747
    13 273 9.430000447900966
    14 273 9.261000439873897
    15 273 9.09900043217931
    16 273 9.09900043217931
    17 273 9.02000042842701
    18 273 9.09900043217931

    The camera was pointed at a flat wall from a distance of 30 cm (0.3 meters), so why am I getting much longer distances?

  • MartyG

    The camera will have difficulty reading depth detail from a flat, textureless or low-texture surface such as a wall. If the IR emitter is enabled, the semi-random dot pattern projected onto the wall gives the camera a texture source, which makes it much easier to analyse the wall for depth detail.
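    Since the snippet earlier in the thread explicitly disables the emitter, a minimal sketch of turning it back on, assuming the same sensor handle as before:

        import pyrealsense2 as rs

        pipeline = rs.pipeline()
        profile = pipeline.start()
        sensor = profile.get_device().first_depth_sensor()
        sensor.set_option(rs.option.emitter_enabled, 1)   # 1 = projector on, so the dot pattern adds texture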

