
How can I find the actual dimensions of an object from an RGB image acquired using an Intel RealSense camera?

Comments

24 comments

  • MartyG

    If you are seeking to measure the complete dimensions of the object (its 3-dimensional volume), the RealSense SDK has an example program in the Python language called box_dimensioner_multicam.

    https://github.com/IntelRealSense/librealsense/tree/master/wrappers/python/examples/box_dimensioner_multicam 

    Though it can make use of multiple cameras, I believe that it should be able to measure with a single camera, as it bases its calculations on the number of cameras that are attached.

    The company LIPS Corp have an interesting alternative approach to the box measuring problem with their recent LIPSMetric Smart Parcel Kiosk product, where the box is measured by an overhead RealSense camera and OpenVINO software.

    https://www.youtube.com/watch?v=zZKTh5DC6Vk 

    https://www.lips-hci.com/lipsmetric 

  • Renuka Devi Sm

    Thank you very much, sir, for the quick response.

    I have seen all the links and noted the information. I am using a single D445i camera, and I need to measure the length and breadth of an object in the RGB image.

    The LIPS Corp application is quite interesting.

    Yes, my problem is not one of volume; it is finding the length and breadth of the box object from the edge image in the RGB image.

    With known camera parameters, may I please know how this is possible?

     

  • MartyG

    If you are using image files (not live camera streams), the link below discusses aligning an RGB image file and a depth file and then calculating the measurements.

    https://forums.intel.com/s/question/0D50P0000490TLGSA2/measuring-length-of-object-using-depth-and-rgb-frames 

    If you are capturing from live camera data, the link below is an example program for using color and depth streams to measure between two points.

    https://dev.intelrealsense.com/docs/rs-measure 
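    As a rough Python sketch of the idea behind rs-measure: deproject two pixels to 3D points using the camera intrinsics and the depth at each pixel, then take the Euclidean distance between them. The intrinsics below are made-up illustrative values (a real application reads them from the camera's stream profile), and this simplified version ignores lens distortion, which the SDK's rs2_deproject_pixel_to_point handles.

    ```python
    import math

    def deproject(pixel, depth_m, fx, fy, ppx, ppy):
        # Simplified pinhole deprojection (no lens distortion) -- the SDK's
        # rs2_deproject_pixel_to_point does this plus distortion handling.
        x = (pixel[0] - ppx) / fx * depth_m
        y = (pixel[1] - ppy) / fy * depth_m
        return (x, y, depth_m)

    def distance_3d(p1, p2):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

    # Made-up intrinsics for a 640x480 stream; real values come from the
    # camera's calibration at runtime.
    fx, fy, ppx, ppy = 615.0, 615.0, 320.0, 240.0

    # Two pixels on opposite edges of the object, with the depth read at each
    left = deproject((200, 240), 0.80, fx, fy, ppx, ppy)
    right = deproject((440, 240), 0.80, fx, fy, ppx, ppy)

    print("Edge-to-edge distance: %.3f m" % distance_3d(left, right))
    # prints: Edge-to-edge distance: 0.312 m
    ```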

    If you can only use RGB, the OpenCV computer vision library will likely be the best approach.  I recommend googling 'opencv measure length rgb'.
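    The usual OpenCV tutorials for this rely on a reference object of known real-world size lying in the same plane as the target: the reference gives a pixels-per-metric scale that converts any pixel measurement at that distance. A minimal sketch of the scale arithmetic (the pixel widths below are hypothetical values that would normally come from OpenCV contour detection):

    ```python
    def pixels_per_metric(ref_width_px, ref_width_cm):
        # Scale factor from a reference object of known real-world size that
        # lies in the same plane (same distance from the camera) as the target.
        return ref_width_px / ref_width_cm

    # Hypothetical numbers: a credit card (8.56 cm wide) spans 214 px
    scale = pixels_per_metric(214.0, 8.56)   # ~25 px per cm

    # The box edge spans 450 px in the same image, at the same distance
    box_length_cm = 450.0 / scale
    print("Box length: %.1f cm" % box_length_cm)
    # prints: Box length: 18.0 cm
    ```

    The scale only holds at the reference object's distance, which is why this RGB-only approach needs the reference and target in the same plane.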

  • Bluekksailor

    Hi MartyG, I really appreciate your comments. They are really helpful!

    I have a question:

    I am currently building an object dimensioning system like the demo shown here (librealsense/samplesetupandoutput.jpg at 53 · IntelRealSense/librealsense (github.com)). May I know how I can increase the accuracy, because my first few test runs were quite inaccurate? Should I increase the number of cameras? Should I tweak the parameters and, if so, which ones and by how much? Also, should I consider third-party software like LIPS and, if so, what are some of the procedures I can follow?

     

    Best,

    Jun Kaih

  • Bluekksailor

    This issue persists. It somehow measures the surroundings as well. How can I optimise this?

  • MartyG

    Hi Bluekksailor.  The box_dimensioner_multicam Python example program already applies the High Accuracy camera configuration preset.  If you have increased the size of the chessboard from its default 6x9 size because you are measuring a large object, then you should edit the values in the script to reflect the board size change.

    https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/box_dimensioner_multicam/box_dimensioner_multicam_demo.py#L35
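    For reference, these are the default values of the board constants near the top of that script, as a Python fragment (edit them to match your printed board):

    ```python
    # Default board constants in box_dimensioner_multicam_demo.py
    # (edit these to match your printed board)
    chessboard_width = 6       # chessboard pattern width
    chessboard_height = 9      # chessboard pattern height
    square_size = 0.0253       # size of one square, in meters
    ```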

    If the object that you are measuring is dark grey / black or it has a reflective surface then that can negatively impact the camera's ability to read depth detail from the surface that is being observed by the camera. 

    If the object is dark (for example, a black office chair) then projecting a strong light-source onto it can help to bring out depth detail.  For reflective objects, coating the object in a fine spray-on powder such as foot powder or baby powder, or using a professional reflection-dampening 3D scanning spray, can help to improve the scan.

    Using multiple RealSense 400 Series cameras with overlapping fields of view can improve the quality of depth data, as there will be fewer blind spots in the observed scene and there will be redundancy in the depth data, since more than one camera observes the same area. 

    If each of the cameras is projecting an IR dot pattern onto the scene from its built-in projector, then long-range observation can benefit from the overlap of the increased number of dots in the scene, as described in section '3. Use Multiple Projectors' of Intel's white-paper document about projectors, linked below.

    https://dev.intelrealsense.com/docs/projectors#section-4-increasing-range

    There are commercial box dimensioning software packages available such as the previously mentioned LIPS one and Intel's own Dimensional Weight Software for RealSense L515.  Dimensioning is achievable with a self-created application if you prefer that option though.

    Creating a solid 3D model and then analyzing it with CAD is another option that has previously been discussed on this forum.

    https://support.intelrealsense.com/hc/en-us/community/posts/360038418393/comments/360010005694

  • Bluekksailor

    Hi MartyG, I altered the number of squares from 6 to 7 for the width and from 9 to 10 for the height, and the program keeps telling me that the chessboard is not detected.

    This is the part of the code:

    if not transformation_result_kabsch[device][0]:
        print("Place the chessboard on the plane where the object needs to be detected..")
    else:
        calibrated_device_count += 1
     
    Thanks!
  • MartyG

    There is a past case in which this error has occurred with box_dimensioner_multicam.

    https://github.com/IntelRealSense/librealsense/issues/2529

    Has the square_size changed in your custom-sized chessboard?  By default in the script the square size is set to 0.0253 meters (2.53 cm) for the 6 x 9 square arrangement.  

  • Bluekksailor

    Hi MartyG, I updated the size of the chessboard (7x10) according to what was printed. I also measured the size of each square (2.6 cm) and updated the script accordingly. With no changes to the square size but only to the number of squares, the software was unable to detect the chessboard.

    I am not getting this error:

    File "box_dimensioner_multicam_demo.py", line 141, in <module>
        run_demo()
    File "box_dimensioner_multicam_demo.py", line 75, in run_demo
        if not transformation_result_kabsch[device][0]:
    KeyError: '821212060318'

     

    Instead, the script is just telling me that it has not detected the chessboard by printing this statement "Place the chessboard on the plane where the object needs to be detected." over and over again.

     

    Appreciate your help. 

  • MartyG

    Does the chessboard remain undetected if you change the position of the camera, such as moving it further away from the chessboard?

  • Bluekksailor

    Yes it does. What are some workarounds we could try?

     

  • MartyG

    1.  Given that your updated chessboard was measured as having a square size of 2.6 cm, did you set the square size in the script to the meters value of 0.026?

    https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/box_dimensioner_multicam/box_dimensioner_multicam_demo.py#L37

    2.  Have you confirmed that the chessboard is detectable again if the script is set back to its original values of 6 width, 9 height and 0.0253 square size?
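    3.  One further possibility worth ruling out: OpenCV's findChessboardCorners (which, as mentioned later in this thread, is what the example relies on for board detection) takes the pattern size as the number of inner corners per row and column, not the number of printed squares. If the script's width/height values are passed straight through as that pattern size (worth verifying in the example's calibration helper), then a board with 7 x 10 printed squares would still need the default 6 x 9 values. A small helper to illustrate the conversion (the helper name is my own, not part of the example):

    ```python
    def squares_to_pattern(squares_wide, squares_high):
        # cv2.findChessboardCorners expects the number of INNER corners per
        # row and column, which is one less than the number of squares.
        return (squares_wide - 1, squares_high - 1)

    # A board with 7 x 10 printed squares corresponds to a (6, 9) pattern --
    # which matches the script's default width/height values.
    print(squares_to_pattern(7, 10))
    # prints: (6, 9)
    ```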

  • Luciano Fernandez

    Hi MartyG

    I am facing the same issues regarding calibration-board detection using a RealSense D435 depth camera and would appreciate your help.

    We want to detect larger objects than in the example. We are using the [Python example](https://github.com/IntelRealSense/librealsense/tree/master/wrappers/python/examples/box_dimensioner_multicam) with one camera, and we increased the size of the calibration board to:

    # Define some constants
    resolution_width = 1280 # pixels
    resolution_height = 720 # pixels
    frame_rate = 15 # fps
    dispose_frames_for_stablisation = 50 # frames

    chessboard_width = 4 # squares
    chessboard_height = 4 # squares
    square_size = 0.1500 # meters

    We have a board with 10x7 squares:


    Using the 10x7 config, the calibration board is not detected:

    python3 box_dimensioner_multicam_demo.py
    1 devices have been found
    Place the chessboard on the plane where the object needs to be detected..
    Place the chessboard on the plane where the object needs to be detected..
    Place the chessboard on the plane where the object needs to be detected..

    And if we reduce the board size to 4x4, it is detected but it is not working as expected:

    Do you think that increasing the number of cameras could resolve this, or do we need to increase the distance between the camera and the board? We were also testing this Python example without changing the parameters, and we have these problems when increasing the distance too:

    Thanks

  • Luciano Fernandez

    I forgot to say that once the board is detected, the Python example logs a calibration error:

    RMS error for calibration with device number 938422071534 is : 0.009729377400675792 m
    Calibration completed...
    Place the box in the field of view of the devices...

  • MartyG

    Hi Luciano Fernandez.  I would typically expect the green bounding box generated by the box_dimensioner_multicam example to align with the contours of the box instead of being at a 45-degree angle to the box's shape, as shown in the images below.

     

     

     

    Although box_dimensioner_multicam can function with a single camera, all of the images above were generated by a two-camera setup.  I could not provide a guarantee that it would produce the desired results if another camera was added, but if you were planning on obtaining a second camera anyway then it would certainly be worth testing.

  • Luciano Fernandez

    Thanks for answering, MartyG. Yes, the camera works as expected at a short distance from the target:


    But when we moved the board a meter away, it started to fail.

  • MartyG

    Given the close proximity of the chessboard and box in the earlier successful images, I wonder whether box_dimensioner_multicam was designed with short-range analysis in mind.

    If you are moving the board twice as far from the camera and the squares appear smaller as a result at the increased distance, perhaps you could try halving or doubling the square_size value in the script to see if either change improves the detection results.

  • Luciano Fernandez

    I'm getting the same results doubling or halving the square_size:

  • MartyG

    Given that the image looks good, is it the measured values shown in green text that are failing?

  • Somusundram R

    What if I don't want to use the checkerboard and just want to do the box detection on a single-color plane (say, a white tile)? Is there any example that can give a clue on how to do so?

  • MartyG

    It is possible to use a RealSense camera to track an object of a particular color, like in the project in the link below.

    https://by-the-w3i.github.io/2019/10/06/ColorBlockTracking/

    Alternatively, a plane can be established from 3D points using a plane-fit algorithm.

    https://support.intelrealsense.com/hc/en-us/community/posts/360050894154-plane-detection
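    As a rough illustration of the plane-fit idea (my own NumPy sketch, not the SDK's implementation): subtract the centroid of the 3D points, and the right singular vector with the smallest singular value is the direction of least variance, i.e. the plane normal.

    ```python
    import numpy as np

    def fit_plane(points):
        # Least-squares plane through an (N, 3) array of 3D points.
        # After centering, the last row of vh (smallest singular value)
        # is the direction of least variance -- the plane normal.
        centroid = points.mean(axis=0)
        _, _, vh = np.linalg.svd(points - centroid)
        return centroid, vh[-1]

    # Synthetic stand-in for camera data: points on a tabletop plane at
    # z = 0.5 m, with a little depth noise
    rng = np.random.default_rng(0)
    xy = rng.uniform(-1.0, 1.0, size=(200, 2))
    z = 0.5 + rng.normal(0.0, 0.001, size=200)
    points = np.column_stack([xy, z])

    centroid, normal = fit_plane(points)
    print(normal)   # approximately (0, 0, +/-1) for this flat scene
    ```

    Once the white tile's plane is established, points lying significantly above it can be segmented out as the box.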

  • Raivil

    Hey All,

    I'm facing the same issues described here and on other posts.

    Some examples of the issues are:

    • Low precision, or the board being identified as a box, when the camera distance is greater or shorter than the "sweet spot" (around 1 m).
    • Updating the chessboard to have more rows/columns, but it seems the Python OpenCV findChessboardCorners function is not recognizing it very well.
    • I also tried a 9x6 board printed on A0 paper, with a square size of 100 mm (10 cm). Not very successful. (I used https://github.com/opencv/opencv/blob/master/doc/pattern_tools/gen_pattern.py)

    I'm trying to get the volume of larger objects using two cameras. I'd like to test a setup where the cameras are 2 meters high and 2 meters apart from each other (so essentially a 2x2 m setup).

    Has anyone had success with it?

     

  • Luciano Fernandez

    Yes, we have already tried with two cameras pointing from opposite sides.

    It doesn't increase the precision; it may be related to the algorithm.

  • Raivil

    Luciano Fernandez

    How did you overcome the issues?

    Are you able to accurately measure objects at a larger distance (with one or more cameras)?

    Using one camera is also an option, if I can get it to work at a bigger distance to measure large objects/boxes.

