How to find the actual size of an object's dimensions from an RGB image acquired using an Intel RealSense camera?
Given the known camera parameters of the D435, how can I find the actual size of an object's dimensions from an image captured with the Intel RealSense camera, assuming that I am able to mark/detect the object by processing the RGB image? Please refer me to the required theory for this.
I hope I can get help with this.
Regards,
-
If you are seeking to measure the complete dimensions of the object (its 3-dimensional volume), the RealSense SDK has an example program in the Python language called box_dimensioner_multicam.
Though it can make use of multiple cameras, I believe that it should be able to measure with a single camera, as it bases its calculations on the number of cameras that are attached.
The company LIPS Corp has an interesting alternative approach to the box measuring problem with their recent LIPSMetric Smart Parcel Kiosk product, where the box is measured by an overhead RealSense camera and OpenVINO software.
-
Thank you very much, Sir, for the quick response.
I have seen all the links and noted the information. I am using a single D435i camera, and I need to measure the length and breadth of an object in the RGB image.
The LIPS Corp application is quite interesting.
Yes, my problem is not volume; it is length and breadth from the edge-detected image of a box object in the RGB image.
With known camera parameters, may I please know how this is possible?
-
If you are using image files (not live camera streams), the link below discusses aligning an RGB image file and a depth file and then calculating the measurements.
If you are capturing from live camera data, the link below is an example program for using color and depth streams to measure between two points.
https://dev.intelrealsense.com/docs/rs-measure
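For reference, a minimal Python sketch of the same idea as rs-measure, assuming pyrealsense2 is installed and a camera is attached; the pixel coordinates below are placeholders for two points that you have already detected in the RGB image:

# Minimal sketch: measure the metric distance between two pixels using
# color-aligned depth data (assumes pyrealsense2 and a connected camera).
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
pipeline.start(config)

# Align depth to the color stream so RGB pixel coordinates can be looked
# up directly in the depth frame.
align = rs.align(rs.stream.color)

try:
    frames = align.process(pipeline.wait_for_frames())
    depth_frame = frames.get_depth_frame()
    color_frame = frames.get_color_frame()
    intrinsics = color_frame.profile.as_video_stream_profile().intrinsics

    # Hypothetical pixel coordinates of the two ends of an edge detected
    # in the RGB image.
    px1, py1 = 400, 360
    px2, py2 = 880, 360

    # Depth in meters at each pixel (0.0 means no depth data there).
    d1 = depth_frame.get_distance(px1, py1)
    d2 = depth_frame.get_distance(px2, py2)

    # Deproject each pixel to a 3D point and take the Euclidean distance.
    p1 = rs.rs2_deproject_pixel_to_point(intrinsics, [px1, py1], d1)
    p2 = rs.rs2_deproject_pixel_to_point(intrinsics, [px2, py2], d2)
    print("Distance between points: %.3f m" % np.linalg.norm(np.array(p1) - np.array(p2)))
finally:
    pipeline.stop()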
If you can only use RGB, the OpenCV computer vision library will likely be the best approach. I recommend googling 'opencv measure length rgb'.
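As an illustration of that RGB-only route, a common OpenCV technique is the "pixels-per-metric" approach: include a reference object of known width in the same plane as the box and convert pixel measurements to real units. A rough sketch, where the image filename and the 50 mm reference width are assumptions, and the camera is assumed to look straight down at the plane:

# Rough sketch of a "pixels-per-metric" measurement with OpenCV. The
# left-most contour is assumed to be a reference object of known width
# lying in the same plane as the box being measured.
import cv2

REFERENCE_WIDTH_MM = 50.0                # assumed width of the reference object
image = cv2.imread("box_top_view.jpg")   # hypothetical input image

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(cv2.GaussianBlur(gray, (7, 7), 0), 50, 150)
# OpenCV 4.x returns (contours, hierarchy).
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Sort contours left to right; treat the first as the reference object.
contours = sorted(contours, key=lambda c: cv2.boundingRect(c)[0])
ref_w, ref_h = cv2.minAreaRect(contours[0])[1]
pixels_per_mm = max(ref_w, ref_h) / REFERENCE_WIDTH_MM

for c in contours[1:]:
    w, h = cv2.minAreaRect(c)[1]     # rotated-rectangle width and height in pixels
    if w * h < 1000:                 # skip small noise contours
        continue
    print("length: %.1f mm  breadth: %.1f mm"
          % (max(w, h) / pixels_per_mm, min(w, h) / pixels_per_mm))

Note that without depth data this only works when the measurement plane and camera distance are fixed, which is why the depth-based approaches above are generally more robust.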
-
Hi MartyG I really appreciate your comments. They are really helpful!
I have a question:
I am currently building an object dimensioning system like the demo shown here (librealsense/samplesetupandoutput.jpg at 53 · IntelRealSense/librealsense (github.com)). May I know how I can increase the accuracy, because my first few test runs were quite inaccurate? Should I increase the number of cameras? Should I tweak the parameters, and if yes, which ones and by how much? Also, should I consider third-party software like LIPS, and if yes, what are some of the procedures I can follow?
Best,
Jun Kaih
-
Hi Bluekksailor The box_dimensioner_multicam Python example program already applies the High Accuracy camera configuration preset. If you have increased the size of the chessboard from its default 6x9 size because you are measuring a large object then you should edit the values in the script to reflect this board size change.
If the object that you are measuring is dark grey / black or it has a reflective surface then that can negatively impact the camera's ability to read depth detail from the surface that is being observed by the camera.
If the object is dark (for example, a black office chair) then projecting a strong light-source onto it can help to bring out depth detail. For reflective objects, coating the object in a fine spray-on powder such as foot powder or baby powder, or using a professional reflection-dampening 3D scanning spray, can help to improve the scan.
Using multiple RealSense 400 Series cameras with their fields of view overlapped can improve the quality of depth data, as there will be fewer blind spots in the observed scene, and redundancy in the depth data due to more than one camera observing the same area.
If each of the cameras is projecting an IR dot pattern onto the scene from its built-in projector, then long-range observation can benefit from the overlap of the increased number of dots in the scene, as described in the '3. Use Multiple Projectors' section of Intel's white-paper document about projectors, linked below.
https://dev.intelrealsense.com/docs/projectors#section-4-increasing-range
There are commercial box dimensioning software packages available such as the previously mentioned LIPS one and Intel's own Dimensional Weight Software for RealSense L515. Dimensioning is achievable with a self-created application if you prefer that option though.
Creating a solid 3D model and then analyzing it with CAD is another option that has previously been discussed on this forum.
https://support.intelrealsense.com/hc/en-us/community/posts/360038418393/comments/360010005694
-
Hi MartyG I altered the number of squares from 6 to 7 for the width and 9 to 10 for the height, and the program keeps telling me that the chessboard is not detected.
This is the part of the code:
if not transformation_result_kabsch[device][0]:
    print("Place the chessboard on the plane where the object needs to be detected..")
else:
    calibrated_device_count += 1

Thanks!
-
There is a past case in which this error has occurred with box_dimensioner_multicam.
https://github.com/IntelRealSense/librealsense/issues/2529
Has the square_size changed in your custom-sized chessboard? By default in the script the square size is set to 0.0253 meters (2.53 cm) for the 6 x 9 square arrangement.
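For reference, the board-related constants in box_dimensioner_multicam_demo.py look like this with their default values; all three would need to be edited together so that they describe the printed board actually being used:

chessboard_width = 6      # squares
chessboard_height = 9     # squares
square_size = 0.0253      # meters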
-
Hi MartyG I updated the size of the chessboard (7x10) according to what was printed. I also measured the size of each square (2.6 cm) and updated the script accordingly. With no changes to the square size, only to the number of squares, the software was still unable to detect the chessboard.
I am not getting this error:
File "box_dimensioner_multicam_demo.py", line 141, in <module> run_demo() File "box_dimensioner_multicam_demo.py", line 75, in run_demo if not transformation_result_kabsch[device][0]: KeyError: '821212060318'
Instead, the script is just telling me that it has not detected the chessboard by printing this statement "Place the chessboard on the plane where the object needs to be detected." over and over again.
Appreciate your help.
-
1. Given that your updated chessboard was measured as having a square size of 2.6 cm, did you set the square size in the script to the meters value of 0.026?
2. Have you confirmed that the chessboard is detectable again if the script is set back to its original values of 6 width, 9 height and 0.0253 square size?
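If the board is still not found after that, it may help to confirm outside of the demo that OpenCV can detect the printed board at all at your working distance, and to double-check whether the width/height values are being interpreted as inner corners rather than squares (OpenCV's findChessboardCorners expects the count of inner corners, so a board with 7 x 10 squares corresponds to a (6, 9) pattern). A small standalone check, where the image filename is a placeholder:

# Standalone check that OpenCV can detect the printed chessboard in a
# saved color frame. findChessboardCorners expects the number of INNER
# corners, not squares.
import cv2

image = cv2.imread("board_snapshot.png")   # hypothetical saved color frame
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

pattern_size = (6, 9)                      # inner corners (columns, rows)
found, corners = cv2.findChessboardCorners(
    gray, pattern_size,
    cv2.CALIB_CB_ADAPTIVE_THRESH + cv2.CALIB_CB_NORMALIZE_IMAGE)

print("Chessboard detected:", found)
if found:
    cv2.drawChessboardCorners(image, pattern_size, corners, found)
    cv2.imwrite("board_detected.png", image)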
-
Hi MartyG,
I am facing the same issues regarding calibration board detection using the RealSense D435 depth camera and will appreciate your help.
We want to detect larger objects than in the example. We are using the [python example](https://github.com/IntelRealSense/librealsense/tree/master/wrappers/python/examples/box_dimensioner_multicam) with one camera, and we increased the size of the calibration board to:
# Define some constants
resolution_width = 1280 # pixels
resolution_height = 720 # pixels
frame_rate = 15 # fps
dispose_frames_for_stablisation = 50 # frames
chessboard_width = 4 # squares
chessboard_height = 4 # squares
square_size = 0.1500 # meters

We have a board with 10x7 squares:
Using the 10x7 config, the calibration board is not detected:

python3 box_dimensioner_multicam_demo.py
1 devices have been found
Place the chessboard on the plane where the object needs to be detected..
Place the chessboard on the plane where the object needs to be detected..
Place the chessboard on the plane where the object needs to be detected..

And if we reduce the board size to 4x4, it is detected, but it is not working as expected:
Do you think that increasing the number of cameras could resolve this? Or do we need to increase the distance between the camera and the board? We were also testing this Python example without changing the parameters, and we had these problems when increasing the distance too:
Thanks
-
Hi Luciano Fernandez I would typically expect the green bounding box generated around boxes by the box_dimensioner_multicam example to align with the contours of the box instead of being at a 45 degree angle from the box's shape, as shown in the images below.
Although box_dimensioner_multicam can function with a single camera, all of the images above were generated by a two-camera setup. I could not provide a guarantee that it would produce the desired results if another camera was added, but if you were planning on obtaining a second camera anyway then it would certainly be worth testing.
-
Thanks for answering MartyG. Yes, the camera works as expected at a short distance from the target:
But when we moved the board a meter away, it started to fail.
-
Given the close proximity of the chessboard and box in the earlier successful images, I wonder whether box_dimensioner_multicam was designed with short-range analysis in mind.
If you are moving the board twice as far away from the camera and the squares are appearing smaller in size as a result at the increased distance, perhaps you could try halving or doubling the square_size value in the script and see if either change improves the detection results.
-
It is possible to use a RealSense camera to track an object of a particular color, like in the project in the link below.
https://by-the-w3i.github.io/2019/10/06/ColorBlockTracking/
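A minimal OpenCV sketch of that color-based approach, applied to a saved color frame; the HSV range below is a placeholder and has to be tuned for the actual object color:

# Minimal sketch: isolate an object of a particular color with an HSV
# mask and report its bounding box in pixels.
import cv2
import numpy as np

frame = cv2.imread("color_frame.png")      # hypothetical saved color frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

lower = np.array([100, 120, 70])           # example range for a blue object
upper = np.array([130, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)
    print("Object bounding box (pixels):", x, y, w, h)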
Alternatively, a plane can be established from 3D points using a plane-fit algorithm.
https://support.intelrealsense.com/hc/en-us/community/posts/360050894154-plane-detection
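A least-squares plane fit is one common way to do that; a short NumPy sketch, where points would be an N x 3 array of 3D points deprojected from the depth frame (for example with rs2_deproject_pixel_to_point):

# Least-squares plane fit over an N x 3 array of 3D points. Returns a
# point on the plane (the centroid) and the plane's unit normal vector.
import numpy as np

def fit_plane(points):
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value of the
    # centered points is the least-squares plane normal.
    _, _, vh = np.linalg.svd(points - centroid)
    normal = vh[-1]
    return centroid, normal / np.linalg.norm(normal)

# Example with synthetic points lying roughly on the plane z = 0.5 m.
pts = np.random.rand(200, 3)
pts[:, 2] = 0.5 + 0.001 * np.random.randn(200)
centroid, normal = fit_plane(pts)
print("Plane point:", centroid, "normal:", normal)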
-
Hey All,
I'm facing the same issues described here and on other posts.
Some examples of the issues are:
- Low precision, or the board is identified as a box, when the camera distance is greater or less than the "sweet spot" (around 1 m).
- Updating the chessboard to have more rows/columns, but it seems the Python OpenCV "findChessboardCorners" function is not recognizing it very well.
- Also tried a 9x6 board printed on A0 paper, with a square size of 100 mm (10 cm). Not very successful. (Used https://github.com/opencv/opencv/blob/master/doc/pattern_tools/gen_pattern.py)
I'm trying to get the volume of larger objects using two cameras. I'd like to test a setup where the cameras are 2 meters high and 2 meters apart from each other (so essentially a 2 x 2 m setup).
Has anyone had success with this?
-
How did you overcome the issues?
Are you able to accurately measure objects at a larger distance (with one or more cameras)?
Using one camera is also an option, if I can get it to work at a bigger distance to measure large objects/boxes.