Background not removed as seen in SDK

Comments

29 comments

  • MartyG

    Hi Vyas K. Processes such as post-processing and alignment are not stored in a bag-file recording. A bag stores uncompressed, unfiltered camera data; changes can then be made to that data in real time after the bag is loaded into memory.

    When using the RealSense Viewer's 3D point cloud mode, there is the option to export a ply file directly instead of using rs-convert to convert a bag file to ply.

  • Vyas K

    But what I am trying to get is all the frames from the bag file. As I am recording at 15 fps, a 2-second bag file gives me 30 frames. When I use the Viewer, I cannot store all 30 frames; exporting them manually for every second is very difficult.

  • Vyas K

    So I need to get, as ply frames, exactly what I see in the SDK Viewer, for every frame in the bag at the full frame rate. For a 1 min 40 sec bag file, that is 1500 frames. I can only get those using rs-convert; however, that output includes all of the background rather than just the part that I see in the SDK. Is there a way I can work around this?

  • MartyG

    I researched your question carefully. The best solution that I could find for eliminating the background from a bag recording was to use the Threshold Filter in the Post-Processing section of the Viewer's side panel to limit the maximum depth sensing range, and then record a bag. The background excluded from the live image was then also excluded in the bag.

  • Vyas K

    Yes, I did that. The bag does not show any background when I view it in the Viewer. The problem comes when I convert the bag to frames (ply): that is when the background appears in the frames that I extract using rs-convert, which I have to use because I need all the frames at the full frame rate.

  • MartyG

    I would speculate that the recorded Threshold filter setting is applied when the bag is loaded into the Viewer, but that the stored raw data still represents the full unfiltered distance (because a bag contains unfiltered data) and so appears with the background when exported as a ply.

    Could you provide some detail, please, about why your project requires ply files? If having them in ply format is not vital, you could keep the background excluded by capturing the stream as an ordinary video file such as an MP4.
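
    If ply output does turn out to be essential, though, one possible workaround is to replay the bag in a pyrealsense2 script, re-apply the Threshold filter in software and export each filtered frame to ply yourself. Below is a rough, untested sketch of that idea; the file names and distance values are placeholders to tune for your scene, and it assumes the bag contains both depth and color streams.

        import pyrealsense2 as rs

        pipeline = rs.pipeline()
        config = rs.config()
        config.enable_device_from_file("recording.bag", repeat_playback=False)
        profile = pipeline.start(config)

        # Process every recorded frame rather than dropping frames
        # to keep pace with real time.
        profile.get_device().as_playback().set_real_time(False)

        # Same idea as the Viewer's Threshold Filter: min and max distance in metres.
        threshold = rs.threshold_filter(0.15, 1.0)
        pc = rs.pointcloud()

        i = 0
        try:
            while True:
                frames = pipeline.wait_for_frames()
                depth = threshold.process(frames.get_depth_frame())
                points = pc.calculate(depth)
                points.export_to_ply("frame_%04d.ply" % i, frames.get_color_frame())
                i += 1
        except RuntimeError:
            # wait_for_frames() raises once playback reaches the end of the bag
            pass
        finally:
            pipeline.stop()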

  • Vyas K

    OK, so I am trying to recreate a rigid 3D body from a bag file converted to frames. I capture a 360-degree view of the body and then 3D-register the individual frames to reconstruct the whole body; that is why I require plys. Is there a way I could show you what I am talking about?

  • MartyG

    It is okay, I understand what you are describing.  Thank you very much.  :)

    If you cannot eliminate the background data when scanning the body, would it be practical to hide the real background, perhaps by putting white sheets around the capture area like a photography tent?

  • Vyas K

    Well, we did that as well; however, it doesn't eliminate the background, it just turns it white. I will show you what we did in an attached photo: it still presents a white background that I have to eliminate. I have attached both 3D and 2D photos here for your reference from after I tried putting up the white sheet, and I have also attached what I see in RealSense while recording the bag file.

  • MartyG

    If you are aiming to create a solid 3D model then LIPScan 3D may meet your needs as it is compatible with 400 Series cameras.

    https://m.youtube.com/watch?v=N14Pi6z-MkE 

    If it must be real 3D point cloud data though, it may be worth trying a black sheet instead of white.  Solid black absorbs light, so the camera cannot read depth from such surfaces.

  • Vyas K

    I can try the software and the black sheet, but part of the project is to create the registration algorithm, which I have written on my own in Python. Once I have removed the background, the registration works very well with my own Python algorithm. However, the initial problem of removing everything except the subject is what is taking up most of the time.

  • MartyG

    You could use OpenCV with Python and remove every color but the skin color of the dummy. Google "opencv python color removal" for examples, such as this one:

    https://medium.com/better-programming/automating-white-or-any-other-color-background-removal-with-python-and-opencv-4be4addb6c99 
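
    As a very rough sketch of the color-masking idea (untested, and the HSV bounds below are guesses that would need tuning for your dummy and your lighting):

        import cv2
        import numpy as np

        # Load a color frame exported from the bag; the file name is a placeholder.
        img = cv2.imread("frame_0000.png")
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

        # Approximate HSV range for a light skin tone -- tune for your subject.
        lower = np.array([0, 40, 60])
        upper = np.array([25, 180, 255])

        mask = cv2.inRange(hsv, lower, upper)          # 255 where the color matches
        result = cv2.bitwise_and(img, img, mask=mask)  # keep only matching pixels

        cv2.imwrite("foreground_only.png", result)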

    The problem with that method is that you would probably then need a skin color detection algorithm if it was going to be used in the real world for subjects other than this particular dummy.

  • Vyas K

    That won't work, because the dummy will have clothes on, so a skin-color approach would fail. Isn't there any way I could do this within the Intel RealSense SDK?

    Also, LIPScan 3D is not even starting on my Windows PC.

  • MartyG

    For advanced point cloud and mesh functions, the RealSense SDK can be interfaced with specialist mesh manipulation libraries such as Point Cloud Library (PCL) so that their functions can be accessed from RealSense SDK applications.

    https://github.com/IntelRealSense/librealsense/tree/master/wrappers/pcl 

  • MartyG

    As you are using Python, another excellent option for point-cloud and mesh processing that can be used with RealSense 400 Series cameras is pyntcloud:

    https://github.com/daavoo/pyntcloud 
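
    Loading one of your exported ply frames with it takes only a couple of lines (a minimal sketch; the file name is a placeholder):

        from pyntcloud import PyntCloud

        # Points are exposed as a pandas DataFrame with x, y, z columns
        # (plus color columns, if the ply contains them).
        cloud = PyntCloud.from_file("frame_0000.ply")
        print(cloud.points.head())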

  • Vyas K

    I will try working with black sheets; the challenge is finding them on my campus. I am also trying to get LIPScan working, but its Windows app does not seem to run. My intention is to get just the 3D subject without any background, so that it is easy for my Python algorithm to stitch. I have previously tried using pyntcloud and PCL, but they have their own limitations when working with a series of ply files together. Thank you for your assistance; let me know if you can help me solve this with the RealSense SDK. What I tried is, instead of recording a bag file, taking single ply files directly from the RealSense SDK, and the plys I took had no background. The mess happens when I record a bag and then convert it into frames, and I do not know how I can overcome that.

  • MartyG

    A way to get a 360 degree point cloud with a single camera, if you can only take static snapshots and not record the data continuously, is to take several ply captures (front, back, left side, right side) and stitch the separate clouds together into a single merged cloud. This can be done with a maths process called an affine transform: basically, you rotate and move the point clouds in 3D space, and once you have done that, you append the point clouds together so that you have one large point cloud.
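
    As a minimal numpy sketch of that rotate-move-append step (the rotation angle and offset below are purely illustrative, and the random arrays stand in for clouds loaded from your ply files):

        import numpy as np

        def rigid_transform(points, R, t):
            # Apply rotation matrix R and translation vector t to an Nx3 cloud.
            return points @ R.T + t

        # Example: turn the 'back' capture 180 degrees about the vertical (y) axis.
        theta = np.pi
        R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
                      [ 0.0,           1.0, 0.0          ],
                      [-np.sin(theta), 0.0, np.cos(theta)]])
        t = np.array([0.0, 0.0, 2.0])  # shift so the clouds line up; tune per setup

        front = np.random.rand(1000, 3)
        back  = np.random.rand(1000, 3)

        merged = np.vstack([front, rigid_transform(back, R, t)])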

    There was a case where performing such a point cloud merge in real time with more than one RealSense camera in the Unity engine was discussed in detail.

    https://github.com/IntelRealSense/librealsense/issues/6147 

    Unity was also used for post-processing when Intel demonstrated at the Sundance festival in January 2018 how several D435 cameras could capture humans by overlapping the cameras in a 180 degree field of view and then sending the data from each camera to a dedicated computer for final processing.

    https://www.intelrealsense.com/intel-realsense-volumetric-capture/ 

    Eight cameras can capture a 360 degree field of view, though it has been done with six. The more cameras that are used, the fewer blind spots there are in the data.

    Aside from stitching together several ply files or using multiple cameras, a third method is to walk around the observed object with a single camera and continuously build up live data until you are satisfied with the results and stop recording.  

    https://youtube.com/watch?feature=youtu.be&v=41Yu2H9_z3w  

  • Vyas K

    You are correct in all that you wrote. I am a research student, and part of my project's purpose is to work with just one camera and solve the problem of stitching multiple point clouds as cheaply as I can (with just one D435). I have successfully stitched 16 point clouds taken 360 degrees around the subject: I put the subject on a turntable, record a video of the whole rotation, convert it into frames, take every alternate frame (as my laptop cannot handle that much processing) and run my algorithm on them. The problems I am facing are:

    1) The depth gap between the hand and the body when my dummy stands is not visible; all I get is a flat surface where I should be getting a hole or a different depth.

    2) The background, which is not visible in the Viewer, suddenly appears when I convert the bag to frames.

    3) I have to use MeshLab to remove the background from each frame, which adds to my processing time.

    4) There is some reflection on the surface, so my stitching is not accurate.

    5) In the lidar video you provided, what software is that? I know the camera is not a D400 Series, but can I use that software to scan a rigid body instead of a scene, and if so, can I use it with my D435?

  • MartyG

    1.  Using depth to color alignment can help in situations where the camera is confused about near and far depth, as it helps the camera to differentiate between the foreground and background.

    2.  Discussed earlier.

    3.  Aside from minimum and maximum depth range, it is also possible to crop x and y dimensions by defining a bounding box.   This allows all depth data outside of the box to be excluded from the image.

    4.  You can significantly reduce the negative effects of glare from reflections by adding a physical filter called a linear polarizer over the camera lenses.  Section 4.4 of Intel's white-paper document about optical filters provides a lot of detail about this.

    https://dev.intelrealsense.com/docs/optical-filters-for-intel-realsense-depth-cameras-d400#section-4-the-use-of-optical-filters 

    Intel's paper on depth map improvements for drones also provides good advice about enhancing results.

    https://dev.intelrealsense.com/docs/depth-map-improvements-for-stereo-based-depth-cameras-on-drones 

    5.  The walk-around scanning software is DotProduct Dot3D Pro.  Though it was being demonstrated as working on the new L515 model in that particular video, it also works with the 400 Series cameras.   There is also a non-pro version of the software called Dot3D Scan.  

    The feature list under the Scan and Pro columns in the link below should provide guidance about whether it can accomplish what you have in mind.

    https://www.dotproduct3d.com/dot3dscan.html 

  • Vyas K

    1) How can I implement depth-to-color alignment using the SDK?

    2) How can I implement a bounding box? Shouldn't a bounding box also cover the z direction?

    3) How do I implement a linear polarizer using the SDK?

  • MartyG

    1.  Intel have a Python tutorial called distance_to_object in the link below for performing depth to color alignment; see also the first sketch after this list.

    https://github.com/IntelRealSense/librealsense/blob/jupyter/notebooks/distance_to_object.ipynb 

    2.  Bounding boxes can be both 2D (xy) and 3D (xyz). Creating a bounding box that also excludes data outside of the box is not straightforward, though. The link below outlines a method that a RealSense user applied for defining an xyz box; see also the second sketch after this list.

    https://github.com/IntelRealSense/librealsense/issues/2016#issuecomment-403804157 

    3.  Linear polarizers are a physical filter, like putting a film of transparent plastic over the top of the camera lenses. So there is no SDK programming involved: you just put the filter over the lenses, and the effect that the filter provides occurs automatically.
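
    On point 1, the core of the alignment is only a few lines of pyrealsense2. A minimal, untested sketch:

        import pyrealsense2 as rs

        pipeline = rs.pipeline()
        pipeline.start()

        align = rs.align(rs.stream.color)  # map depth onto the color viewpoint

        frames = pipeline.wait_for_frames()
        aligned = align.process(frames)
        depth = aligned.get_depth_frame()  # now pixel-aligned with the color frame
        color = aligned.get_color_frame()

        pipeline.stop()

    On point 2, once a cloud is loaded as an Nx3 array (for example, read from one of your ply files), an axis-aligned xyz crop is straightforward; the box bounds below are placeholders:

        import numpy as np

        def crop_box(points, lower, upper):
            # Keep only the points inside an axis-aligned xyz bounding box.
            inside = np.all((points >= lower) & (points <= upper), axis=1)
            return points[inside]

        cloud = np.random.rand(5000, 3) * 4.0 - 2.0  # stand-in for a loaded cloud
        subject = crop_box(cloud,
                           lower=np.array([-0.5, -1.0, 0.3]),  # metres; illustrative
                           upper=np.array([ 0.5,  1.0, 1.2]))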

  • Vyas K

    Hey, I am trying the Dynamic Calibration tool to get my calibration file so that I can use Dot3D Pro for scanning, but I am not able to get through calibration; it says error 9998. I saw one of the previous posts and, as you suggested there, I upgraded my gold standards using:

    Intel.Realsense.CustomRW -g

    but it still gives me the same error.

  • MartyG

    There was a recent case of this error that was solved by uninstalling the RealSense UWP driver 6.11.160.21 (which has problems at the time of writing this).  So please uninstall that driver if you have installed it.  It is not required to use RealSense on Windows.

    https://github.com/IntelRealSense/librealsense/issues/7496 

  • Vyas K

    Yes, I saw that post, and no, I do not have that driver.

  • MartyG

    Do the instructions for Dot3D Pro say that you need a calibration file, please?

  • Vyas K

    No, the instructions do not say that, but I have followed everything they said and it still says the camera is not connected.

  • MartyG

    Please check out these pages for RealSense connection problems with Dot3D Scan / Pro:

    https://desk.zoho.com/portal/dotproduct/en/kb/articles/rsconnectionissues 

    https://desk.zoho.com/portal/dotproduct/en/kb/articles/realsenseconnections 

  • Vyas K

    Hey MartyG, I figured out the problem with both Dot3D and LIPScan 3D. For your reference: you need to connect the D435 with a high-speed USB-C to USB-A cable (USB 3.0 or higher), and that should solve it for both programs. Thank you for all your help; you are awesome. Can I know your full name? I would love to acknowledge you in my thesis!

  • MartyG

    Thank you very much - great to hear that you succeeded. There is no need to acknowledge me, but if you wish to do so, I am Marty Grover.

