415 depth sense granularity
I'm projecting onto a surface and using the 415's point cloud to sense objects on it (because the point cloud allows a depth cutoff and the 2D image doesn't - in the Unity wrapper, anyway)
Here's my setup - 3 video boxes taped to a screen with tripods either side for calibration. The 3 video boxes are 1, 2, and 3 widths thick. The one on the furthest left (the white one) is only one box thick. The camera and projector are about 2.4m from the screen, out of shot.
when I set the depth cutoff to 2290 I get this - it sees the two fatter VHS video boxes, but not the other.
when I set the depth cutoff to 2300 - to try to get the third box - I get this - it's picking up so much noise that even if it was picked up, I couldn't separate it from the excess blobbage. If I set it past 2300 then the application freezes.
My final setup will be top-down (projector & camera on the ceiling) with people touching the projected-on surface.
So my issue is: how can I discern any hands if they are that close and the camera can't pick them up?
Also, why is the camera picking up so much excess data in the bottom-left corner?
apologies for starting this post again - the last one was too 2D-focussed anyway.
ok, any help would be max-appreciated - I'm 3 weeks from install :O
-Jerry
-
Apologies for the delay in responding. I've been analyzing and considering the images very carefully to try to form an impression in my mind of what's happening.
It looks as though you are pointing the camera / projector towards the video boxes at a diagonal angle instead of straight-ahead. If this is so, I am guessing that you want to capture the 3D detail of the side of the boxes (i.e. their depth)?
A consequence of a diagonal angle, though, is that one side of the scene will be closer to the camera lens than the other side. So if there is shadow on the wall (which may be especially present if projecting in a dark room), then as you increase the range of the depth cutoff, the side of the room nearest the diagonally pointed camera would come into range first, whilst the further-away side of the room would have less detail due to being beyond the range of the depth cutoff.
So at a diagonal angle, as you extend the observable depth range closer to the wall to try to capture the third box, you may end up smothering the box in shadow picked up from the wall. This might not occur if the camera was pointing straight-on towards the boxes, as the shadow would be more balanced between the left and right sides of the scene instead of dominating on one side.
If you are trying to pick up the width of the boxes with a diagonal angle, I don't think that should be necessary in order for the camera to be able to discern the distance from the camera lens of the boxes from their flat front-on covers.
A way to illuminate the scene whilst providing a surface suitable for projection may be to use an "LED wall", a surface lit by an LED light that provides photographers with shadowless backdrops. LED tents are also available, though these may not be suitable for putting on the top of a table!
-
sorry Marty - i didn't explain that properly - the camera & projector are directly in front of the screen (i took the photo from an angle without thinking)
I'm trying to pick up objects as close to the screen as possible - but this is proving impossible because the 415 can't tell much difference between the thinner box stuck to the screen and the screen itself. If I set the depth cutoff any further forward, the depth image created becomes a giant black mass.
I've attached a diagram to clarify it a bit.
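For what it's worth, a rough back-of-envelope suggests why the thinnest box is borderline. This assumes the commonly quoted ~2% Z-error for the D415 at range, and an approximate VHS box width - both are assumptions, not measurements from my rig:

```python
# Rough numbers only - the 2% figure is the commonly quoted D415
# Z-accuracy spec, and the box thickness is approximate.
distance_mm = 2400        # camera/projector to screen
z_error_pct = 0.02        # ~2% of distance
box_thickness_mm = 25     # one VHS-box width, roughly

depth_noise_mm = distance_mm * z_error_pct  # 48.0 mm
# The expected noise is larger than the feature we're trying to see:
marginal = depth_noise_mm > box_thickness_mm
```

So at 2.4m the expected depth noise is around twice the thickness of the one-box-wide target, which would line up with what I'm seeing.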
-
I went back and reviewed the original case to refresh my memory of the details of your setup and the reasons for it.
You are trying to get the camera to ignore the head and shoulders of the user, viewed from a top down perspective, and are setting a depth cutoff to do so. The surface that you will be projecting onto (a table top) will have no background detail behind it, as the camera cannot see through the table to what is underneath it.
So actually, we do not need to set a maximum depth. What we need to do, I think, is set a *minimum* depth so that the users' head and shoulders (represented by the tripods in the test image) are ignored but the camera is permitted to depth sense as far as it wants.
There is the risk of bad data noise being introduced beyond a certain distance. In theory, if the camera's field of view is totally filled by the table then the risk should be minimized, since the camera could not read any detail beyond the depth of the tabletop.
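As a rough sketch of the minimum-depth idea - plain NumPy over a made-up depth array rather than actual RealSense SDK code, with assumed distances and an assumed threshold:

```python
import numpy as np

# Hypothetical depth frame in millimetres, viewed top-down:
# the table surface sits ~2400 mm from the camera, a hand ~2250 mm,
# and a head/shoulders ~1500 mm.
depth = np.full((4, 4), 2400, dtype=np.uint16)
depth[1, 1] = 2250   # hand hovering just above the table
depth[0, 3] = 1500   # head/shoulders entering the frame

MIN_DEPTH = 2000     # ignore anything nearer than 2 m

# Keep only pixels at or beyond the minimum depth; no maximum cutoff.
mask = depth >= MIN_DEPTH
filtered = np.where(mask, depth, 0)
```

The head/shoulders pixel is zeroed out while the table and the hand both survive, which is the behaviour we are after.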
However, the problem with no maximum depth in this case seems to be that you will pick up a lot of shadow around the boxes if projecting in a dark room. The nature of a 3D object on a surface is that it will cast shadow on the surrounding table area. Also, shadow cast by nearby entities such as the tripod / head and shoulders would be visible on the table, even if the entities that are casting the shadow are not rendered on the image.
Looking carefully at the image, the vertical black areas at the sides of the image correspond to the shape of the shadows cast on the wall by the tripods.
I wonder if instead of projecting onto the table, it might be easier if the table was a laid-flat flatscreen television (not a computer monitor) showing a video image on it. You could then have the lights on in the room, as the camera only then needs to detect the hands from above, which it can do in a normally lit scene. Giant flatscreen TVs are very affordable these days.
It would be something similar to Microsoft's original Surface Table from 2010.
https://www.youtube.com/watch?v=qh9cOlVFItQ
-
hi Marty - when I remove the max distance from the depth cutoff feature it just freezes Unity. This is the same as when I set the max distance to be beyond the wall.
Also - wouldn't it just give me dark areas where it sensed the table that couldn't be separated from the areas where the boxes were - see pic below for what it displayed when it froze.
I'll keep fudging with it in the meantime as the client is very attached to the circular table projection approach.
-
I had another careful think. I wonder if the dark areas on the table would be reduced if a downward-pointing LED spotlight like the ones on some home ceilings was mounted alongside the projector and camera and directed towards the table. If it is a spotlight rather than a bulb light, I would hope that the light will be focused on the table surface and not create noise disruption on the camera image due to the light-source's close proximity to the camera.
-
Hi Marty (and anyone else) - just two questions here.
(1) I'm unable to reduce the noise created in the depth image once the objects are more than 1.5m away from the camera - no lighting setup or surface seems to affect it to any degree.
I've read some posts about calibrating the camera for the best results - but they all seem to focus on making 3D models at less than 1m away. Is there a set of settings I should be using when focussing on depth detail between 2-3m?
(2) I've been pointed by another dev to look at using the IR image generated instead (as it's less noisy than the depth textures) - storing the data when there is nothing on the projected surface, and comparing with when there are objects (hands!) moving over that surface.
The idea was that the IR image would ignore the projector image and only pick up the physical objects - but I've tested this and the IR *does* pick up the projector image. Does this make sense? Or is there some setting in the RealSense Viewer I'm missing?
He suggested I buy an infrared pass filter and stick that over the lens to reduce the effect.
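For reference, my understanding of the store-and-compare approach he described, as a rough sketch - plain NumPy on synthetic 8-bit IR frames, with made-up values and a made-up threshold:

```python
import numpy as np

# Baseline IR frame captured with nothing on the surface,
# and a live frame with a "hand" present (synthetic data).
baseline = np.full((4, 4), 100, dtype=np.uint8)
live = baseline.copy()
live[2, 2] = 180  # brighter patch where a hand reflects IR

DIFF_THRESHOLD = 30

# Absolute difference against the stored baseline, then threshold
# to get a binary mask of new objects on the surface.
diff = np.abs(live.astype(np.int16) - baseline.astype(np.int16))
object_mask = diff > DIFF_THRESHOLD
```

The problem is that if the projector image shows up in the IR frames too, the baseline is no longer static, which is why I'm asking about it.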
-
You could try some of the noise reduction suggestions for the D415 that I just posted on another case.
https://github.com/IntelRealSense/librealsense/issues/4703#issuecomment-523823310
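As an illustration of one of those suggestions - averaging over time - here is a rough stand-in sketch in plain NumPy rather than the SDK's own temporal filter, using synthetic frames with an assumed noise level:

```python
import numpy as np

# Stand-in for a temporal filter: average several depth frames so that
# random per-pixel noise cancels out (synthetic frames, values in mm).
rng = np.random.default_rng(0)
true_depth = 2400.0
frames = true_depth + rng.normal(0, 20, size=(10, 4, 4))  # ~20 mm noise

averaged = frames.mean(axis=0)

# Per-pixel noise should shrink roughly with the square root of the
# number of frames averaged.
noise_before = frames.std()
noise_after = (averaged - true_depth).std()
```

The trade-off is latency: averaging 10 frames at 30fps adds about a third of a second of lag, which may or may not matter for a touch-style interaction.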
Point 2 is a bit outside of my technical knowledge-base, but I'm sure there are members of the Intel team on this forum who could provide a better commentary on it than I could. :)
Applying a micro-thin filter film over the sensor, or using your own custom acrylic / PVC 'cover material' in front of the sensors if you are skilled in such construction, are certainly options for changing the properties of light going into the imager.