Intel RealSense for Unity - how to control a game with the given wrapper
How can I control a game using RealSense instead of a keyboard? I'm still confused about how to do that. I see in the scenes inside the SDK for Unity that you can turn the images the camera sees into depth and color textures, but how can that be used to control a game instead of, for example, the arrow keys on the keyboard, given that you don't have built-in support for hand tracking and gestures (that is what I understand, correct me if I'm wrong)? Please give ideas for using the SDK without any external packages unless they are completely free to use.
-
Hi Saadisaadi1. The old 2016 version of the RealSense compatibility plugin for Unity at the link below was best suited for integrating RealSense into games using face, hand and gesture control. It works with the now-retired RealSense SR300 camera model but unfortunately is not compatible with modern RealSense cameras such as the 400 Series, and was not designed for recent Unity versions.
https://github.com/IntelRealSense/librealsense/issues/10114#issuecomment-1002456887
The current version of the Unity compatibility wrapper of the RealSense SDK does not have similar control-input ability.
In regard to external free options, a product called Nuitrack 3D that is compatible with modern RealSense cameras can track the movement of body parts and has Unity integration. It has a free trial with no overall time limit, though each session is limited to 3 minutes.
https://github.com/3DiVi/nuitrack-sdk/blob/master/doc/Unity_Basic.md
-
Thank you MartyG, but what can I do with the Unity wrapper right now? For example, how can I use it to control a game, and what built-in functionality does it currently have?
-
Ok MartyG, that would prevent me from integrating it with my already-made games.
So I want to ask about using the SDK with OpenCV: what can I do with that? I heard that I can use OpenCV with MediaPipe to track hands and recognize the current gestures. Can I also use the RealSense camera to know the distance of the hand from the camera? How can I use it with OpenCV, and what functionality can I get so that I can use this to control a game?
This is important to know because it will dictate whether I'm going to learn the APIs of OpenCV, MediaPipe and your SDK for Python.
-
The Unity Asset Store has an OpenCV interface, though it is not inexpensive.
https://assetstore.unity.com/packages/tools/integration/opencv-for-unity-21088
Alternatively, as Unity uses C# code (and therefore so does the RealSense Unity wrapper), it is possible to interface with the C# wrapper of the RealSense SDK to access its functions from Unity.
-
MartyG, in my last comment I was not talking about Unity; I was talking about OpenCV and MediaPipe in Python. I would make games using Pygame instead. Can you please tell me what functionality I can get? I mean, can I get the distance from the camera of each pixel in the image? For example, this line:
imageRGB = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
This line gives an image of the RGB colors. Does your SDK have functionality that returns something similar but as a depth image? Then, for example, after processing the image above with MediaPipe to get the x and y of a finger, I could get the z from your image using that x and y. That way I would know where the finger is in 3D and could do some controlling based on the depth of the hand and its distance from the camera.
-
The link below has a script that uses the SDK's Python wrapper (pyrealsense2) and MediaPipe to get the distance using the X and Y.
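In the meantime, here is a minimal sketch of that approach (an illustration rather than the linked script itself): MediaPipe finds the index fingertip in the color image and pyrealsense2 reads the depth at that pixel. The 640x480 stream settings are assumptions.

import cv2
import mediapipe as mp
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)
align = rs.align(rs.stream.color)  # map depth pixels onto color pixels
hands = mp.solutions.hands.Hands(max_num_hands=1)

try:
    while True:
        frames = align.process(pipeline.wait_for_frames())
        depth_frame = frames.get_depth_frame()
        color_frame = frames.get_color_frame()
        if not depth_frame or not color_frame:
            continue
        image = np.asanyarray(color_frame.get_data())
        results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            tip = results.multi_hand_landmarks[0].landmark[8]  # index fingertip
            w, h = depth_frame.get_width(), depth_frame.get_height()
            x = max(0, min(int(tip.x * w), w - 1))  # clamp to the frame
            y = max(0, min(int(tip.y * h), h - 1))
            z = depth_frame.get_distance(x, y)  # distance in metres at (x, y)
            print(f"finger at ({x}, {y}), {z:.3f} m from the camera")
finally:
    pipeline.stop()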
-
Thank you MartyG. I will receive the camera in a couple of weeks; I will test the code and get back to you if there is something else to ask. For now, thank you very much.
-
HI MartyG , I have got back now with a realsense camera , I use the camera in python , use mediapipe to check where the hand is hten take the depth and send it to the game in unity using sockets , but also I want to use the same camera in unity to show the player in the game using the first unity example that you have (sending the whole frame from python is a problem because it leads to low fps), but I have a problme that I can't use the camera in two different apps in the same time (python + unity) , is there a solution for that ?
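A minimal sketch of the socket hand-off described here, assuming the Unity side listens for UDP packets on localhost port 5065 (the port and the message format are assumptions, not something from this thread):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_hand(x, y, z):
    # e.g. b"0.512,0.430,0.875", parsed by a listener script on the Unity side
    sock.sendto(f"{x},{y},{z}".encode("utf-8"), ("127.0.0.1", 5065))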
-
Hi Saadisaadi1. The RealSense SDK operates on rules called the Multi-Streaming Model, which dictate that two applications cannot simultaneously access the same stream on the same camera. So if Unity starts the depth stream first, then a Python application launched second cannot also use the depth stream.
The reverse is true too: if the Python application is launched first and uses the depth stream, then Unity cannot use the depth stream.
You can, though, access a different stream on the same camera from two different applications. So, for example, you could display the RGB image in Unity if the Python application was not using RGB.
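A minimal illustration of that split, assuming the Python side only needs the depth stream (leaving RGB free for the Unity application):

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)  # depth only
pipeline.start(config)  # the RGB stream stays available for Unity to open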
-
Hi MartyG, I wanted to try to make a game with only Unity, so I wanted to analyze the texture of the RawImage that your example scene TextureDepthAndTextureColor puts the frame data on. The problem is that when I go over the texture I always get white: it seems from the depth shader that you put the depth data into the red channel, but every pixel of the texture always has a red-channel value of 1. Do you have any example code that analyzes the data of the texture, for example to get the max or min depth? I need an example in regular C# Unity code, not a shader, because I want to use the max or min for controlling the game, not just for rendering the image with a colormap (like you do with the shader).
-
The best way to achieve your goal may be to apply a post-processing filter in the RealSense Unity wrapper, such as the Threshold filter, which lets you define a minimum and maximum distance for rendering depth. The link below provides advice about implementing post-processing filtering in the Unity wrapper.
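For reference, the same Threshold filter also exists in the SDK's Python wrapper, where reading the min and max depth values is straightforward; a minimal sketch (the 0.3 m and 1.0 m limits are example values):

import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()
threshold = rs.threshold_filter(0.3, 1.0)  # keep depth between 0.3 m and 1.0 m

frames = pipeline.wait_for_frames()
depth = threshold.process(frames.get_depth_frame())
data = np.asanyarray(depth.get_data())  # 16-bit depth; pixels outside the range become 0
valid = data[data > 0]
if valid.size:
    print("min:", valid.min(), "max:", valid.max())  # raw units; multiply by the depth scale for metres
pipeline.stop()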