I'm new to the Intel RealSense community and relatively new to C++ development, so please don't hate me :)
For computer vision and point cloud processing in research, I hit a dead end with Python and Open3D in terms of processing speed and state-of-the-art filters — just not fast enough o.O. So I have to get into C++ and libraries like PCL.
Right now I am starting to set up my computer vision environment for point cloud processing, and I was wondering: what is the native / best way to integrate the SDK 2.0 into my own C++ project?
I already compiled the SDK together with the examples, and everything works fine in a standalone manner. But if I want to use the `rs2` functionality in my own code, how do I include it? I cannot find any tutorials on that.
I guess just cloning the whole repository and using CMake to link the library and set the compiler flags will work. But is there more to consider to make it an even more standard "lib-like" package? It feels like the SDK on GitHub is more a showcase of the functionality, and not something I would usually include in a project.
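For reference, this is roughly the kind of CMake setup I have in mind — a minimal sketch, assuming the SDK was installed system-wide (e.g. `sudo make install` after building it, or via the prebuilt packages); the project and target names are placeholders:

```cmake
cmake_minimum_required(VERSION 3.10)
project(my_pointcloud_app)

set(CMAKE_CXX_STANDARD 11)

# Locates the installed librealsense2 via its exported CMake config.
find_package(realsense2 REQUIRED)

add_executable(my_app main.cpp)

# Link against the SDK library found above.
target_link_libraries(my_app ${realsense2_LIBRARY})
```

That way the SDK is consumed as an installed library rather than by vendoring the whole example tree into the project.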
One example: a lot of the code (mainly GUI) depends on the example.hpp file ...
For now, my main interest is in using the SDK to stream data from the camera, do some processing in the middle, and then use the SDK to visualize it.
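To make the stream → process → visualize idea concrete, this is the kind of loop I mean — a minimal sketch based on my reading of the SDK examples (the decimation filter is just an illustrative processing step; a real pipeline would do more, and the on-screen rendering in the official examples comes from the GLFW helpers in example.hpp, which I've left out here):

```cpp
#include <librealsense2/rs.hpp>
#include <iostream>

int main() try {
    rs2::pipeline pipe;          // manages streaming from the camera
    pipe.start();                // default configuration: depth (+ color)

    rs2::decimation_filter dec;  // example SDK processing block
    rs2::pointcloud pc;          // converts a depth frame into a point cloud

    while (true) {
        rs2::frameset frames = pipe.wait_for_frames();
        rs2::depth_frame depth = frames.get_depth_frame();

        // "Processing in the middle" would go here; for illustration,
        // just downsample the depth frame with the SDK's filter.
        depth = dec.process(depth);

        rs2::points points = pc.calculate(depth);
        std::cout << "Got " << points.size() << " points\n";
    }
} catch (const rs2::error &e) {
    std::cerr << "RealSense error: " << e.what() << std::endl;
    return 1;
}
```

This obviously needs a connected camera to run; I'm mainly asking whether consuming the SDK like this, from my own project, is the intended way.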
Maybe you have a better recommendation on the way to go. :D