At present, the D435i camera is connected directly to the server, and object information is recognized with the YOLO algorithm.
However, I now plan to capture the depth and color streams of the RealSense D435i on a Raspberry Pi, send them to the server over Wi-Fi or a 5G network, and then run YOLO recognition on the server.
The camera can already be deployed on the Raspberry Pi to collect data, but I don't know how to transmit the depth and color frames over the network while guaranteeing their integrity. Can you provide some examples or tutorials?
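To make the question concrete, here is one possible approach I have been considering: serialize each aligned depth/color frame pair into a length-prefixed packet with a CRC32 checksum, send it over TCP, and have the server verify the checksum before decoding, so corrupted frames can be detected and dropped. This is only a sketch under my own assumptions (depth as uint16 in millimeters, color as uint8 BGR; the function names `pack_frame`/`unpack_frame` are hypothetical), not an official RealSense API:

```python
import struct
import zlib

import numpy as np


def pack_frame(depth: np.ndarray, color: np.ndarray) -> bytes:
    """Serialize one depth (uint16, HxW) + color (uint8, HxWx3) pair.

    Layout: 8-byte header (payload length, CRC32), then for each image
    a 4-byte shape prefix followed by the raw pixel bytes.
    """
    payload = (
        struct.pack("<HH", depth.shape[0], depth.shape[1]) + depth.tobytes()
        + struct.pack("<HH", color.shape[0], color.shape[1]) + color.tobytes()
    )
    crc = zlib.crc32(payload)
    return struct.pack("<II", len(payload), crc) + payload


def unpack_frame(buf: bytes):
    """Verify the checksum and rebuild the depth and color arrays."""
    length, crc = struct.unpack_from("<II", buf, 0)
    payload = buf[8:8 + length]
    if zlib.crc32(payload) != crc:
        raise ValueError("frame corrupted in transit")
    h, w = struct.unpack_from("<HH", payload, 0)
    depth = np.frombuffer(payload, np.uint16, h * w, 4).reshape(h, w)
    off = 4 + h * w * 2  # uint16 depth = 2 bytes per pixel
    ch, cw = struct.unpack_from("<HH", payload, off)
    color = np.frombuffer(payload, np.uint8, ch * cw * 3, off + 4).reshape(ch, cw, 3)
    return depth, color
```

On the Pi side the payload would come from `pyrealsense2` frames (`np.asanyarray(frame.get_data())`) and be written to a TCP socket; TCP already guarantees in-order delivery, so the CRC mainly guards against application-level bugs. I have also read that librealsense ships an `rs-server` tool for streaming a camera over Ethernet to a remote `net_device`, which might be worth checking as an alternative, though I am not sure how well it works over wireless links.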