D435i Raspberry Pi network transmission
Hello
At present, the D435i camera is connected directly to the server, and object information is recognized with the YOLO algorithm.
However, I plan to capture the depth and color streams of the RealSense D435i with a Raspberry Pi, send them to the server over a wireless or 5G network, and then run the YOLO algorithm for recognition on the server.
The camera can now be deployed on the Raspberry Pi to capture data, but I don't know how to send the depth and color information over the network while preserving its accuracy. Can you provide some examples or tutorials?
-
The RealSense SDK has a networking tool, described at the link below, that can be used with a Raspberry Pi. It is primarily designed for ethernet cabling but can be adapted for wi-fi communication.
The tool was removed in SDK version 2.54.1, as a new networking system is planned to replace it in the next SDK release after 2.54.1. So an SDK version earlier than 2.54, such as 2.53.1, will need to be used if you wish to experiment with the tool.
Data sent from the Pi / camera to a central host computer can be accessed on the host through the RealSense Viewer tool or with program scripting.
-
Below is an example Python program, net_viewer, for this networking tool.
*******************
## License: Apache 2.0. See LICENSE file in root directory.
## Copyright(c) 2021 Intel Corporation. All Rights Reserved.
###############################################
##              Network viewer               ##
###############################################

import sys
import numpy as np
import cv2
import pyrealsense2 as rs
import pyrealsense2_net as rsnet

if len(sys.argv) == 1:
    print('syntax: python net_viewer <server-ip-address>')
    sys.exit(1)

ip = sys.argv[1]
ctx = rs.context()
print('Connecting to ' + ip)
dev = rsnet.net_device(ip)
print('Connected')
print('Using device 0,', dev.get_info(rs.camera_info.name),
      ' Serial number: ', dev.get_info(rs.camera_info.serial_number))
dev.add_to(ctx)
pipeline = rs.pipeline(ctx)

# Start streaming
print('Start streaming, press ESC to quit...')
pipeline.start()

try:
    while True:
        # Wait for a coherent pair of frames: depth and color
        frames = pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame()
        color_frame = frames.get_color_frame()
        if not depth_frame or not color_frame:
            continue

        # Convert images to numpy arrays
        depth_image = np.asanyarray(depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())

        # Apply colormap on depth image (image must be converted to 8-bit per pixel first)
        depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)

        depth_colormap_dim = depth_colormap.shape
        color_colormap_dim = color_image.shape

        # If depth and color resolutions are different, resize color image to match depth image for display
        if depth_colormap_dim != color_colormap_dim:
            resized_color_image = cv2.resize(color_image, dsize=(depth_colormap_dim[1], depth_colormap_dim[0]),
                                             interpolation=cv2.INTER_AREA)
            images = np.hstack((resized_color_image, depth_colormap))
        else:
            images = np.hstack((color_image, depth_colormap))

        # Show images
        cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE)
        cv2.imshow('RealSense', images)
        k = cv2.waitKey(1) & 0xFF
        if k == 27:  # Escape
            cv2.destroyAllWindows()
            break
finally:
    # Stop streaming
    pipeline.stop()
    print("Finished")
-
Thanks for your guidance
I deployed according to the plan. I successfully compiled SDK version 2.53 on the Raspberry Pi, and by running the rs-server tool I was able to receive the camera stream through realsense-viewer on my PC.
I have now also compiled SDK 2.53 on another PC (Ubuntu) and generated the .o and .so files, but I get the error below when running the Python code you provided.
The error says there is no library named pyrealsense2_net, yet I successfully compiled this library file. Why is this? Can you provide a solution?
-
The link below has another case where this problem with pyrealsense2_net occurred.
https://github.com/IntelRealSense/librealsense/issues/9946
The RealSense user in that particular case solved it by replacing 'import pyrealsense2_net as rsnet' with the instruction below.
from pyrealsense2 import pyrealsense2_net as rsnet
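If it helps, the two import variants can be combined into one defensive pattern so the same script works however the wrapper happens to have been built. This is just a sketch; it assumes nothing beyond the two module paths discussed above, and leaves rsnet as None if neither is available:

```python
# Try the bare module name first, then the package-qualified form that
# worked in the linked GitHub issue. Both require a librealsense build
# from an SDK version no later than 2.53.x with networking enabled.
try:
    import pyrealsense2_net as rsnet
except ImportError:
    try:
        from pyrealsense2 import pyrealsense2_net as rsnet
    except ImportError:
        rsnet = None  # networking module not present in this SDK install

if rsnet is None:
    print('pyrealsense2_net is not available in this pyrealsense2 build')
```

With this in place, the rest of the net_viewer script can check `rsnet is None` and exit with a clear message instead of crashing on the import line.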
-
Use of the Python networking components 'pyrealsense2_net' and 'net_viewer.py' has caused problems in a number of past cases. If using 'from pyrealsense2 import pyrealsense2_net' did not work, then unfortunately pyrealsense2_net is likely not going to work for you, and it will not be fixed, as development of this networking tool has ceased.
There is an alternative RealSense networking tool for Python called EtherSense, but it has not been tested with a wireless connection.
https://github.com/krejov100/EtherSense
https://dev.intelrealsense.com/docs/depth-camera-over-ethernet-whitepaper
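If you end up writing your own transport instead, the accuracy concern from the original question can be addressed by sending the depth data losslessly. Below is a minimal sketch of one way to do that; the function names and wire format are my own invention and not part of any RealSense tool. It packs a 16-bit depth map and an 8-bit color image into a single zlib-compressed, length-prefixed message, so the receiver recovers the exact depth values before passing the color frame to YOLO:

```python
import struct
import zlib
import numpy as np

def pack_frames(depth: np.ndarray, color: np.ndarray) -> bytes:
    """Pack a uint16 depth map and uint8 BGR color image into one message.

    The message is a 4-byte little-endian length prefix followed by a
    zlib-compressed payload; zlib is lossless, so depth survives exactly.
    """
    header = struct.pack('<HHHH', depth.shape[0], depth.shape[1],
                         color.shape[0], color.shape[1])
    payload = zlib.compress(header + depth.astype('<u2').tobytes()
                            + color.astype('u1').tobytes())
    return struct.pack('<I', len(payload)) + payload

def unpack_frames(message: bytes):
    """Recover the (depth, color) pair from a packed message."""
    payload = zlib.decompress(message[4:])
    dh, dw, ch, cw = struct.unpack('<HHHH', payload[:8])
    depth_end = 8 + dh * dw * 2
    depth = np.frombuffer(payload[8:depth_end], dtype='<u2').reshape(dh, dw)
    color = np.frombuffer(payload[depth_end:], dtype='u1').reshape(ch, cw, 3)
    return depth, color
```

Sent over TCP, the 4-byte length prefix lets the receiver read exactly one message at a time. The key point is to avoid a lossy codec such as JPEG for the depth channel, as that would corrupt the 16-bit depth values.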