RuntimeError: Error occured during execution of the processing block! See the log for more info
It works normally when I run the code from "https://github.com/IntelRealSense/librealsense/blob/development/wrappers/python/examples/opencv_pointcloud_viewer.py". However, when I added a line of code, I got the error shown in the title. The code is as follows.
depth_image = np.asanyarray(depth_frame.get_data())
color_image = np.asanyarray(color_frame.get_data())
depth_colormap = np.asanyarray(colorizer.colorize(depth_frame).get_data()) # colorized mapping of the depth image
The code I added is "depth_list.append(depth_image)".
Hope you can answer me!
Using append with a Python list can cause the program to fail after around 15 frames unless the frames are saved into memory using the RealSense SDK's Keep() instruction. The discussion in the link below provides more information about this.
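As an illustration of how keep() can be used, here is a minimal sketch (it assumes the pyrealsense2 package and a connected camera; the max_frames value is illustrative):

```python
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()

kept_frames = []   # framesets retained in SDK memory via keep()
max_frames = 15    # illustrative capture length

try:
    for _ in range(max_frames):
        frames = pipeline.wait_for_frames()
        # keep() tells the SDK not to recycle this frameset's memory,
        # so it remains valid after the loop has moved on
        frames.keep()
        kept_frames.append(frames)
finally:
    pipeline.stop()

# Convert to numpy arrays only after capture, while the kept frames are valid
depth_arrays = [np.asanyarray(f.get_depth_frame().get_data()) for f in kept_frames]
```

Note that keep() retains every frameset in memory, so this pattern suits short captures rather than long recordings.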
Thank you very much. I have solved the last problem, but I got an error when I saved the "depth_frame" as an npz file. The code and error are as follows. Also, I can't find a solution in https://github.com/IntelRealSense/librealsense/issues/6164. Hope you can help me.
color_frame = aligned_frames.get_color_frame()
TypeError: cannot pickle 'pyrealsense2.pyrealsense2.video_frame' object
There is a script shared by a RealSense user for saving the color data as a PNG and the depth data as an array of scaled matrices saved as an npy file.
There was also an attempt to save depth data to an npz file.
My research was unable to find a reference that confirmed the possibility of saving color data to an npy or npz file though. Expanding the research beyond RealSense to a general search for 'save image to npz', the link below suggests Python code for converting image files to npz once they have been created. Conceivably, you could adapt the code in the 'save as PNG and npy' script to save the color data as a PNG and then convert the file to npz.
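One way to avoid the pickling error is to convert the frames to NumPy arrays with np.asanyarray() before saving, since np.savez() can serialize plain arrays but not pyrealsense2 frame objects. A minimal sketch (the filename and zero-filled arrays are illustrative stand-ins; with a live camera, depth_image would come from np.asanyarray(depth_frame.get_data())):

```python
import numpy as np

# Illustrative stand-ins for np.asanyarray(frame.get_data())
depth_image = np.zeros((480, 640), dtype=np.uint16)
color_image = np.zeros((480, 640, 3), dtype=np.uint8)

# savez accepts plain arrays, so convert frames before saving
np.savez("frames_0001.npz", depth=depth_image, color=color_image)

# Reload to verify the round trip
data = np.load("frames_0001.npz")
print(data["depth"].shape, data["color"].dtype)
```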
Sorry, I still can't store the camera's depth_frame, so I have decided to store only the image information. However, how should I process the frame data to achieve the filtering effect of the RealSense Viewer? Also, how should I use the infrared data and RGB data to synthesize point cloud data?
Thank you for your patience!
I'd like to ask you another question. I want to record for 30 minutes and save the data every minute. However, whether I store the RGB data in npz format or bag format, it takes a long time. Is there a way to solve that? Also, I don't know ROS, and I haven't found an answer on the website below.
In regard to your first question: the RealSense Viewer applies a range of post-processing filters and depth colorization settings by default. When creating your own script, these are not included by default and have to be deliberately programmed into the application.
A list of resources for setting depth colorization in Python can be found at the link below.
In the link below, Intel provide a Python tutorial for configuring post-processing filters.
You can read more about post-processing filters here:
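As a sketch of the kind of post-processing chain the Viewer applies by default (this assumes pyrealsense2 and a connected camera; the filter order follows the disparity-domain recommendation in Intel's post-processing documentation):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()

# Filters roughly matching the Viewer's default post-processing stack
decimation = rs.decimation_filter()            # reduce resolution and noise
depth_to_disparity = rs.disparity_transform(True)
spatial = rs.spatial_filter()                  # edge-preserving smoothing
temporal = rs.temporal_filter()                # smooth across frames
disparity_to_depth = rs.disparity_transform(False)
hole_filling = rs.hole_filling_filter()        # close empty depth gaps

try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    # Apply the filters in sequence; each call returns a new frame
    depth = decimation.process(depth)
    depth = depth_to_disparity.process(depth)
    depth = spatial.process(depth)
    depth = temporal.process(depth)
    depth = disparity_to_depth.process(depth)
    depth = hole_filling.process(depth)
finally:
    pipeline.stop()
```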
In regard to generating a point cloud, typically you would only need the depth and RGB streams to produce a textured point cloud (one where the RGB color data is mapped to the depth points), like with the opencv_pointcloud_viewer.py example that you referenced in your opening message.
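A textured point cloud can be produced with the SDK's rs.pointcloud class, as in this minimal sketch (it assumes a connected camera; the output filename is illustrative):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()

pc = rs.pointcloud()

try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    color = frames.get_color_frame()

    # Map the RGB texture onto the depth points, then compute the cloud
    pc.map_to(color)
    points = pc.calculate(depth)

    # Save a textured point cloud that tools such as MeshLab can open
    points.export_to_ply("cloud.ply", color)
finally:
    pipeline.stop()
```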
If you wish to explore the possibility of aligning depth, color and infrared together on a single image then the Python discussion in the link below may be helpful (though it is a 2D image and not a point cloud).
In regard to taking a timed capture every 'x' minutes, the Python scripting provided in the link below for this purpose will hopefully meet your needs.
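The segmenting logic itself does not need anything RealSense-specific. Below is a minimal sketch of splitting a 30-minute capture into one-minute saves; record_in_segments, capture_frame and save_segment are hypothetical names, and capture_frame would wrap whatever per-frame grab code you use. The durations are parameters so they can be shortened for testing:

```python
import time

def record_in_segments(capture_frame, save_segment,
                       total_seconds=1800, segment_seconds=60):
    """Call capture_frame() repeatedly, handing each completed
    segment's frames to save_segment(index, frames)."""
    start = time.monotonic()
    segment_index = 0
    frames = []
    while time.monotonic() - start < total_seconds:
        frames.append(capture_frame())
        elapsed = time.monotonic() - start
        # Close out a segment once its time slot has passed
        if elapsed >= (segment_index + 1) * segment_seconds:
            save_segment(segment_index, frames)
            segment_index += 1
            frames = []
    if frames:  # flush the final partial segment
        save_segment(segment_index, frames)

# Illustrative usage with dummy capture/save functions and short durations
saved = {}
record_in_segments(lambda: 0,
                   lambda i, fs: saved.setdefault(i, len(fs)),
                   total_seconds=0.3, segment_seconds=0.1)
print(sorted(saved))
```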
Thank you for helping me find so many examples, but they are not very friendly for a novice. Since you know this programming well, why not write a basic development document about pyrealsense2? I am grateful for the code provided by the experts in the community, but I can't understand some of it, such as the AppState class in opencv_pointcloud_viewer.py:
profile = pipeline.get_active_profile()  # profile describing the streams of the running pipeline
depth_profile = rs.video_stream_profile(profile.get_stream(rs.stream.depth))  # cast the depth stream to a video stream profile
depth_intrinsics = depth_profile.get_intrinsics()  # focal lengths, principal point and distortion model
If there were explanations or examples of these methods, I think I could get started quickly.
I am not involved in RealSense documentation activities. There is some starter information at the link below though.
There is also a starter program here:
The official pyrealsense2 programming documentation is here:
Sometimes the official C++ programming documentation provides a better description of a particular instruction though.
In regard to the specific instructions that you highlighted:
You can find more information about intrinsics - and also extrinsics - in the RealSense SDK's Projection documentation.
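As an illustration of what the intrinsics contain and how they are used in projection, here is a minimal sketch (it assumes a connected camera; the chosen pixel is simply the image centre):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()

depth_profile = rs.video_stream_profile(profile.get_stream(rs.stream.depth))
intrin = depth_profile.get_intrinsics()

# Focal lengths (fx, fy) and principal point (ppx, ppy), in pixels
print(intrin.fx, intrin.fy, intrin.ppx, intrin.ppy)

frames = pipeline.wait_for_frames()
depth = frames.get_depth_frame()

# Deproject a pixel (u, v) plus its depth in metres to a 3D point
u, v = intrin.width // 2, intrin.height // 2
dist = depth.get_distance(u, v)
point_3d = rs.rs2_deproject_pixel_to_point(intrin, [u, v], dist)
print(point_3d)  # [x, y, z] in metres, in the camera coordinate system

pipeline.stop()
```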
It looks as though the results are likely due to insufficient lighting in the location. The camera will have difficulty reading depth information from dark grey or black areas on the image. This is because there is a general physics principle (not specific to RealSense) that dark grey and black areas in a scene absorb light. So the dimly lit floor areas are being mis-identified as being in the far distance, and the black areas represent areas of the image where there is little to no depth detail because the darkness means that the camera cannot analyze those areas for depth.
If you are not able to add a stronger overhead light source because it would disturb the pigs, an alternative would be to add an external infrared illuminator lamp. Information about these lamps can be found in the link below.
In the absence of a light source, the 400 Series cameras can use a semi-random dot pattern cast from their internal projector component onto surfaces in the scene to analyze the dots for depth detail instead.
The range of the dot pattern projection can be boosted by using a higher-powered external pattern projector on the ceiling, as described in Intel's white-paper document on projectors.
If you are not able to add an external projector then you could instead try maximizing the camera's Laser Power setting to make the infrared dot pattern more visible when projected to the ground from an overhead position. The maximum Laser Power value is '360', whilst the default setting is '150'. The link below provides a Python example of setting the Laser Power value with the rs.option.laser_power SDK instruction.
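As a minimal sketch of setting Laser Power from Python (it assumes a connected 400 Series camera; querying the option range avoids hard-coding the maximum value):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()

depth_sensor = profile.get_device().first_depth_sensor()

# Query the supported range rather than hard-coding '360'
power_range = depth_sensor.get_option_range(rs.option.laser_power)
depth_sensor.set_option(rs.option.laser_power, power_range.max)
print(depth_sensor.get_option(rs.option.laser_power))

pipeline.stop()
```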
On the camera with the worse depth image, please try enabling the Hole-Filling Filter in the Post-Processing Filter category if it is not enabled already in order to try to close the empty gaps.
You could also change the Visual Preset option from the default 'Custom' to the 'Medium Density' setting to provide a balance between accuracy and the amount of detail on the depth image.
I just want to say thank you, thank you and thank you. First, I solved this problem by unplugging and re-plugging the USB. Because the camera is far away from me, I did not know until now whether this method would work. Second, the Hole-Filling filter is very useful. Do you know how I can achieve this effect using Python code?
Thank you very much!
You are very welcome!
The discussion in the link below about implementing hole-filling in Python will hopefully be of help to you.
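As a minimal sketch of applying the Hole-Filling filter in Python (it assumes pyrealsense2 and a connected camera; the mode value of 1, 'farthest from around', is just one of the three available fill modes):

```python
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()

# Hole-filling modes: 0 = fill from left,
# 1 = farthest from around, 2 = nearest from around
hole_filling = rs.hole_filling_filter()
hole_filling.set_option(rs.option.holes_fill, 1)

try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    filled = hole_filling.process(depth)
    filled_image = np.asanyarray(filled.get_data())
finally:
    pipeline.stop()
```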