RGB image captured by Intel RealSense camera is dark (using Python code)
Hi,
I am using the Python code below to take an RGB image with the D435i camera. The image captured by the Python code is dark, but the image is not dark when I use the camera's SDK. How can I take an image with the same quality as the one captured by the camera's SDK?
Thank you for your help in advance.
Abbas
import pyrealsense2 as rs
import numpy as np
import cv2
import time
import math
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
profile = pipeline.start(config)
depth_sensor = profile.get_device().first_depth_sensor()
depth_scale = depth_sensor.get_depth_scale()
# We will be removing the background of objects more than
# clipping_distance_in_meters meters away
clipping_distance_in_meters = 1.5
clipping_distance = clipping_distance_in_meters / depth_scale
align_to = rs.stream.color
align = rs.align(align_to)
frames = pipeline.wait_for_frames()
aligned_frames = align.process(frames)
aligned_depth_frame = aligned_frames.get_depth_frame()
color_frame = aligned_frames.get_color_frame()
depth_image = np.asanyarray(aligned_depth_frame.get_data())
color_image = np.asanyarray(color_frame.get_data())
# Remove background - Set pixels further than clipping_distance to grey
grey_color = 153
depth_image_3d = np.dstack((depth_image,depth_image,depth_image)) #depth image is 1 channel, color is 3 channels
bg_removed = np.where((depth_image_3d > clipping_distance) | (depth_image_3d <= 0), grey_color, color_image)
# Render images
depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)
images = np.hstack((bg_removed, depth_colormap))
cv2.namedWindow('Align Example', cv2.WINDOW_AUTOSIZE)
# Filename
path = 'C:/Users/aatefi2/Desktop/Intel real sense/Codes/'
imageName1 = str(time.strftime("%Y_%m_%d_%H_%M_%S")) + '_Color.jpg'
imageName2 = str(time.strftime("%Y_%m_%d_%H_%M_%S")) + '_Depth.png'  # PNG can hold the 16-bit depth values
imageName3 = str(time.strftime("%Y_%m_%d_%H_%M_%S")) + '_bg_removed.jpg'
imageName4 = str(time.strftime("%Y_%m_%d_%H_%M_%S")) + '_ColorDepth.jpg'
imageName5 = str(time.strftime("%Y_%m_%d_%H_%M_%S")) + '_DepthColormap.jpg'
# Saving the images to the folder given by 'path'
cv2.imwrite(path + imageName1, color_image)
cv2.imwrite(path + imageName2, depth_image)
cv2.imwrite(path + imageName3, bg_removed)
cv2.imwrite(path + imageName4, images)
cv2.imwrite(path + imageName5, depth_colormap)
# Show the combined image in the window created above
cv2.imshow('Align Example', images)
key = cv2.waitKey(1)
cv2.destroyAllWindows()
pipeline.stop()
-
Hi Aatefi2. In another Python case with a dark RGB image (the only other case that my research could find), Dorodnic the RealSense SDK Manager suggests that it may be related to auto-exposure convergence time.
https://github.com/IntelRealSense/librealsense/issues/5502#issuecomment-568194114
A further comment about convergence time can be found in the link below:
https://github.com/IntelRealSense/librealsense/issues/2104#issuecomment-411458598
If auto-exposure is being used then it can take several frames after the pipeline is opened for the auto-exposure to settle down. Setting manual exposure values can avoid this.
https://github.com/IntelRealSense/librealsense/issues/2269#issuecomment-414241301
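If it helps, here is a minimal sketch of that idea: it discards a number of frames after the pipeline starts so that auto-exposure can converge before a frame is read. The warm-up count of 30 frames (roughly one second at 30 fps) is only an assumption, not a documented value, and the stream settings simply mirror the ones in your script.
import pyrealsense2 as rs
import numpy as np
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
pipeline.start(config)
# Discard the first frames so auto-exposure can settle (30 is an assumed warm-up count)
for _ in range(30):
    pipeline.wait_for_frames()
# This frame should be captured with converged auto-exposure
frames = pipeline.wait_for_frames()
color_image = np.asanyarray(frames.get_color_frame().get_data())
pipeline.stop()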
-
Hi MartyG,
Thank you for your help and also for sharing the useful links. I added the code below to change the exposure time manually (https://github.com/IntelRealSense/librealsense/issues/4449). The images captured with this manual setting are brighter than the ones captured with the auto-exposure setting.
Thanks,
Abbas
# Code to set the exposure time manually
profile = pipeline.start(config)
# Get the RGB sensor once at the beginning (index 1 is the RGB camera on the D435i)
sensor = pipeline.get_active_profile().get_device().query_sensors()[1]
# Set the exposure at any time during operation
sensor.set_option(rs.option.exposure, 156.000)
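As a small follow-up sketch (assuming the same 'sensor' object from above, i.e. the RGB camera at index 1 on the D435i): the valid exposure range can be queried before choosing a manual value, and auto-exposure can be switched back on when manual control is no longer needed.
# Query the valid exposure range before choosing a manual value
exposure_range = sensor.get_option_range(rs.option.exposure)
print(exposure_range.min, exposure_range.max, exposure_range.default)
# Re-enable auto-exposure afterwards
sensor.set_option(rs.option.enable_auto_exposure, 1)
-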
You are very welcome, Aatefi2 - thanks very much for the update and for sharing your code with the RealSense community!