PCL cloud view is not correct if the frame is read using the syncer class & depth_sensor.on_video_frame APIs
This is regarding a difference between the PCL cloud generated using .../librealsense/wrappers/pcl/pcl-color/rs-pcl-color.cpp and the one generated using the syncer class, despite using exactly the same logic for the transformation depth + RGB --> RealSense points --> open-source PCL point cloud.
Our use case requires that the pixel bytes of the depth and RGB frames from a RealSense D435 camera arrive via an ingestion module, so we need to reconstruct rs2 depth and RGB frame objects from those bytes. These frames are then used to call map_to(), calculate(), apply RGB texture mapping, and generate a point cloud compliant with the open-source PCL library, which is converted to a PCD file and finally viewed with the PCL visualizer.
We came to know that to convert the pixel bytes of a frame into the corresponding rs2 frame and frameset objects, one has to use the syncer class and the on_video_frame API; there appears to be no other easy way (no single RS2 API call) to do this. See e.g. ...librealsense/examples/software-device/rs-software-device.cpp. Is that correct, or does another way exist?
However, once we use the above method and convert these frames to PCL point clouds using the logic from .../librealsense/wrappers/pcl/pcl-color/rs-pcl-color.cpp (we did not take the frame-reading logic from there, i.e. we do not read directly from the camera), the resulting images look different.
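For reference, this is a condensed sketch of the pattern we are following, based on rs-software-device.cpp (W, H, intrin, pixel_ptr, ts and fnum are placeholders here; the full code we actually use is pasted further below):

rs2::software_device dev;
auto sensor = dev.add_sensor("Depth");
// Describe the synthetic stream before injecting frames into it
rs2::stream_profile stream = sensor.add_video_stream(
    { RS2_STREAM_DEPTH, 0, 0, W, H, 60, 2, RS2_FORMAT_Z16, intrin });
rs2::syncer sync;
sensor.open(stream);
sensor.start(sync);                               // injected frames are delivered to the syncer
// Inject the raw pixel bytes; on_video_frame wraps them as an rs2 frame
sensor.on_video_frame({ pixel_ptr, [](void*) {},
                        W * 2, 2,                 // stride in bytes, bytes per pixel (Z16)
                        ts, RS2_TIMESTAMP_DOMAIN_HARDWARE_CLOCK, fnum,
                        stream });
rs2::frameset fs = sync.wait_for_frames();
rs2::frame depth = fs.first_or_default(RS2_STREAM_DEPTH);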
The image visualized using rs-pcl-color.cpp looks correct, as shown below.
But using the syncer class and the same PCL conversion logic from rs-pcl-color.cpp, the image looks as below.
The RGB component is not mapped correctly.
Thanks in advance.
We have altered the existing rs-pcl-color example to read from the syncer class instead of directly from the camera. The exact code is pasted below; I did not see how to attach a code file, so this makes for a rather long post.
/***********************************************************
 * Author:  Daniel Tran
 *          Liam Gogley
 *
 * Purpose: The following .cpp file will utilize the Intel
 *          RealSense camera to stream and capture frame
 *          data of the environment. Color is then applied
 *          and a point cloud is generated and saved to
 *          a point cloud data format (.pcd).
 *
 * Version 0.09 - Last Edited 11/07/18
 *
 * Rev:     Implementation of RGB Texture function to map
 *          color to point cloud data.
 *
 ***********************************************************/
#include <iostream>
#include <algorithm>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <boost/thread/thread.hpp>
#include <string>
#include <unistd.h> // for sleep() used below (POSIX)
// Intel Realsense Headers
#include <librealsense2/rs.hpp> // Include RealSense Cross Platform API
#include <librealsense2/hpp/rs_internal.hpp>
#include <opencv2/opencv.hpp> // Include OpenCV API
// PCL Headers
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/filters/passthrough.h>
#include <pcl/common/common_headers.h>
#include <pcl/features/normal_3d.h>
#include <pcl/visualization/pcl_visualizer.h>
#include <pcl/console/parse.h>
#include <boost/thread/thread.hpp>
#include <pcl/io/io.h>
#include <pcl/visualization/cloud_viewer.h>
using namespace std;
typedef pcl::PointXYZRGB RGB_Cloud;
typedef pcl::PointCloud<RGB_Cloud> point_cloud;
typedef point_cloud::Ptr cloud_pointer;
typedef point_cloud::Ptr prevCloud;
// Prototypes
void Load_PCDFile(void);
bool userInput(void);
void cloudViewer(void);
// Global Variables
string cloudFile; // .pcd file name
string prevCloudFile; // .pcd file name (Old cloud)
int i = 1; // Index for incremental file name
//======================================================
// RGB Texture
// - Function is utilized to extract the RGB data from
//   a single point and return the R, G, and B values.
// Normals are stored as RGB components and
// correspond to the specific depth (XYZ) coordinate.
// By taking these normals and converting them to
// texture coordinates, the RGB components can be
// "mapped" to each individual point (XYZ).
//======================================================
std::tuple<int, int, int> RGB_Texture(rs2::video_frame texture, rs2::texture_coordinate Texture_XY)
{
// Get Width and Height coordinates of texture
int width = texture.get_width(); // Frame width in pixels
int height = texture.get_height(); // Frame height in pixels
// Normals to Texture Coordinates conversion
int x_value = min(max(int(Texture_XY.u * width + .5f), 0), width - 1);
int y_value = min(max(int(Texture_XY.v * height + .5f), 0), height - 1);
int bytes = x_value * texture.get_bytes_per_pixel(); // Get # of bytes per pixel
int strides = y_value * texture.get_stride_in_bytes(); // Get line width in bytes
int Text_Index = (bytes + strides);
const auto New_Texture = reinterpret_cast<const uint8_t*>(texture.get_data());
// RGB components to save in tuple
int NT1 = New_Texture[Text_Index];
int NT2 = New_Texture[Text_Index + 1];
int NT3 = New_Texture[Text_Index + 2];
return std::tuple<int, int, int>(NT1, NT2, NT3);
}
//===================================================
// PCL_Conversion
// - Function is utilized to fill a point cloud
// object with depth and RGB data from a single
// frame captured using the Realsense.
//===================================================
cloud_pointer PCL_Conversion(const rs2::points& points, const rs2::video_frame& color){
// Object Declaration (Point Cloud)
cloud_pointer cloud(new point_cloud);
// Declare Tuple for RGB value Storage (<t0>, <t1>, <t2>)
std::tuple<uint8_t, uint8_t, uint8_t> RGB_Color;
//================================
// PCL Cloud Object Configuration
//================================
// Convert data captured from Realsense camera to Point Cloud
auto sp = points.get_profile().as<rs2::video_stream_profile>();
cloud->width = static_cast<uint32_t>( sp.width() );
cloud->height = static_cast<uint32_t>( sp.height() );
cloud->is_dense = false;
cloud->points.resize( points.size() );
auto Texture_Coord = points.get_texture_coordinates();
auto Vertex = points.get_vertices();
// Iterating through all points and setting XYZ coordinates
// and RGB values
for (int i = 0; i < points.size(); i++)
{
//===================================
// Mapping Depth Coordinates
// - Depth data stored as XYZ values
//===================================
cloud->points[i].x = Vertex[i].x;
cloud->points[i].y = Vertex[i].y;
cloud->points[i].z = Vertex[i].z;
// Obtain color texture for specific point
RGB_Color = RGB_Texture(color, Texture_Coord[i]);
// Mapping Color (BGR due to Camera Model)
cloud->points[i].r = get<2>(RGB_Color); // Reference tuple<2>
cloud->points[i].g = get<1>(RGB_Color); // Reference tuple<1>
cloud->points[i].b = get<0>(RGB_Color); // Reference tuple<0>
}
return cloud; // PCL RGB Point Cloud generated
}
int main() try
{
//======================
// Variable Declaration
//======================
bool captureLoop = true; // Loop control for generating point clouds
//====================
// Object Declaration
//====================
pcl::PointCloud<pcl::PointXYZRGB>::Ptr newCloud (new pcl::PointCloud<pcl::PointXYZRGB>);
boost::shared_ptr<pcl::visualization::PCLVisualizer> openCloud;
// Declare pointcloud object, for calculating pointclouds and texture mappings
rs2::pointcloud pc;
rs2::points points;
// Declare RealSense pipeline, encapsulating the actual device and sensors
rs2::pipeline pipe;
// Create a configuration for configuring the pipeline with a non default profile
rs2::config cfg;
//======================
// Stream configuration with parameters resolved internally. See enable_stream() overloads for extended usage
//======================
cfg.enable_stream(RS2_STREAM_COLOR);
cfg.enable_stream(RS2_STREAM_INFRARED);
cfg.enable_stream(RS2_STREAM_DEPTH);
rs2::pipeline_profile selection = pipe.start(cfg);
rs2::device selected_device = selection.get_device();
auto depth_sensor = selected_device.first<rs2::depth_sensor>();
if (depth_sensor.supports(RS2_OPTION_EMITTER_ENABLED))
{
depth_sensor.set_option(RS2_OPTION_EMITTER_ENABLED, 1.f); // Enable emitter
pipe.wait_for_frames();
depth_sensor.set_option(RS2_OPTION_EMITTER_ENABLED, 0.f); // Disable emitter
}
if (depth_sensor.supports(RS2_OPTION_LASER_POWER))
{
// Query min and max values:
auto range = depth_sensor.get_option_range(RS2_OPTION_LASER_POWER);
depth_sensor.set_option(RS2_OPTION_LASER_POWER, range.max); // Set max power
sleep(1);
depth_sensor.set_option(RS2_OPTION_LASER_POWER, 0.f); // Disable laser
}
rs2::software_device dev;
std::cout << "1";
auto m_depth_sensor = dev.add_sensor("Depth");
auto m_color_sensor = dev.add_sensor("Color");
int m_sw_width = 640;
int m_sw_height = 480;
const int m_bpp = 2;
int m_frame_number;
struct software_device_frame
{
int x, y, bpp;
std::vector<uint8_t> frame;
};
software_device_frame m_sw_depth_frame;
software_device_frame m_sw_color_frame;
// For storing the profile of stream
rs2::stream_profile m_depth_stream;
rs2::stream_profile m_color_stream;
// For Synchronizing frames using the syncer class
rs2::syncer m_sync;
m_frame_number = 0;
m_sw_depth_frame.x = m_sw_width;
m_sw_depth_frame.y = m_sw_height;
m_sw_depth_frame.bpp = m_bpp;
m_sw_color_frame.x = m_sw_width;
m_sw_color_frame.y = m_sw_height;
m_sw_color_frame.bpp = m_bpp;
std::cout << "2";
// Before passing images into the device, provide details about the
// stream which is going to be simulated
rs2_intrinsics depth_intrinsics = { m_sw_depth_frame.x,
m_sw_depth_frame.y,
(float)m_sw_depth_frame.x / 2,
(float)m_sw_depth_frame.y / 2,
(float)m_sw_depth_frame.x ,
(float)m_sw_depth_frame.y ,
RS2_DISTORTION_BROWN_CONRADY ,{ 0,0,0,0,0 }};
m_depth_stream = m_depth_sensor.add_video_stream({ RS2_STREAM_DEPTH,
0,
0,
m_sw_width,
m_sw_height,
60,
m_bpp,
RS2_FORMAT_Z16,
depth_intrinsics });
rs2_intrinsics color_intrinsics = { m_sw_color_frame.x,
m_sw_color_frame.y,
(float)m_sw_color_frame.x / 2,
(float)m_sw_color_frame.y / 2,
(float)m_sw_color_frame.x / 2,
(float)m_sw_color_frame.y / 2,
RS2_DISTORTION_BROWN_CONRADY ,{ 0,0,0,0,0 }};
m_color_stream = m_color_sensor.add_video_stream({ RS2_STREAM_COLOR,
0,
1,
m_sw_width,
m_sw_height,
60,
m_bpp,
RS2_FORMAT_RGB8,
color_intrinsics });
// Adding sensor options
m_depth_sensor.add_read_only_option(RS2_OPTION_DEPTH_UNITS, 0.001f);
if (m_depth_sensor.supports(RS2_OPTION_EMITTER_ENABLED))
{
m_depth_sensor.set_option(RS2_OPTION_EMITTER_ENABLED, 1.f); // Enable emitter
pipe.wait_for_frames();
m_depth_sensor.set_option(RS2_OPTION_EMITTER_ENABLED, 0.f); // Disable emitter
}
if (m_depth_sensor.supports(RS2_OPTION_LASER_POWER))
{
// Query min and max values:
auto range = m_depth_sensor.get_option_range(RS2_OPTION_LASER_POWER);
m_depth_sensor.set_option(RS2_OPTION_LASER_POWER, range.max); // Set max power
sleep(1);
m_depth_sensor.set_option(RS2_OPTION_LASER_POWER, 0.f); // Disable laser
}
std::cout << "3";
// Specify the synchronization model for using the syncer class with synthetic streams
dev.create_matcher(RS2_MATCHER_DLR_C);
std::cout << "4";
// Open the sensor for respective stream
m_depth_sensor.open(m_depth_stream);
m_color_sensor.open(m_color_stream);
std::cout << "5";
// Start the sensor for passing frame to the syncer callback
m_depth_sensor.start(m_sync);
m_color_sensor.start(m_sync);
std::cout << "6";
// Assign extrinsic transformation parameters to a specific profile (sensor)
m_depth_stream.register_extrinsics_to(m_color_stream, { { 1,0,0,0,1,0,0,0,1 },{ 0,0,0 } });
using namespace cv;
const auto window_name = "Display Image";
namedWindow(window_name, WINDOW_AUTOSIZE);
// Begin Stream with default configs
// Loop and take frame captures upon user input
while(captureLoop == true) {
// Set loop flag based on user input
//captureLoop = userInput();
//if (captureLoop == false) { break; }
std::cout << "before frame drop";
// Wait for frames from the camera to settle
for (int i = 0; i < 30; i++) {
auto frames = pipe.wait_for_frames(); //Drop several frames for auto-exposure
}
std::cout << "7";
// Capture a single frame and obtain depth + RGB values from it
auto frames = pipe.wait_for_frames();
std::cout << "8";
auto depth = frames.get_depth_frame();
std::cout << "9";
auto RGB = frames.get_color_frame();
const int w = depth.as<rs2::video_frame>().get_width();
const int h = depth.as<rs2::video_frame>().get_height();
std::cout << "10";
// Create a rs2::depth_frame from the base64 decoded depth frame by ingesting it into the depth sensor
m_depth_sensor.on_video_frame({ (void*)depth.get_data(), // Frame pixels from capture API
[](void*) {}, // Custom deleter (if required)
m_sw_depth_frame.x * m_sw_depth_frame.bpp,
m_sw_depth_frame.bpp, // Stride and Bytes-per-pixel
(rs2_time_t)m_frame_number * 16,
RS2_TIMESTAMP_DOMAIN_HARDWARE_CLOCK, // Timestamp
m_frame_number, // Frame# for potential m_sync services
m_depth_stream});
std::cout << "11";
// Create a rs2::video_frame from the base64 decoded color frame by ingesting it into the color sensor
m_color_sensor.on_video_frame({ (void*)RGB.get_data(), // Frame pixels from capture API
[](void*) {}, // Custom deleter (if required)
m_sw_color_frame.x * m_sw_color_frame.bpp,
m_sw_color_frame.bpp, // Stride and Bytes-per-pixel
(rs2_time_t)m_frame_number * 16,
RS2_TIMESTAMP_DOMAIN_HARDWARE_CLOCK, // Timestamp
m_frame_number, // Frame# for potential m_sync services
m_color_stream});
++m_frame_number;
std::cout << "12";
// Wait for frameset from syncer class
rs2::frameset fset = m_sync.wait_for_frames();
// Return first found frame for the stream specified
std::cout << "13";
rs2::frame rs2_depth = fset.first_or_default(RS2_STREAM_DEPTH);
rs2::frame rs2_color = fset.first_or_default(RS2_STREAM_COLOR);
std::cout << "14";
if(!rs2_depth) {
cout << "nul depth" << endl;
continue;
}
points = pc.calculate(rs2_depth);
pc.map_to(rs2_color);
std::cout << "16";
cloud_pointer cloud;
// Convert generated Point Cloud to PCL Formatting
if (auto as_color = rs2_color.as<rs2::video_frame>()) {
std::cout << "inside as color";
cloud = PCL_Conversion(points, as_color);
}
std::cout << "17";
//========================================
// Filter PointCloud (PassThrough Method)
//========================================
pcl::PassThrough<pcl::PointXYZRGB> Cloud_Filter; // Create the filtering object
Cloud_Filter.setInputCloud (cloud); // Input generated cloud to filter
Cloud_Filter.setFilterFieldName ("z"); // Set field name to Z-coordinate
Cloud_Filter.setFilterLimits (0.0, 1.0); // Set accepted interval values
Cloud_Filter.filter (*newCloud); // Filtered Cloud Outputted
cloudFile = "PCL_Frame" + to_string(i) + ".pcd";
//==============================
// Write PC to .pcd File Format
//==============================
// Take Cloud Data and write to .PCD File Format
std::cout << "17";
cout << "Generating PCD Point Cloud File... " << endl;
pcl::io::savePCDFileASCII(cloudFile, *cloud); // Input cloud to be saved to .pcd
cout << cloudFile << " successfully generated. " << endl;
//Load generated PCD file for viewing
//Load_PCDFile();
i++; // Increment File Name
std::flush(std::cout);
}//End-while
cout << "Exiting Program... " << endl;
return EXIT_SUCCESS;
}
catch (const rs2::error & e)
{
std::cerr << "RealSense error calling " << e.get_failed_function() << "(" << e.get_failed_args() << "):\n " << e.what() << std::endl;
return EXIT_FAILURE;
}
catch (const std::exception & e)
{
std::cerr << e.what() << std::endl;
return EXIT_FAILURE;
}
void Load_PCDFile(void)
{
string openFileName;
// Generate object to store cloud in .pcd file
pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloudView (new pcl::PointCloud<pcl::PointXYZRGB>);
openFileName = "Captured_Frame" + to_string(i) + ".pcd";
pcl::io::loadPCDFile (openFileName, *cloudView); // Load .pcd File
//==========================
// Pointcloud Visualization
//==========================
// Create viewer object titled "Captured Frame"
boost::shared_ptr<pcl::visualization::PCLVisualizer> viewer (new pcl::visualization::PCLVisualizer ("Captured Frame"));
// Set background of viewer to black
viewer->setBackgroundColor (0, 0, 0);
// Add generated point cloud and identify with string "Cloud"
viewer->addPointCloud<pcl::PointXYZRGB> (cloudView, "Cloud");
// Default size for rendered points
viewer->setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 1, "Cloud");
// Viewer Properties
viewer->initCameraParameters(); // Camera Parameters for ease of viewing
cout << endl;
cout << "Press [Q] in viewer to continue. " << endl;
viewer->spin(); // Allow user to rotate point cloud and view it
// Note: No method to close PC visualizer, pressing Q to continue software flow only solution.
}
//========================================
// userInput
// - Prompts user for a char to
// test for decision making.
// [y|Y] - Capture frame and save as .pcd
// [n|N] - Exit program
//========================================
bool userInput(void){
bool setLoopFlag;
bool inputCheck = false;
char takeFrame; // Used to trigger frame capture from key press ('y'/'n')
do {
// Prompt User to execute frame capture algorithm
cout << endl;
cout << "Generate a Point Cloud? [y/n] ";
cin >> takeFrame;
cout << endl;
// Condition [Y] - Capture frame, store in PCL object and display
if (takeFrame == 'y' || takeFrame == 'Y') {
setLoopFlag = true;
inputCheck = true;
takeFrame = 0;
}
// Condition [N] - Exit Loop and close program
else if (takeFrame == 'n' || takeFrame == 'N') {
setLoopFlag = false;
inputCheck = true;
takeFrame = 0;
}
// Invalid Input, prompt user again.
else {
inputCheck = false;
cout << "Invalid Input." << endl;
takeFrame = 0;
}
} while(inputCheck == false);
return setLoopFlag;
}
-
Hi Lalatendu Das, there is a lot of information to digest here. As you believe the main symptom is incorrect RGB mapping, let's tackle that aspect first.
I located another case where the RGB was incorrect in relation to the depth frame when using the syncer. The RealSense user in that case was advised to disable the Auto-Exposure Priority setting and to ensure that both depth and color were set to the same FPS.
https://github.com/IntelRealSense/librealsense/issues/5675
When Auto-Exposure is enabled and Auto-Exposure Priority is disabled, the camera enforces a constant FPS.
If you are using the C++ language, the code in the link below shows how to disable Auto-Exposure Priority, taking care to ensure that the command is directed at the RGB sensor (index '1') rather than the depth sensor (index '0').
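As a rough illustration (this is not the exact code from that case, and the sensor index can differ between camera models), something along these lines disables Auto-Exposure Priority on the RGB sensor in C++:

rs2::pipeline pipe;
rs2::pipeline_profile profile = pipe.start();
// On the D435 the RGB sensor is typically at index 1; the stereo (depth) module is at index 0
std::vector<rs2::sensor> sensors = profile.get_device().query_sensors();
rs2::sensor color_sensor = sensors[1];
if (color_sensor.supports(RS2_OPTION_AUTO_EXPOSURE_PRIORITY))
    color_sensor.set_option(RS2_OPTION_AUTO_EXPOSURE_PRIORITY, 0.f); // 0 = disabled, keeps FPS constant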
-
Hi MartyG
Thanks for your response. Disabling Auto-Exposure Priority did not help much. The issue was finally fixed by setting the correct bytes-per-pixel value for the RGB frame (i.e. 3, instead of reusing the value used for depth) and by applying the correct intrinsic values to the software device, reading them from the camera object instead of hardcoding them. So we can close this thread.
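For anyone who hits the same problem, this is roughly what the fix looked like in the code pasted above (a sketch using the same variable names; the stream index/uid values are kept from the original listing):

// Read the real intrinsics from the live pipeline instead of hardcoding them
auto depth_profile = selection.get_stream(RS2_STREAM_DEPTH).as<rs2::video_stream_profile>();
auto color_profile = selection.get_stream(RS2_STREAM_COLOR).as<rs2::video_stream_profile>();
rs2_intrinsics depth_intrinsics = depth_profile.get_intrinsics();
rs2_intrinsics color_intrinsics = color_profile.get_intrinsics();

// Z16 is 2 bytes per pixel, RGB8 is 3 -- the color stream must not reuse the depth bpp
const int depth_bpp = 2;
const int color_bpp = 3;

m_depth_stream = m_depth_sensor.add_video_stream({ RS2_STREAM_DEPTH, 0, 0,
    depth_intrinsics.width, depth_intrinsics.height, 60,
    depth_bpp, RS2_FORMAT_Z16, depth_intrinsics });
m_color_stream = m_color_sensor.add_video_stream({ RS2_STREAM_COLOR, 0, 1,
    color_intrinsics.width, color_intrinsics.height, 60,
    color_bpp, RS2_FORMAT_RGB8, color_intrinsics });

// The stride passed to on_video_frame() must use the matching bpp as well,
// e.g. color_intrinsics.width * color_bpp for the color frame.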