Yes, it is very much possible to combine depth data from a laser scan with RGB data from an image to form a colored (RGBD) point cloud using the PCL library.
First you need to establish an (x, y) correspondence between each point of the laser point cloud and each pixel of the image, which you can do using tf. This assumes the laser point cloud and the image have the same size in the x and y directions (in the case of the Kinect, for example, both are 640x480). When writing the combined point cloud, use the pcl::PointXYZRGB point type. Following is a small example:
pcl::PointCloud<pcl::PointXYZRGB> cloud;
cloud.points.resize(1);                        // make room for one point before indexing it
uint8_t r = 255, g = 0, b = 0;                 // example: red
// Pack the three 8-bit channels into a single 32-bit value
uint32_t rgb = ((uint32_t)r << 16 | (uint32_t)g << 8 | (uint32_t)b);
// PointXYZRGB stores the packed color in a float field, so copy the bit pattern
cloud.points[0].rgb = *reinterpret_cast<float*>(&rgb);
As for how to write a simple PCD file, you can find an example here.