
Revision history

initial version

As an example, I have a node that receives a point cloud, extracts an image and runs a vision algorithm on it, and then retrieves the XYZ coordinates of the detected features:

https://github.com/trainman419/targeting/blob/master/target_detector/src/targets.cpp#L219-L269

The key things to pick out are the conversion from a structured point cloud to an image:

sensor_msgs::ImagePtr image = boost::make_shared<sensor_msgs::Image>();
pcl::toROSMsg( *cloud, *image );  // only works if the cloud is organized (height > 1)
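To illustrate what this conversion amounts to, here is a simplified sketch in plain C++. The structs are hypothetical minimal stand-ins for `pcl::PointCloud<pcl::PointXYZRGB>` and `sensor_msgs::Image`, not the real types; the point is just that an organized cloud is stored row-major like an image, so each point's color maps straight to a pixel:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical stand-ins for the PCL and sensor_msgs types.
struct PointXYZRGB { float x, y, z; uint8_t r, g, b; };
struct Cloud { uint32_t width, height; std::vector<PointXYZRGB> points; };
struct Image { uint32_t width, height; std::vector<uint8_t> data; };  // rgb8, 3 bytes/pixel

// Conceptually what pcl::toROSMsg does for an organized RGB cloud:
// pixel (x, y) takes its color from point y * width + x.
Image cloudToImage(const Cloud& cloud) {
    Image img;
    img.width = cloud.width;
    img.height = cloud.height;
    img.data.resize(cloud.width * cloud.height * 3);
    for (uint32_t i = 0; i < cloud.width * cloud.height; ++i) {
        const PointXYZRGB& p = cloud.points[i];
        img.data[3 * i + 0] = p.r;
        img.data[3 * i + 1] = p.g;
        img.data[3 * i + 2] = p.b;
    }
    return img;
}
```

This only makes sense for an organized cloud (one point per pixel); for an unorganized cloud there is no image layout to recover.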

And the reverse conversion, from image coordinates back to the corresponding 3D point in the cloud:

pcl::PointXYZ p1 = depth.at(x, y);
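Since the cloud is organized, `at(x, y)` is just a row-major index into the point array. A sketch of that lookup, again with a hypothetical minimal cloud struct rather than the real `pcl::PointCloud` (note the argument order matches PCL's `at(column, row)`), plus the NaN check you'd typically want, since depth cameras leave invalid pixels as NaN:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical minimal stand-ins for pcl::PointXYZ and an organized cloud.
struct PointXYZ { float x, y, z; };

struct OrganizedCloud {
    std::size_t width, height;
    std::vector<PointXYZ> points;  // row-major, width * height entries

    // What pcl::PointCloud::at(column, row) does for an organized cloud:
    // index straight into the row-major point array.
    const PointXYZ& at(std::size_t col, std::size_t row) const {
        return points[row * width + col];
    }
};

// Invalid depth pixels come back as NaN, so check before using the point.
bool isValid(const PointXYZ& p) {
    return std::isfinite(p.x) && std::isfinite(p.y) && std::isfinite(p.z);
}
```

So the (x, y) found by the vision algorithm in the extracted image indexes directly to the 3D point, because both share the same row-major layout.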