The 'world coordinates' of a pointcloud are represented in the transform data. If your pointcloud is in the sensor frame and you have a tf transform available from that frame to a map or world frame, you can use the tf library to transform the pointcloud into the desired frame, which transforms the positions of all the points in the cloud.
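A minimal sketch of that step (ROS 1), assuming the cloud arrives on a topic called "cloud" and the target frame is "map" (both assumptions, adjust to your setup):

    #include <ros/ros.h>
    #include <sensor_msgs/PointCloud2.h>
    #include <tf/transform_listener.h>
    #include <pcl_ros/transforms.h>

    tf::TransformListener* g_listener = nullptr;
    ros::Publisher g_pub;

    void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& cloud_in)
    {
      sensor_msgs::PointCloud2 cloud_map;
      // Wait briefly for the transform at the cloud's timestamp, then
      // re-express the whole cloud in the "map" frame.
      if (g_listener->waitForTransform("map", cloud_in->header.frame_id,
                                       cloud_in->header.stamp, ros::Duration(0.1)) &&
          pcl_ros::transformPointCloud("map", *cloud_in, cloud_map, *g_listener))
      {
        g_pub.publish(cloud_map);  // points are now in world/map coordinates
      }
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "cloud_to_map");
      ros::NodeHandle nh;
      tf::TransformListener listener;
      g_listener = &listener;
      g_pub = nh.advertise<sensor_msgs::PointCloud2>("cloud_in_map", 1);
      ros::Subscriber sub = nh.subscribe("cloud", 1, cloudCallback);
      ros::spin();
      return 0;
    }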
The image you're talking about sounds like a projection of the pointcloud down to a 2D plane, which you can do by iterating over every point in the cloud (which you're already doing) and applying some linear algebra (look into the Eigen library). To actually build the image you'll probably find OpenCV and this conversion useful, though you could also build the sensor_msgs/Image directly.
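A rough sketch of such a projection, assuming a fixed resolution, a top-down view onto the XY plane, and cv_bridge for the conversion to sensor_msgs/Image (the resolution and image size here are made-up values):

    #include <opencv2/core.hpp>
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <cv_bridge/cv_bridge.h>
    #include <std_msgs/Header.h>
    #include <sensor_msgs/Image.h>

    sensor_msgs::ImagePtr cloudToTopDownImage(const pcl::PointCloud<pcl::PointXYZ>& cloud)
    {
      const float resolution = 0.05f;        // metres per pixel (assumed)
      const int width = 400, height = 400;   // 20 m x 20 m window (assumed)
      cv::Mat img(height, width, CV_8UC1, cv::Scalar(0));

      for (const pcl::PointXYZ& p : cloud.points)
      {
        // Drop z: a trivial orthographic projection onto the XY plane,
        // then map metres to pixel indices centred on the image.
        int u = static_cast<int>(p.x / resolution) + width / 2;
        int v = static_cast<int>(p.y / resolution) + height / 2;
        if (u >= 0 && u < width && v >= 0 && v < height)
          img.at<uchar>(v, u) = 255;         // mark that cell as occupied
      }

      // cv_bridge turns the cv::Mat into a sensor_msgs/Image you can publish.
      return cv_bridge::CvImage(std_msgs::Header(), "mono8", img).toImageMsg();
    }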
EDIT: Found this projection API in PCL as well, which should make this easier.
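Assuming the API in question is pcl::ProjectInliers (PCL's planar projection filter), projecting every point onto the z = 0 plane looks roughly like this:

    #include <pcl/ModelCoefficients.h>
    #include <pcl/filters/project_inliers.h>
    #include <pcl/sample_consensus/model_types.h>
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>

    pcl::PointCloud<pcl::PointXYZ>::Ptr
    projectToXYPlane(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud)
    {
      // Plane ax + by + cz + d = 0 with a = b = d = 0, c = 1, i.e. the XY plane.
      pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients());
      coefficients->values = {0.0f, 0.0f, 1.0f, 0.0f};

      pcl::ProjectInliers<pcl::PointXYZ> proj;
      proj.setModelType(pcl::SACMODEL_PLANE);
      proj.setInputCloud(cloud);
      proj.setModelCoefficients(coefficients);

      pcl::PointCloud<pcl::PointXYZ>::Ptr projected(new pcl::PointCloud<pcl::PointXYZ>());
      proj.filter(*projected);  // all points now have z = 0
      return projected;
    }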