I'm currently working on a similar project. I use the laser_assembler package, which is part of the laser_pipeline stack, to build point clouds out of laser scans. You should be able to install it with `sudo apt-get install ros-indigo-laser-pipeline`, assuming indigo is your distro. octomap_server requires a sensor_msgs/PointCloud2 to build the map, so you'll have to adapt the periodic_snapshotter example from the laser_assembler GitHub repo to use PointCloud2 instead of PointCloud.
[EDIT1]
Whatever laser driver package you are using publishes the laser scans as sensor_msgs/LaserScan messages on the /scan topic. laser_assembler subscribes to /scan, reads the LaserScan messages, and converts the range data into sensor_msgs/PointCloud messages (x, y, z values) in the fixed frame you specify when you start the node. The assembled points are buffered until a service call requests them. The periodic_snapshotter example in the GitHub repo I linked calls the assemble_scans service, which collects all of the points assembled since the last service call and publishes them on the assembled_cloud topic. As octomap_server requires sensor_msgs/PointCloud2 messages, you will need to adapt periodic_snapshotter to call the assemble_scans2 service and publish PointCloud2 messages instead. I would also suggest increasing the rate, since periodic_snapshotter currently calls the service only every 5 seconds. Once you are publishing the point clouds, you run octomap_server; it subscribes to the cloud_in topic (though you can remap that) and builds the octomap.
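To make the range-to-point conversion concrete, here is a minimal, ROS-free sketch of what happens per scan: each range in a LaserScan is turned into an (x, y, z) point using the scan's angle_min and angle_increment. The function name and signature are my own for illustration; they are not part of the laser_assembler API (which also transforms the points into the fixed frame via tf).

```python
import math

def scan_to_points(angle_min, angle_increment, ranges, z=0.0):
    """Convert planar laser ranges to (x, y, z) points in the laser frame.

    Mirrors the per-scan conversion laser_assembler performs, before the
    points are transformed into the fixed frame. Invalid ranges (inf/NaN)
    are skipped, as there was no return for that beam.
    """
    points = []
    for i, r in enumerate(ranges):
        if not math.isfinite(r):
            continue
        angle = angle_min + i * angle_increment
        points.append((r * math.cos(angle), r * math.sin(angle), z))
    return points

# One beam straight ahead at 1 m, one at 90 degrees at 2 m:
pts = scan_to_points(0.0, math.pi / 2, [1.0, 2.0])
```

Note the z value is constant per scan: a planar laser only measures in its own plane, which is why the sensor height has to come from somewhere else (see below).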
Another thing you need to look at is some way to determine the height of the sensor, since a laser scan has no z value of its own. I would also suggest using an IMU together with the hector_imu_attitude_to_tf package, which lets the system know the true orientation of your robot and gives you the most accurate map.
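If the laser sits at a fixed height on the robot, a static transform is enough to get that height into the transform tree. A sketch of a launch-file entry using tf's static_transform_publisher; the frame names and the 0.30 m height are placeholders for your own robot:

```xml
<!-- args are: x y z yaw pitch roll parent_frame child_frame period_ms -->
<!-- "base_link", "laser", and the 0.30 m height are placeholders -->
<node pkg="tf" type="static_transform_publisher" name="laser_height_tf"
      args="0 0 0.30 0 0 0 base_link laser 100" />
```

If the height changes while mapping (e.g. a laser on a vertical actuator), you would instead write a small node that broadcasts this transform from your height sensor.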
[EDIT2] You use a launch file to run multiple nodes simultaneously. A launch file starts the ROS master if it isn't already running, lets you set parameters, lets you remap topics, and brings up several nodes at once. Some nodes you may have to write yourself, such as one that adds the sensor height to the transform tree, but for the most part you just have to worry about setting parameters and maybe remapping topics.
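A launch file for the pipeline described above might look roughly like this. The laser driver, the snapshotter package/node names, and the "odom" fixed frame are assumptions you would replace with your own setup; laser_scan_assembler and octomap_server_node are the real node names from their packages:

```xml
<launch>
  <!-- your laser driver (placeholder: a Hokuyo) publishing LaserScan on /scan -->
  <node pkg="hokuyo_node" type="hokuyo_node" name="laser_driver" />

  <!-- assemble scans into clouds in the fixed frame -->
  <node pkg="laser_assembler" type="laser_scan_assembler" name="scan_assembler">
    <param name="fixed_frame" value="odom" />
  </node>

  <!-- your adapted snapshotter: calls assemble_scans2, publishes PointCloud2
       on assembled_cloud (package and node name are placeholders) -->
  <node pkg="my_mapping" type="periodic_snapshotter2" name="snapshotter" />

  <!-- build the octomap from the assembled clouds -->
  <node pkg="octomap_server" type="octomap_server_node" name="octomap_server">
    <param name="frame_id" value="odom" />
    <remap from="cloud_in" to="assembled_cloud" />
  </node>
</launch>
```

The one remap here is the important part: octomap_server listens on cloud_in, so you point that at whatever topic your snapshotter publishes.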