
Pointcloud_to_laserscan ranges angular min and max?

asked 2011-04-15 13:45:11 -0600

Bart

updated 2016-10-24 09:03:31 -0600

ngrennan

The pointcloud_to_laserscan package in the turtlebot stack provides the cloud_to_scan.cpp nodelet. This program defines the laserscan angular field of view from -90 degrees to +90 degrees (180 degrees total), with a 0.5 degree angular spacing between beams. The Kinect has a field of view of only 55 to 57 degrees. The extra range values on either side of the sensor_msgs::LaserScan message are padded with max_range+1. Is there a technical reason for the extra range values being passed around?
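
For reference, here is a rough sketch of what that padding looks like when the message is built (this is not the actual cloud_to_scan.cpp source, and the range limits are assumed values):

    // Hypothetical sketch of a 180 degree LaserScan with 0.5 degree spacing,
    // where beams outside the Kinect's ~57 degree field of view are left
    // padded with range_max + 1 (the limits below are assumptions, not the
    // nodelet's actual parameters).
    #include <cmath>
    #include <cstddef>
    #include <sensor_msgs/LaserScan.h>

    sensor_msgs::LaserScan makePaddedScan()
    {
      sensor_msgs::LaserScan scan;
      scan.angle_min = -M_PI / 2.0;          // -90 degrees
      scan.angle_max =  M_PI / 2.0;          // +90 degrees
      scan.angle_increment = M_PI / 360.0;   // 0.5 degree spacing
      scan.range_min = 0.45;                 // assumed Kinect minimum range
      scan.range_max = 10.0;                 // assumed maximum range

      const std::size_t num_beams = static_cast<std::size_t>(
          std::ceil((scan.angle_max - scan.angle_min) / scan.angle_increment));
      // Every beam starts out as "no return"; only beams that fall inside
      // the Kinect's field of view get overwritten with real ranges later.
      scan.ranges.assign(num_beams, scan.range_max + 1.0f);
      return scan;
    }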

In the amcl_turtlebot launch file, laser_max_beams is defined as value="30". How does this relate to the 110 range readings in the Kinect laserscan message?


2 Answers


answered 2011-04-16 10:11:19 -0600

AHornung

amcl is not using the whole laser scan; that's usually not done in MCL. The localization's sensor model assumes conditionally independent points in a single laser scan. If you use a whole scan (no matter what sensor; a Hokuyo, for example, outputs more than 700 range readings in one horizontal scan), neighboring points will be highly correlated, which makes your localization overly confident. A common workaround is to use the scan sparsely, e.g. only in intervals of 5 or 10 degrees. This also speeds up localization.

For amcl that means it will divide your ("fake") 180 degree laser field of view evenly by 30, using one beam every 6 degrees.
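
A simplified sketch of that subsampling idea (not amcl's exact source, just the even-step selection described above):

    // Hedged sketch: pick an evenly spaced subset of beams so that only
    // max_beams readings are scored by the sensor model. With 360 beams over
    // 180 degrees at 0.5 degree spacing and max_beams = 30, the step works
    // out to every 12th beam, i.e. roughly one beam every 6 degrees.
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    std::vector<float> subsampleBeams(const std::vector<float>& ranges,
                                      std::size_t max_beams)
    {
      std::vector<float> used;
      if (ranges.empty() || max_beams == 0)
        return used;

      const std::size_t step = (max_beams > 1)
          ? std::max<std::size_t>(1, (ranges.size() - 1) / (max_beams - 1))
          : ranges.size();
      for (std::size_t i = 0; i < ranges.size(); i += step)
        used.push_back(ranges[i]);
      return used;
    }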


answered 2011-04-16 05:33:51 -0600

Bart

Reducing the laserscan width to the actual field of view of the Kinect resulted in the laserscan points being truncated on either side when the Kinect is panned. Since pointcloud_to_laserscan is simulating a fixed-mounted laser, when the Kinect is panned the new ranges are stored in the extra vector space on either side and display properly in rviz.

It may be possible to pass a narrow laserscan over a wireless link and assemble the wider laserscan in a separate node on the remote computer, but the programming overhead is likely not worth the savings in message overhead. The code to build up a fully filled out 180 degree scan from three panned 56 degree Kinect scans is complicated regardless. The simple answer may be to overlay three partially populated 180 degree scans, rather than work with three 56 degree scans.
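
As a hedged sketch of that overlay idea (assuming all the scans share the same 180 degree geometry and use range_max + 1 as the padding value):

    // Merge several partially populated 180 degree scans (one per pan
    // position) by keeping, for each beam index, the first real return and
    // leaving the range_max + 1 padding where no scan saw anything.
    // Assumes a non-empty input with identical scan geometry.
    #include <cstddef>
    #include <vector>
    #include <sensor_msgs/LaserScan.h>

    sensor_msgs::LaserScan overlayScans(const std::vector<sensor_msgs::LaserScan>& scans)
    {
      sensor_msgs::LaserScan merged = scans.front();
      for (std::size_t i = 0; i < merged.ranges.size(); ++i)
      {
        for (std::size_t s = 0; s < scans.size(); ++s)
        {
          if (scans[s].ranges[i] <= scans[s].range_max)  // real return, not padding
          {
            merged.ranges[i] = scans[s].ranges[i];
            break;
          }
        }
      }
      return merged;
    }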

I am still interested in any comments regarding the relationship between the navigation stack components and laserscan geometry.


Comments

How did you end up truncating the returned data? I am having some difficulty telling whether the data I am receiving came from the edges of the field of view.

haelannaleah (2017-03-10 12:13:05 -0600)


Stats

Asked: 2011-04-15 13:45:11 -0600

Seen: 1,514 times

Last updated: Apr 16 '11