To know where the robot is in the map, you want to transform the origin of the base_link frame into the map frame.
In addition to the map, SLAM systems (gmapping and karto) publish transform data. In particular, they provide the transform from the map frame to the odom frame. Another node (which one will depend on what kind of robot you're using) should already be publishing the transform from the odom frame to the base_link frame.
The tf library will chain those transforms together for you. The details will depend on whether you're writing in C++ or Python; check the tf documentation. As an example, from the command-line, you can do this while the SLAM system is running:
rosrun tf tf_echo map base_link
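From a node, you can do the same lookup programmatically. Here's a minimal Python sketch using the ROS1 tf API (the node name and loop rate are just placeholders):

#!/usr/bin/env python
import rospy
import tf

rospy.init_node('where_am_i')
listener = tf.TransformListener()

rate = rospy.Rate(1.0)
while not rospy.is_shutdown():
    try:
        # Look up the latest available transform from map to base_link;
        # trans is the base_link origin expressed in the map frame.
        (trans, rot) = listener.lookupTransform('map', 'base_link', rospy.Time(0))
        rospy.loginfo("Robot is at x=%.2f y=%.2f in the map frame", trans[0], trans[1])
    except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
        # The transform may not be available yet; try again on the next cycle.
        pass
    rate.sleep()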
Note: frame names are configurable, and your system may use different names from the ones I gave above, but their semantics are the same.
Given the robot's pose in the map frame, if you want the corresponding index into the occupancy grid map, you'd do something like this:
grid_x = (unsigned int)((map_x - map.info.origin.position.x) / map.info.resolution)
grid_y = (unsigned int)((map_y - map.info.origin.position.y) / map.info.resolution)
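Wrapped up as a small Python helper (the map_msg argument here is assumed to be a nav_msgs/OccupancyGrid message, e.g. one received on the /map topic):

def world_to_grid(map_msg, map_x, map_y):
    # Convert a point in the map frame to occupancy-grid cell indices,
    # ignoring any rotation in map_msg.info.origin.orientation.
    grid_x = int((map_x - map_msg.info.origin.position.x) / map_msg.info.resolution)
    grid_y = int((map_y - map_msg.info.origin.position.y) / map_msg.info.resolution)
    # The grid is stored row-major, so the flat index into map_msg.data is:
    index = grid_y * map_msg.info.width + grid_x
    return grid_x, grid_y, index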
In principle, you should also account for the orientation of the origin (map.info.origin.orientation), but in practice maps are usually published with no rotation, so you can often ignore it.
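If your map origin ever did carry a rotation, a sketch of the full conversion might look like this (assuming only yaw, which is the usual case for a 2D map):

import math

def world_to_grid_with_rotation(map_msg, map_x, map_y):
    # Recover yaw from the origin's quaternion.
    q = map_msg.info.origin.orientation
    yaw = math.atan2(2.0 * (q.w * q.z + q.x * q.y),
                     1.0 - 2.0 * (q.y * q.y + q.z * q.z))
    # Translate into the origin's frame, then rotate by -yaw before scaling.
    dx = map_x - map_msg.info.origin.position.x
    dy = map_y - map_msg.info.origin.position.y
    grid_x = int(( math.cos(yaw) * dx + math.sin(yaw) * dy) / map_msg.info.resolution)
    grid_y = int((-math.sin(yaw) * dx + math.cos(yaw) * dy) / map_msg.info.resolution)
    return grid_x, grid_y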