The paragraph below the breakdown of that line of code states:
Here we'll set some velocities that will cause the "base_link" frame to move in the "odom" frame at a rate of 0.1m/s in the x direction, -0.1m/s in the y direction, and 0.1rad/s in the th direction. This will more or less cause our fake robot to drive in a circle.
Basically, at each time step the robot's pose advances by those velocities multiplied by the elapsed time, which is what traces out the circle.
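To make that concrete, here is a minimal sketch in plain C++ (no ROS required) of the per-time-step integration; the 10 Hz time step and loop length are assumptions for illustration, and only the vx/vy/vth values come from the quoted paragraph:

#include <cmath>

int main()
{
  // Pose of "base_link" in the "odom" frame.
  double x = 0.0, y = 0.0, th = 0.0;

  // Velocities from the tutorial's example.
  const double vx = 0.1;    // m/s in x
  const double vy = -0.1;   // m/s in y
  const double vth = 0.1;   // rad/s in th

  const double dt = 0.1;    // assumed 10 Hz update rate

  for (int step = 0; step < 100; ++step)
  {
    // The velocities are expressed in the robot (base_link) frame,
    // so rotate them into the odom frame before integrating.
    const double delta_x = (vx * std::cos(th) - vy * std::sin(th)) * dt;
    const double delta_y = (vx * std::sin(th) + vy * std::cos(th)) * dt;

    x += delta_x;
    y += delta_y;
    th += vth * dt;
  }
  return 0;
}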
nav_msgs/Odometry messages take the following form:
# This represents an estimate of a position and velocity in free space.
# The pose in this message should be specified in the coordinate frame given by header.frame_id.
# The twist in this message should be specified in the coordinate frame given by the child_frame_id
Header header
string child_frame_id
geometry_msgs/PoseWithCovariance pose
geometry_msgs/TwistWithCovariance twist
You can find information concerning geometry_msgs here.
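To show how those fields map onto code, here is a hedged C++ sketch of filling and publishing a single nav_msgs/Odometry message; the node name, topic name, and the specific pose/twist values are assumptions for illustration, not the tutorial's exact code:

#include <ros/ros.h>
#include <nav_msgs/Odometry.h>
#include <tf/transform_datatypes.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "odometry_sketch");
  ros::NodeHandle n;
  ros::Publisher odom_pub = n.advertise<nav_msgs::Odometry>("odom", 50);

  nav_msgs::Odometry odom;
  odom.header.stamp = ros::Time::now();
  odom.header.frame_id = "odom";       // pose is given in the "odom" frame
  odom.child_frame_id = "base_link";   // twist is given in the "base_link" frame

  // Pose: position plus orientation as a quaternion, in the odom frame.
  odom.pose.pose.position.x = 0.0;
  odom.pose.pose.position.y = 0.0;
  odom.pose.pose.position.z = 0.0;
  odom.pose.pose.orientation = tf::createQuaternionMsgFromYaw(0.0);

  // Twist: linear and angular velocity, in the base_link frame.
  odom.twist.twist.linear.x = 0.1;
  odom.twist.twist.linear.y = -0.1;
  odom.twist.twist.angular.z = 0.1;

  odom_pub.publish(odom);
  ros::spinOnce();
  return 0;
}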
In a real situation, a robot driver, like p2os or turtlebot, will extract the position and velocity information (or data from the motor encoders) and produce the nav_msgs/Odometry messages. If you have built your own robot, you will have to create a node which extracts the motor encoder data and publishes nav_msgs/Odometry messages. Take a look at other robots' driver nodes for an example.
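If you do write such a node, the encoder-to-velocity step for a differential-drive robot might look roughly like this sketch; the wheel radius, wheel separation, and ticks-per-revolution values are made-up assumptions and would come from your own hardware:

#include <cmath>

struct WheelOdometry
{
  double wheel_radius = 0.05;       // meters (assumed)
  double wheel_separation = 0.30;   // meters (assumed)
  double ticks_per_rev = 1024.0;    // encoder resolution (assumed)

  // Convert encoder tick deltas measured over an interval dt into
  // body-frame (base_link) velocities, which would then fill the twist
  // part of a nav_msgs/Odometry message.
  void ticksToVelocity(long left_ticks, long right_ticks, double dt,
                       double& vx, double& vth) const
  {
    const double left_dist  = 2.0 * M_PI * wheel_radius * (left_ticks  / ticks_per_rev);
    const double right_dist = 2.0 * M_PI * wheel_radius * (right_ticks / ticks_per_rev);

    vx  = (left_dist + right_dist) / (2.0 * dt);               // forward velocity
    vth = (right_dist - left_dist) / (wheel_separation * dt);  // yaw rate
  }
};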