The difference between those is how/what they measure.
Odometry, IMU and visual odometry (I guess this is what you mean with vo) only measure the internal state of the robot, and thus only deliver measurements relative to a starting pose; they cannot correct for long-term drift. However, you can fuse them using robot_pose_ekf to get a more stable "fused" odometry estimate.
Then you need a localization source that provides measurements with respect to the world. This can be GPS, IPS, cameras with stored, localized features, or laser scanners with a given map. With those, you can correct the drift of the "fused" odometry.
There are several packages providing this functionality, or parts of it, e.g. robot_localization or amcl, to name just two.
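To see why relative sensors alone are not enough, here is a minimal plain-Python sketch (all numbers are made up for illustration): integrating per-step measurements lets any systematic error grow without bound, while an absolute sensor's error stays bounded no matter how far the robot drives.

```python
import random

random.seed(0)

# Robot drives 1 m per step along a line for 100 steps.
# Wheel odometry measures each step with a small bias plus noise,
# so the integrated pose estimate drifts without bound.
true_pos = 0.0
odom_pos = 0.0
bias = 0.01           # hypothetical 1 cm systematic error per metre driven
noise = 0.005

for _ in range(100):
    step = 1.0
    true_pos += step
    odom_pos += step + bias + random.gauss(0.0, noise)

drift = abs(odom_pos - true_pos)
print(f"odometry drift after 100 m: {drift:.2f} m")  # roughly 1 m

# An absolute sensor (e.g. GPS) measures position directly, so its
# error does not grow with distance travelled.
gps_error = abs(random.gauss(0.0, 0.05))
print(f"GPS error after 100 m: {gps_error:.2f} m")
```

This is the drift that the world-referenced localization corrects.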
EDIT
You are right, I did not think of adding a GPS sensor like this. It seems I misunderstood how they define visual odometry here. However, a world-fixed frame does not mean that the frame is fixed over multiple runs. Typically, any odometry starts off from the pose the robot is in when it is turned on. In contrast, there are fixed frames (like map coordinates) that are the same regardless of when you turn the robot on.
Thus, vo provides measurements in the vo frame, which can differ every time you launch the robot, depending on what you use as input.
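A small sketch of that point, using hypothetical 2D poses (x, y, yaw): the odom-style frame's origin is wherever the robot was powered on, so the same physical point gets different coordinates in different runs.

```python
import math

def world_to_odom(point, origin):
    """Express a world point in a frame whose origin is the power-on pose."""
    ox, oy, oyaw = origin
    dx, dy = point[0] - ox, point[1] - oy
    c, s = math.cos(-oyaw), math.sin(-oyaw)
    return (c * dx - s * dy, s * dx + c * dy)

landmark = (5.0, 3.0)                                   # fixed point in the world
run1 = world_to_odom(landmark, (0.0, 0.0, 0.0))         # powered on at the origin
run2 = world_to_odom(landmark, (2.0, 1.0, math.pi / 2)) # powered on elsewhere

print(run1)  # (5.0, 3.0)
print(run2)  # different coordinates for the same physical landmark
```

A map frame, by contrast, would give the landmark the same coordinates in both runs.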
EDIT 2
Typically, when you add a sensor providing "global corrections", you don't correct the odom frame. The tf odom->base_link is what is typically provided by internal sensors, i.e. wheel encoders, IMU and visual odometry.
If you have another sensor (GPS, laser scanner, ...), I would prefer to adapt the tf map->odom such that the tree map->odom->base_link is correct. This is how it is typically done for mobile robots in ROS, so I'd prefer this solution.
EDIT 3
This is where the magic happens in amcl. You can use the transformPose function of tf to get map->odom (called odom_to_map there) from the map->base_link pose that you estimate, and broadcast that (after you bring it into the correct format...)
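The computation itself is just transform algebra: map->odom = map->base_link * (odom->base_link)^-1. A minimal sketch with plain 2D poses (x, y, yaw) and made-up numbers (tf does the same with full 3D transforms):

```python
import math

def compose(a, b):
    """Chain two 2D rigid transforms a * b, each given as (x, y, yaw)."""
    ax, ay, ath = a
    bx, by, bth = b
    c, s = math.cos(ath), math.sin(ath)
    return (ax + c * bx - s * by, ay + s * bx + c * by, ath + bth)

def invert(t):
    """Inverse of a 2D rigid transform."""
    x, y, th = t
    c, s = math.cos(th), math.sin(th)
    return (-c * x - s * y, s * x - c * y, -th)

# Localization (laser scanner + map) estimates where the robot really is:
map_to_base = (10.0, 4.0, math.pi / 4)
# Drifting odometry thinks the robot is somewhere slightly different:
odom_to_base = (9.5, 4.2, math.pi / 4 + 0.05)

# Publish this map->odom so the tree map->odom->base_link is correct:
map_to_odom = compose(map_to_base, invert(odom_to_base))

# Check: the corrected tree reproduces the localization estimate.
check = compose(map_to_odom, odom_to_base)
print(check)
```

Note that odom->base_link is left untouched, so the robot's pose in odom stays smooth; all of the correction jump goes into map->odom.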