It appears that this is possible by

  1. remapping the output of the visual odometry from odom to e.g. /vo,
  2. feeding this into robot_localization along with other data sources for fusion,
  3. republishing from there to e.g. /odometry/filtered, and then
  4. telling the rtabmap node to use that instead of odom.

There may also be some tf frames that need adjusting.

I gather this from the provided sensor_fusion.launch file, as well as these two forum threads; a rough sketch of the wiring is below.
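
For concreteness, here's roughly what that wiring could look like in a single launch file. This is not the actual sensor_fusion.launch, just a minimal illustration of the steps above; the IMU topic (/imu/data), the node names, and the odom0/imu0 fusion flags are placeholders you'd adapt to your own robot, and the camera topic remaps for the odometry node are omitted.

    <launch>
      <!-- 1. Visual odometry, with its output remapped from odom to /vo -->
      <node pkg="rtabmap_ros" type="rgbd_odometry" name="rgbd_odometry">
        <remap from="odom" to="/vo"/>
        <!-- let the EKF own the odom -> base_link transform instead -->
        <param name="publish_tf" value="false"/>
      </node>

      <!-- 2./3. robot_localization fuses /vo with an IMU and publishes /odometry/filtered -->
      <node pkg="robot_localization" type="ekf_localization_node" name="ekf_localization">
        <param name="odom0" value="/vo"/>
        <rosparam param="odom0_config">
          [true,  true,  true,
           false, false, false,
           false, false, false,
           false, false, false,
           false, false, false]
        </rosparam>
        <param name="imu0" value="/imu/data"/>
        <rosparam param="imu0_config">
          [false, false, false,
           true,  true,  true,
           false, false, false,
           true,  true,  true,
           false, false, false]
        </rosparam>
      </node>

      <!-- 4. rtabmap consumes the fused estimate instead of the raw visual odometry -->
      <node pkg="rtabmap_ros" type="rtabmap" name="rtabmap">
        <remap from="odom" to="/odometry/filtered"/>
      </node>
    </launch>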

I'll edit this answer once I've actually tried this approach.

Edit: Well, this approach did work, but a major difficulty was tuning the covariances on my IMU measurements before fusing them with robot_localization, and deciding exactly which types of information should be fused. In particular, I never really settled on whether to fuse the accelerometer data directly, or whether to let it be used only for orientation computation (via gravity-vector subtraction).
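
In robot_localization terms, that decision comes down to the last three flags of imu0_config (the 15 booleans are x y z, roll pitch yaw, vx vy vz, vroll vpitch vyaw, ax ay az) plus the gravity-removal parameter. Roughly, the two variants look like this (the flag values are only illustrative):

    <!-- fuse orientation and angular velocity only; ignore the accelerometer -->
    <rosparam param="imu0_config">
      [false, false, false,
       true,  true,  true,
       false, false, false,
       true,  true,  true,
       false, false, false]
    </rosparam>

    <!-- ...or also fuse linear acceleration, letting robot_localization
         subtract gravity using the reported orientation -->
    <rosparam param="imu0_config">
      [false, false, false,
       true,  true,  true,
       false, false, false,
       true,  true,  true,
       true,  true,  true]
    </rosparam>
    <param name="imu0_remove_gravitational_acceleration" value="true"/>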

I tried some wonky things, like allowing a settling time on startup to measure the magnitude of the accelerometer vector, and then scaling it so that it was actually g (which it wasn't out of the box).

Additionally, I fused wheel odometry, which initially seemed to help a lot with the smoothness of the localization, but I think it eventually caused drift because the turning angle I reported wasn't what the robot was actually doing.

I have now replaced my Kinect with an Intel RealSense D435 and T265, and I'm using the localization stream that the black-box T265 provides directly as the odometry source, with visual odometry in RTAB-Map turned off completely. This works quite well. I haven't yet brought back the wheel odometry, since I expect it would still cause drift unless I fix that steering-bias problem. The realsense ROS packages provide their own way to ingest external odometry information (such as these wheel/steering measurements), so this would be a complete replacement for robot_localization.
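
Roughly, the current setup amounts to something like this (launch file and argument names are as I recall them from the realsense2_camera and rtabmap_ros packages, so check them against your installed versions; the D435 bringup and topic remaps are omitted):

    <!-- T265 tracking camera: publishes its pose on /camera/odom/sample.
         If I bring the wheel odometry back, this launch file also has
         topic_odom_in / calib_odom_file arguments for feeding it in. -->
    <include file="$(find realsense2_camera)/launch/rs_t265.launch"/>

    <!-- rtabmap with its own visual odometry disabled, using the T265
         stream as the odometry source instead -->
    <include file="$(find rtabmap_ros)/launch/rtabmap.launch">
      <arg name="visual_odometry" value="false"/>
      <arg name="odom_topic"      value="/camera/odom/sample"/>
    </include>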