
robot_localization IMU and visual odometry from rtabmap

asked 2017-05-14 20:56:51 -0600

natejgardner

I realize this topic has come up a lot, but I've read every related question on here and nothing is quite working yet.

I have a depth camera and an IMU, rigidly attached to the camera. I'm using rtabmap to produce visual odometry and to localize against a map I previously created, and I've written my own IMU node, called Tachyon, for my InvenSense MPU-9250 board.

Here are my frames and rosgraph:

[image: r_l frames]

[image: rosgraph]

The acceleration and angular velocity seem to be working fine; however, the orientation is relative to camera_link, not to map. A 90-degree rotation of the robot therefore leaves robot_localization thinking the IMU is facing 180 degrees from its starting heading, since the IMU is mounted 90 degrees rotated from camera_link. How can I make the acceleration and angular velocity from the IMU topic relative to camera_link, but the orientation data relative to map, so that r_l can fuse the IMU orientation with the orientation from rtabmap?

Here is my launch file.

Any help would be appreciated!

<launch>
  <node pkg="sextant_tachyon_node"
        type="sextant_tachyon_node"
        name="tachyon_node" />
  <node name="tachyon_transform" pkg="tf" type="static_transform_publisher"
            args="0 0 0.005 0 0 0 camera_link tachyon_node 100" />

  <node pkg="robot_localization" 
        type="ukf_localization_node" 
        name="ukf_localization"
        clear_params="true"
        output="screen">
        <param name="frequency" value="100" />
        <param name="sensor_timeout" value="10" />
        <param name="map_frame" value="map" />
        <param name="odom_frame" value="odom" />
        <param name="base_link_frame" value="camera_link" />
        <param name="transform_time_offset" value="0.0" />

        <param name="imu0" value="/tachyon_node/imu" />
        <param name="odom0" value="/rtabmap/odom" />

        <rosparam param="imu0_config">
          [false, false, false,
           true, true, true,
           false, false, false,
           true, true, true,
           true, true, true]
        </rosparam>

        <rosparam param="odom0_config">
         [true, true, true,
          true, true, true,
          false, false, false,
          false, false, false,
          false, false, false]</rosparam>

        <param name="odom0_differential" value="false" />
        <param name="odom0_relative" value="false" />
        <param name="print_diagnostics" value="true" />

        <param name="odom0_queue_size" value="500" />

  </node>


</launch>

1 Answer


answered 2017-07-25 09:47:15 -0600

Tom Moore

The problem is that the sensor_msgs/Imu message contains only a single frame_id, yet most IMUs report data in two separate frames: linear acceleration and angular velocity are in the sensor frame, while the orientation is in a world-fixed frame. r_l assumes that the frame_id is relative to the body frame, so if you have your IMU mounted with a 90-degree offset, it's going to apply that offset. This makes sense in general: if I have a robot pointing at magnetic north and its IMU is rotated so it's off by 90 degrees, that IMU is going to read 90 degrees even though my robot is facing 0 degrees. r_l will correct for that and tell you the robot is facing 0.
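To make that correction concrete, here is a minimal sketch of the yaw arithmetic described above (illustrative Python, not r_l's actual code), assuming a pure yaw mounting offset:

import math

imu_yaw = math.radians(90.0)     # raw IMU yaw while the robot faces 0 (magnetic north)
mount_yaw = math.radians(90.0)   # body->IMU yaw offset implied by the frame_id transform

robot_yaw = imu_yaw - mount_yaw  # r_l removes the mounting offset
print(math.degrees(robot_yaw))   # 0.0 -- the robot's actual heading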

In your case, you have two options:

  1. Handle all the transformations of your IMU data onboard, since you wrote the driver anyway. Then you can just zero out the camera->IMU transform. (See the sketch after this list.)
  2. You should do this regardless: note that you are fusing yaw from two different sources. The yaw from your IMU presumably comes from a magnetometer, whereas the rtabmap yaw will likely start at 0 when you start running, with your poses relative to that, so the two headings are not necessarily going to agree. As stated in the wiki, if you have two sources of pose information, choose the best one and fuse the other either (a) as a velocity, if available, or (b) with differential mode enabled for it (e.g., imu0_differential).
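For option 1, here is a minimal sketch of what that driver-side reframing could look like (assumptions: rospy, a +90-degree yaw mounting offset, and the input topic from your launch file; the node name and the /tachyon_node/imu_cam output topic are hypothetical):

#!/usr/bin/env python
# Hypothetical relay: rotate body-frame IMU data into camera_link so the
# camera->IMU transform can be zeroed out. Orientation is left untouched
# because it is already expressed in a world-fixed frame. Covariance
# rotation is omitted for brevity.
import rospy
import numpy as np
import tf.transformations as tft
from sensor_msgs.msg import Imu

# Assumed camera_link -> IMU mounting rotation (+90 deg yaw); substitute
# the real rotation of your MPU-9250 relative to the camera.
R_CAM_IMU = tft.euler_matrix(0.0, 0.0, np.pi / 2.0)[:3, :3]

def rotate(v):
    # Rotate a geometry_msgs/Vector3 from the IMU frame into camera_link.
    v.x, v.y, v.z = R_CAM_IMU.dot([v.x, v.y, v.z])
    return v

def callback(msg):
    msg.header.frame_id = 'camera_link'
    msg.angular_velocity = rotate(msg.angular_velocity)
    msg.linear_acceleration = rotate(msg.linear_acceleration)
    pub.publish(msg)

rospy.init_node('imu_reframe')
pub = rospy.Publisher('/tachyon_node/imu_cam', Imu, queue_size=10)
rospy.Subscriber('/tachyon_node/imu', Imu, callback)
rospy.spin()

You would then point imu0 at the reframed topic and publish an identity camera_link->tachyon_node transform.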
