
Robot_localization: fuse absolute with relative measurements

asked 2019-08-08 10:31:28 -0600

cv_ros_user

updated 2019-09-25 10:18:51 -0600

I want to fuse global pose from tag pose estimation with visual odometry and IMU.

I am using:

  • a RealSense D435i with realsense-ros that provides all the transformations between each sensor
  • and rtab_map for the visual odometry

Fusing relative measurements (visual odometry and IMU) looks correct with rviz:

ekf_local.yaml:

frequency: 30
sensor_timeout: 0.1
publish_tf: true
map_frame: map
odom_frame: odom
base_link_frame: camera_link
world_frame: odom

odom0: /odom
odom0_config: [true,  true,  true,
               true,  true,  true,
               true,  true,  true,
               true,  true,  true,
               true,  true,  true]
odom0_differential: false
odom0_relative: true

imu0: /imu/data
imu0_config: [false, false, false,
              true,  true,  true,
              false, false, false,
              true,  true,  true,
              true,  true,  true]

imu0_differential: true
imu0_relative: false
imu0_remove_gravitational_acceleration: true

I am not able to fuse absolute pose with visual odometry and IMU. I want to use something like this:

ekf_global.yaml:

frequency: 30
sensor_timeout: 0.1
publish_tf: true
map_frame: map
odom_frame: odom
base_link_frame: camera_link
world_frame: map

odom0: /odom
odom0_config: [true,  true,  true,
               true,  true,  true,
               true,  true,  true,
               true,  true,  true,
               true, true, true]

odom0_differential: false
odom0_relative: true

odom1: /tag_pose

odom1_config: [true,  true,  true,
               true,  true,  true,
               false, false, false,
               false, false, false,
               false, false, false]

odom1_differential: false
odom1_relative: false

imu0: /imu/data
imu0_config: [false, false, false,
              true,  true,  true,
              false, false, false,
              true,  true,  true,
              true,  true,  true]

imu0_differential: true
imu0_relative: false
imu0_remove_gravitational_acceleration: true

odom0 is the output of rtab_map:

frame_id: "odom"
child_frame_id: "camera_link"

imu0 is:

frame_id: "camera_imu_optical_frame"

(realsense-ros publishes all the transformations between camera_link and the other sensor frames, e.g. camera_imu_optical_frame.)

What should be the transformation for odom1?

Is it the transformation between the map (frame_id) and the camera_link (i.e. the raw transformation between the tag and the camera)? Or is it the transformation between the map (frame_id) and the odom frame? Since I have another ekf node that fuses the continuous data (VO and IMU), I can transform the raw tag pose if needed.

The output of the local ekf is clear: it is the transformation between odom and camera_link. The odom frame is the initial pose of the sensor when the nodes are launched.

What is the output of the global ekf (/ekf_global/odometry/filtered)? From the doc it should be the transformation between the map and odom. But from my understanding:

  • map frame is fixed and corresponds in my case to the tag frame
  • odom frame is the initial pose of the sensor and the transformation map / odom should be fixed
  • camera_link is the free floating camera frame

So does the ekf (/ekf_global/odometry/filtered) estimate the map --> odom transformation, and do I then have to use /ekf_local/odometry/filtered to get the transformation between the tag/map frame and camera_link?
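If that reading is right (ekf_global yields map --> odom and ekf_local yields odom --> camera_link), the tag/map-frame camera pose would just be the chain of the two. A minimal numpy sketch with 4x4 homogeneous matrices; the numeric values are placeholders, not real filter output:

```python
import numpy as np

# Hypothetical filter outputs, as homogeneous transforms (identity rotation here):
map_T_odom = np.eye(4)
map_T_odom[:3, 3] = [1.0, 0.0, 0.0]       # from ekf_global (map --> odom)

odom_T_camlink = np.eye(4)
odom_T_camlink[:3, 3] = [0.0, 2.0, 0.0]   # from ekf_local (odom --> camera_link)

# Chaining the transforms gives the camera pose in the map (tag) frame:
map_T_camlink = map_T_odom @ odom_T_camlink
# translation of map_T_camlink is [1, 2, 0]
```

In practice tf2 does this chaining for you when both transforms are on the tf tree (e.g. a lookup from map to camera_link), but the matrix product is all that is happening underneath.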


Edit:

I did some tests, recapped here.

The homogeneous transformation a_T_b means: X_a = a_T_b . X_b.
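With that convention, composing a transform with a point can be sketched using 4x4 homogeneous matrices. A minimal numpy illustration (the frames and values are placeholders):

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# a_T_b maps the coordinates of a point expressed in frame b into frame a:
a_T_b = make_T(np.eye(3), np.array([1.0, 0.0, 0.0]))
X_b = np.array([0.0, 2.0, 0.0, 1.0])   # homogeneous point in frame b
X_a = a_T_b @ X_b                      # X_a = a_T_b . X_b
# X_a is [1.0, 2.0, 0.0, 1.0]
```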

The vision library detects the tag and estimates the transformation camlink_T_tag. The goal was to still have the transformation camlink_T_tag when the tag is not detected (occlusion, ...).

Option 1:

  • map and tag frames are the same
  • ekf_local/odometry/filtered estimates odom_T_camlink
  • ekf_global/odometry/filtered estimates map_T_camlink
  • when the tag is detected ...
(more)

1 Answer


answered 2019-09-25 04:29:06 -0600

Tom Moore

updated 2019-09-25 04:30:54 -0600

OK, there's a lot to unpack here.

First, please always include a full sample message from every sensor input.

Second, both the map and odom frame are world-fixed frames. If you were to feed the same inputs to both EKF instances, you'd have the exact same output from both filters.

The odom-frame EKF is generating the odom->camera_link transform in your case. The map frame EKF is technically producing the map->camera_link transform. So it's producing the pose of the camera in the map frame, and the odom EKF is giving you the pose of the camera in the odom frame.

The problem is that tf doesn't permit re-parenting of frames, so the map frame EKF has to take its (internal) map->camera_link transform, and multiply it by the inverse of the odom->camera_link transform, which generates the map->odom transform. But all of that should be transparent: all you care about is that you can obtain the pose of the camera in the map and odom frames, and you can carry out transforms along that chain.
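In the a_T_b notation from the question, that internal step amounts to map_T_odom = map_T_camlink . inv(odom_T_camlink). A numpy sketch of the algebra (placeholder values, identity rotations for readability):

```python
import numpy as np

def invert_T(T):
    """Invert a rigid-body 4x4 homogeneous transform without a general inverse."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

# Internally, the map-frame EKF estimates map_T_camlink;
# the odom-frame EKF publishes odom_T_camlink on tf.
map_T_camlink = np.eye(4)
map_T_camlink[:3, 3] = [5.0, 0.0, 0.0]

odom_T_camlink = np.eye(4)
odom_T_camlink[:3, 3] = [2.0, 0.0, 0.0]

# tf forbids giving camera_link two parents, so the map EKF broadcasts this instead:
map_T_odom = map_T_camlink @ invert_T(odom_T_camlink)
# translation of map_T_odom is [3, 0, 0]: the correction between the two world frames
```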

As for your tag poses, the only packages I've used all give the pose of the tag in the camera frame. If you want to fuse tag poses, you need some sort of data store that contains the map-frame poses of all the AR tags and serves them up as, e.g., static transforms. That, or you can have a special node that looks up the tag pose in the map frame, and uses the current detection to compute the map-frame pose of the robot, which then goes to the EKF.
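The second option above can be sketched in a few lines: given a stored map-frame tag pose map_T_tag and the current detection cam_T_tag, the map-frame camera pose is map_T_tag . inv(cam_T_tag), and that is what would be fed to the EKF as an absolute measurement. A minimal numpy sketch with made-up poses:

```python
import numpy as np

def invert_T(T):
    """Invert a rigid-body 4x4 homogeneous transform."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

# Map-frame pose of the tag, from a survey / static transform store (assumed known):
map_T_tag = np.eye(4)
map_T_tag[:3, 3] = [10.0, 2.0, 0.0]

# Current detection: the tag pose expressed in the camera frame (cam_T_tag):
cam_T_tag = np.eye(4)
cam_T_tag[:3, 3] = [0.0, 0.0, 1.5]

# Map-frame camera pose, suitable as an absolute pose input to the map-frame EKF:
map_T_cam = map_T_tag @ invert_T(cam_T_tag)
# translation of map_T_cam is [10, 2, -1.5] with these identity rotations
```

A real node would also rotate the detection's covariance into the map frame and stamp the resulting PoseWithCovarianceStamped/Odometry message with frame_id: map.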

How do you get the map-frame poses of the AR tags? You can try recording them during mapping, or you can manually measure their locations and add them to your data store. But there's no way the EKF can just take a relative AR tag pose and somehow work with it. It has to be grounded to something.

EDIT: also, is there any reason you are fusing every single variable for odom0? And I'd ditch both yaw and linear acceleration from the IMU, if I were you. I know you have it in differential mode, but magnetometers are awful.


Comments

Thanks for the answer. I have edited my questions to add the tests I did.

Yes in my case I assume the tag does not move.

For odom0, I don't recall why I did this (I changed the values until I had something that kind of worked...). I think it should be at least:

odom0_config: [true,  true,  true,
               true,  true,  true,
               true,  true,  true,
               true,  true,  true,
               false, false, false]

since I use rtab_map in rgbd_odometry mode.

Noted for the IMU. The RealSense uses the Bosch BMI055 that should have an accelerometer and a gyroscope.

Is it better to use the raw odom0 and imu0 for ekf_global or to use directly the output of ekf_local/odometry/filtered? I have seen some contradictory info.

cv_ros_user (2019-09-25 10:31:34 -0600)

Yes, I think I have understood what you said about the IMU. I use the ROS Madgwick filter, btw. I'm not sure what absolute orientation could reliably be estimated with just the accelerometer. Most likely the IMU configuration should only fuse the angular velocity and the linear acceleration.

cv_ros_user (2019-09-27 11:18:44 -0600)
