robot_localization: enhance local positioning with IMU
I'm testing sensor fusion with the robot_localization package.
I've got a small wheeled robot with an Xsens IMU and a quasi-UWB local positioning system.
The local positioning system consists of two mounted ultrasonic beacons, each of which is localized separately against several static beacons spread around the room. The system is rather precise: the manufacturer reports a 2 cm standard deviation, and that seems believable. But the update rate is low (around 1 Hz), and the beacons report only position, no orientation.
As the robot moves on the flat floor, I use the relative positions of the two beacons to calculate the robot's position and 2D yaw (a simple midpoint for position, a simple relative vector for orientation). The result is accurate, except for position jumps between updates and noticeable yaw jitter.
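For reference, the calculation is just this; a minimal sketch, assuming both beacon positions are (x, y) tuples in the map frame (front/rear are illustrative names):

import math

def pose_from_beacons(front, rear):
    # front/rear are (x, y) beacon positions in the map frame
    x = 0.5 * (front[0] + rear[0])          # midpoint -> robot position
    y = 0.5 * (front[1] + rear[1])
    yaw = math.atan2(front[1] - rear[1],    # rear-to-front vector -> 2D yaw
                     front[0] - rear[0])
    return x, y, yaw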
I'm trying to use the IMU data and robot_localization to provide a smooth pose estimate between updates. For now, my setup is as follows:
from the beacons' data I construct a PoseWithCovarianceStamped message. I don't have reported covariances, so I build the covariance matrix by simply setting the diagonal elements for X, Y, Z and Z rotation to the squared standard deviation (e.g. if the beacons' X deviation is 0.02 m, I set covariance[0] to 0.0004). The frame_id of the message is set to 'map' (a construction sketch follows the setup description below).
the IMU publishes a standard sensor_msgs/Imu message; the only issue is that I have to strip the leading '/' from its frame_id, because the ekf node prints warnings otherwise. I run a separate correction node for this (also sketched below). A static transform from base_link to the IMU's frame_id is published as well.
ekf node 1 with world_frame set to map; inputs are the IMU data and the pose calculated from the beacons
ekf node 2 with world_frame set to odom; the IMU is the only input
the constructed pose and the output of odometry/filtered are visualized in rviz
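This is roughly how I fill in the beacon pose message; a minimal sketch (ROS 1 / rospy), where x, y, yaw come from the midpoint calculation above and STD_YAW is an assumed yaw sigma I picked myself, not a manufacturer figure:

import rospy
import tf.transformations as tft
from geometry_msgs.msg import PoseWithCovarianceStamped

STD_XYZ = 0.02   # beacon standard deviation from the manufacturer [m]
STD_YAW = 0.05   # assumed yaw standard deviation [rad], illustrative only

def make_beacon_pose(x, y, yaw, stamp):
    msg = PoseWithCovarianceStamped()
    msg.header.stamp = stamp
    msg.header.frame_id = 'map'
    msg.pose.pose.position.x = x
    msg.pose.pose.position.y = y
    q = tft.quaternion_from_euler(0.0, 0.0, yaw)
    msg.pose.pose.orientation.x = q[0]
    msg.pose.pose.orientation.y = q[1]
    msg.pose.pose.orientation.z = q[2]
    msg.pose.pose.orientation.w = q[3]
    # 6x6 covariance (x, y, z, roll, pitch, yaw), row-major; diagonal = variance
    cov = [0.0] * 36
    cov[0] = cov[7] = cov[14] = STD_XYZ ** 2   # x, y, z
    cov[35] = STD_YAW ** 2                     # yaw
    msg.pose.covariance = cov
    return msg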
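The frame_id correction node is basically a relay; a minimal sketch with illustrative topic names:

import rospy
from sensor_msgs.msg import Imu

def relay(msg):
    # drop the leading '/' that makes the ekf node complain
    msg.header.frame_id = msg.header.frame_id.lstrip('/')
    pub.publish(msg)

rospy.init_node('imu_frame_fixer')
pub = rospy.Publisher('imu/data_fixed', Imu, queue_size=10)
rospy.Subscriber('imu/data', Imu, relay)
rospy.spin()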
Everything is up and running, but the result is far from what I want. The filtered odometry follows the constructed pose closely: after each position jump (i.e. each update) the filtered pose drifts slowly toward the new position for about half a second, then jumps to the new position altogether.
Questions:
1) Could anyone give me a hint about which parameters I should tweak to achieve smooth, continuous movement of the filtered pose?
2) Is there existing ROS functionality to calculate the robot pose from the positions of two mounted beacons? Say, I write static tf transforms from base_link to the two beacons' hardpoints, and ROS calculates the robot position/orientation for me, possibly with some cool Kalman magic, and then feeds it into the EKF node?
Below are the YAML parameter files for my EKF nodes. Note that I use 2D mode for both, but turning it off doesn't change the situation much.
IMU-only node:
frequency: 100
sensor_timeout: 0.1
two_d_mode: true
transform_time_offset: 0.0
transform_timeout: 0.0
print_diagnostics: true
debug: false
debug_out_file: /path/to/debug/file.txt
publish_tf: true
publish_acceleration: false
map_frame: map # Defaults to "map" if unspecified
odom_frame: odom # Defaults to "odom" if unspecified
base_link_frame: base_link # Defaults to "base_link" if unspecified
world_frame: odom # Defaults to the ...
This looks like a great project, but I'm confused about what message type your beacon node publishes. Is it a PoseWithCovarianceStamped message? Also, is the initial estimate covariance for the IMU or for the UWB beacon position?