
Are odom, pose, twist and imu processed differently in robot_localization?

asked 2015-08-25 07:27:58 -0600

updated 2015-10-16 07:46:31 -0600

I wonder why robot_localization distinguishes between the four measurement types odom, pose, twist and imu. Is it only for convenience, because these are four common message types in ROS, or are they treated differently?

I quickly created an overview of the different message types and the data they contain:

                                     Linear acceleration ───────────────┐
                                     Angular velocity ─────────────┐    │
                                     Linear velocity ─────────┐    │    │
                                     Orientation ────────┐    │    │    │
                                     Position ──────┐    │    │    │    │
                                                    │    │    │    │    │
    ────────────────────────────────────────────────┼────┼────┼────┼────┼──
                                                    │    │    │    │    │
    Odometry                                        │    │    │    │    │
                                                    │    │    │    │    │
        std_msgs/Header header                                         
        string child_frame_id                                          
        geometry_msgs/PoseWithCovariance pose       ×    ×             
        geometry_msgs/TwistWithCovariance twist               ×    ×   

    ────────────────────────────────────────────────┼────┼────┼────┼────┼──
                                                    │    │    │    │    │
    PoseWithCovarianceStamped                       │    │    │    │    │
                                                    │    │    │    │    │                                                                         
        std_msgs/Header header                                         
        geometry_msgs/PoseWithCovariance pose       ×    ×             

    ────────────────────────────────────────────────┼────┼────┼────┼────┼──
                                                    │    │    │    │    │
    TwistWithCovarianceStamped                      │    │    │    │    │
                                                    │    │    │    │    │                                                                         
        std_msgs/Header header
        geometry_msgs/TwistWithCovariance twist               ×    ×   

    ────────────────────────────────────────────────┼────┼────┼────┼────┼──
                                                    │    │    │    │    │
    Imu                                             │    │    │    │    │
                                                    │    │    │    │    │                                                                         
        std_msgs/Header header                                         
        geometry_msgs/Quaternion orientation             ×             
        float64[9] orientation_covariance                              
        geometry_msgs/Vector3 angular_velocity                     ×   
        float64[9] angular_velocity_covariance                         
        geometry_msgs/Vector3 linear_acceleration                       ×
        float64[9] linear_acceleration_covariance                      

    ────────────────────────────────────────────────┴────┴────┴────┴────┴──

If I understand correctly, I could integrate orientation measurements from, e.g., a compass as odometry (the orientation part of the pose), as IMU data (orientation), or as a pose (orientation part). In the launch file I select only yaw, and all other measurements are ignored anyway.

Would the result be the same, or is there anything I have overlooked?


1 Answer


answered 2015-08-29 09:57:24 -0600

Tom Moore

Yes, the distinction is purely for convenience and compatibility with other nodes. If someone wants to use an existing IMU driver, that driver will almost certainly output a sensor_msgs/Imu message. Similarly, if they want to use an existing robot controller, that node will likely produce a nav_msgs/Odometry message. As an odometry message is simply a geometry_msgs/PoseWithCovarianceStamped combined with a geometry_msgs/TwistWithCovarianceStamped, it made sense to support those as well. The fact that we have the same settings for each sensor (e.g., odom0_config has the same form as twist0_config), even if that sensor doesn't measure every variable, is really just to eliminate confusion.
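
To make the "same form for every sensor" point concrete, here is a rough sketch of the 15-element boolean configuration vector that robot_localization expects for each input, whatever its message type. The sensor names odom0/imu0 and all values below are made-up examples (written here as Python lists; in a launch file they would be the corresponding rosparam values):

    # Order of the 15 flags:
    #   [x, y, z, roll, pitch, yaw,
    #    vx, vy, vz, vroll, vpitch, vyaw,
    #    ax, ay, az]

    # Wheel odometry: fuse x, y and yaw (example values only).
    odom0_config = [True,  True,  False,   # x, y, z
                    False, False, True,    # roll, pitch, yaw
                    False, False, False,   # vx, vy, vz
                    False, False, False,   # vroll, vpitch, vyaw
                    False, False, False]   # ax, ay, az

    # Compass fed in as an Imu message: fuse only yaw.
    imu0_config = [False, False, False,
                   False, False, True,     # yaw only
                   False, False, False,
                   False, False, False,
                   False, False, False]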

Internally, an odometry message is simply split into its constituent pose and twist messages, and then passed to the same preprocessing functions as the pose and twist messages would be if they had come from an external source. IMU messages get the same treatment, though there is a small amount of additional preprocessing that occurs with them, and they also contain acceleration data, which has its own preprocessing method.
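
As a rough illustration of that split (a Python sketch of the idea only; the node itself is written in C++ and its internals differ), a nav_msgs/Odometry contains exactly the pieces needed to build the stamped pose and twist messages:

    import copy

    from geometry_msgs.msg import (PoseWithCovarianceStamped,
                                   TwistWithCovarianceStamped)

    def split_odometry(odom):
        """Split a nav_msgs/Odometry into its pose and twist components."""
        pose = PoseWithCovarianceStamped()
        pose.header = copy.deepcopy(odom.header)      # pose is given in header.frame_id
        pose.pose = copy.deepcopy(odom.pose)

        twist = TwistWithCovarianceStamped()
        twist.header = copy.deepcopy(odom.header)
        twist.header.frame_id = odom.child_frame_id   # twist is given in child_frame_id
        twist.twist = copy.deepcopy(odom.twist)

        return pose, twist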

Your supposition is largely true: you can place data in more or less any message you want, though I'd be careful with IMU data. Ordinarily, all pose data gets transformed into the value of your world_frame parameter (map or odom), and all velocity data gets transformed into the frame specified by the base_link_frame parameter. The transformed measurements are then fused with the state estimate. IMUs get special treatment. When a user mounts an IMU onto the robot, they typically specify the IMU's position as a transform w.r.t. the base_link frame (e.g., a static transform from base_link to imu). For IMU data, we need to use that transform to transform the orientation data, which means we are not going to transform it into the world_frame, but rather into base_link.
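
For reference, a minimal sketch of publishing such a static transform from Python (the frame names and offset are made-up examples; the same thing is more commonly done with static_transform_publisher or a URDF):

    import rospy
    import tf2_ros
    from geometry_msgs.msg import TransformStamped

    # Example only: advertise a static base_link -> imu transform so that
    # robot_localization can rotate incoming IMU data into the body frame.
    rospy.init_node("imu_static_tf")

    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = "base_link"
    t.child_frame_id = "imu"
    t.transform.translation.x = 0.10   # IMU mounted 10 cm ahead of base_link (made up)
    t.transform.rotation.w = 1.0       # identity rotation between the frames

    broadcaster = tf2_ros.StaticTransformBroadcaster()
    broadcaster.sendTransform(t)
    rospy.spin()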

In any case, if you have data from an IMU, I would suggest feeding it to the filter as an IMU message. If you have pose or twist data, you can use whatever message type suits you.
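
If you do feed a compass in as an IMU message, a minimal sketch could look like the following (the function name and covariance values are hypothetical; the -1 convention for unreported fields comes from the sensor_msgs/Imu definition):

    import rospy
    from sensor_msgs.msg import Imu
    from tf.transformations import quaternion_from_euler

    def compass_to_imu(yaw, frame_id="imu"):
        """Wrap a compass heading (yaw, in radians) into a sensor_msgs/Imu."""
        msg = Imu()
        msg.header.stamp = rospy.Time.now()
        msg.header.frame_id = frame_id

        # Orientation: only the yaw component is meaningful here.
        qx, qy, qz, qw = quaternion_from_euler(0.0, 0.0, yaw)
        msg.orientation.x = qx
        msg.orientation.y = qy
        msg.orientation.z = qz
        msg.orientation.w = qw
        msg.orientation_covariance = [0.0] * 9
        msg.orientation_covariance[8] = 0.05   # yaw variance in rad^2 (example value)

        # Per the Imu message spec, mark unreported fields by setting element 0
        # of the corresponding covariance matrix to -1.
        msg.angular_velocity_covariance = [-1.0] + [0.0] * 8
        msg.linear_acceleration_covariance = [-1.0] + [0.0] * 8
        return msg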


Comments

You're saying odom = pose + twist. But pose and twist don't include covariance information, right? Or can I pass PoseWithCovariance as "pose" as well? As far as I understand, a pose with covariance needs to go into "odom".

Falko  ( 2015-10-16 06:08:23 -0600 )

Oh, thanks! Looks like I mixed up the message types. I fixed the overview in my question accordingly.

Falko  ( 2015-10-16 07:47:03 -0600 )
