
Pointers towards using robot_localization for VO + Wheel odometry

asked 2018-06-20 07:40:44 -0600

updated 2018-06-20 21:51:10 -0600

Hey, I am trying to set up an EKF fusion combining odometry coming from two topics:

  1. Visual Odometry (X, Y and yaw)
  2. Wheel Encoders (X, Y and yaw)

The main source of odometry is the wheel encoders; the visual odometry is there as a correction mechanism, much like when using IMUs. Having said that, my VO node publishes Twist values — is it better to also fuse the wheel odometry as Twist values in robot_localization?

I am getting the following error when I launch RViz to view the odometry:

Assertion `!pos.isNaN() && "Invalid vector supplied as parameter"' failed.

I am confused about how the TF should be set up. The frames of the wheel encoder odometry are as follows:

 frame_id: "odom"
 child_frame_id: "base_link"

The frames of the visual odometry are as follows:

frame_id: "raspicam"
child_frame_id: ''

AFTER LAUNCHING THE EKF NODE

(screenshot)

BEFORE LAUNCHING THE EKF NODE

(screenshot)

MY EKF LAUNCH FILE LOOKS LIKE THIS:

<node pkg="robot_localization" type="ekf_localization_node" name="ekf_localization_custom" clear_params="true"> <!-- renamed to ekf_localization_custom to prevent name conflicts -->
  <param name="frequency" value="30"/> <!-- this value should be fine -->
  <param name="sensor_timeout" value="0.1"/> <!-- this value should be fine -->
  <param name="two_d_mode" value="false"/> <!-- could make this true to stick to the XY plane -->

  <!-- based these values off of the ROSCon presentation -->
  <param name="map_frame" value="map"/>
  <param name="odom_frame" value="odom"/>
  <param name="base_link_frame" value="base_link"/>
  <param name="world_frame" value="odom"/>

  <param name="odom0" value="/optical_flow"/>
  <param name="odom1" value="/ubiquity_velocity_controller/odom"/>

  <!-- settings for using Twist -->
  <rosparam param="odom0_config">[false, false, false,
                                  false, false, false,
                                  true,  true,  false,
                                  false, false, true,
                                  false, false, false]</rosparam>

  <rosparam param="odom1_config">[false, false, false,
                                  false, false, true,
                                  true,  true,  false,
                                  false, false, true,
                                  false, false, false]</rosparam>

  <param name="odom0_differential" value="false"/>
  <param name="odom1_differential" value="false"/>
  <param name="publish_null_when_lost" value="false"/>
  <param name="odom0_relative" value="false"/>
  <param name="odom1_relative" value="false"/>

  <param name="odom0_queue_size" value="10"/>
  <remap from="/odometry/filtered" to="/odometry/visual"/>
</node>

[EDIT #1: adding sensor data samples]

Wheel Odometry

 header: 
  seq: 3786
  stamp: 
    secs: 1529497452
    nsecs: 845127379
  frame_id: "odom"
child_frame_id: "base_link"
pose: 
  pose: 
    position: 
      x: 0.50
      y: 0.20
      z: 0.0
    orientation: 
      x: 0.0
      y: 0.0
      z: 0.01
      w: 1.0
  covariance: [0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2]
twist: 
  twist: 
    linear: 
      x: 0.8334
      y: 0.0
      z: 0.0
    angular: 
      x: 0.0
      y: 0.0
      z: 0.1233
  covariance: [0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0 ...
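As a sanity check on the wheel odometry sample above, the yaw encoded in that orientation quaternion can be extracted with plain math (a small Python sketch, no ROS dependencies; the quaternion values are copied from the message above):

```python
import math

# Orientation quaternion from the wheel odometry sample above
# (a pure rotation about Z, since x and y are zero)
qz, qw = 0.01, 1.0

# For a yaw-only quaternion, yaw = 2 * atan2(qz, qw)
yaw = 2.0 * math.atan2(qz, qw)
print(round(yaw, 4))  # ~0.02 rad: the robot has barely turned
```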

Comments

Please post sample messages for every sensor input.

Tom Moore (2018-06-20 07:50:59 -0600)

Hey, I have edited the question and added the sample data. I think I have resolved the issue of RViz crashing; it was caused by the TF not being set up properly.

Now the covariance on the fused output keeps building up into a larger and larger value. How do I systematically tackle this?

chrissunny94 (2018-06-20 09:33:42 -0600)

1 Answer


answered 2018-06-26 05:00:47 -0600 by Tom Moore

updated 2018-06-26 07:15:40 -0600

You have a few issues.

  1. You are fusing velocities from your visual odometry source, but your visual odometry source has no child_frame_id. If you look at the definition of nav_msgs/Odometry, you'll see that the child_frame_id is the TF frame of the twist (velocity) data, but you are leaving this empty. That means the EKF is going to ignore it completely.

  2. You have two_d_mode set to false, which means you want a full 3D state estimate, but you are only fusing 2D variables. Unmeasured variables in the EKF will result in absolute explosions of those variables' covariance values, which will have irritating effects on other variables. Turn on two_d_mode or start measuring the other 3D variables.

  3. Your state estimate covariance is growing without bound for your X and Y position, because you are only fusing velocity data for those. The filter will integrate those velocities, but it will also integrate their errors. If you had, for example, a laser scanner, you could localize against the map to provide X/Y position that would cap the growth in covariance. You are fusing two data sources that accumulate error over time, though, so endless growth in your covariance is actually appropriate. You should really fuse yaw velocity from your wheel encoders, and not absolute yaw.

    Note, though, that if you fuse absolute X, Y, and yaw from your wheel odometry, AND your wheel odometry covariance is NOT calculated correctly (e.g., it's static, as opposed to growing without bound), that will cap the covariance as well, but it will be incorrect.

    In general, yes, I recommend fusing velocity data for odometry sources, including visual odometry. If you had some means of localizing the robot globally (e.g., GPS or scan-to-map localization or overhead camera localization), you would fuse those absolute pose values.
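The unbounded covariance growth described in point 3 is easy to reproduce with a one-dimensional toy filter (a hypothetical sketch, not robot_localization itself): each prediction step integrates a noisy velocity measurement, so the position variance accumulates the velocity variance at every step and never plateaus.

```python
# Toy 1-D dead reckoning: position is only ever *integrated* from
# noisy velocity, so its variance grows without bound.
dt = 0.1        # time step (s)
vel_var = 0.2   # variance of each velocity measurement (as in the sample covariance)
pos_var = 0.0   # initial position variance

history = []
for _ in range(100):
    # x += v * dt  =>  var(x) += var(v) * dt^2  (independent noise each step)
    pos_var += vel_var * dt ** 2
    history.append(pos_var)

print(history[9], history[99])  # variance keeps growing, linearly in the step count
```

With an absolute position source (e.g., scan-to-map localization), a correction step would pull this variance back down each cycle instead of letting it accumulate.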

EDIT in response to comments:

I'm not sure we're on the same page with what an IMU does or how it constrains drift (or doesn't). An IMU typically produces the following quantities:

  1. Absolute roll, pitch, and yaw
  2. Angular velocity
  3. Linear acceleration

If you fuse roll, pitch, and yaw into the EKF, that data is coming from accelerometers and magnetometers, which are measuring the IMU's orientation with respect to a fixed frame. So if you fuse those values, you will constrain drift only for those variables, and their covariances will reach a steady-state and will stop growing.

If you fuse velocity or linear acceleration data from an IMU, but do NOT fuse the roll, pitch, or yaw data, your roll, pitch, and yaw covariance will grow without bound.
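The difference is easy to see with a scalar Kalman filter sketch (hypothetical numbers): a state corrected by an absolute measurement (like IMU yaw) reaches a steady-state variance, while a state that only integrates velocity just accumulates process noise forever.

```python
q = 0.01  # process noise variance added each prediction step
r = 0.05  # variance of the absolute measurement (e.g. IMU yaw)

var_with_abs = 1.0  # filter that fuses an absolute yaw measurement
var_without = 1.0   # filter that only integrates yaw velocity

for _ in range(200):
    # Prediction step: both filters accumulate process noise
    var_with_abs += q
    var_without += q
    # Correction step: only the first filter gets an absolute measurement
    var_with_abs = var_with_abs * r / (var_with_abs + r)

print(var_with_abs, var_without)  # bounded vs. unbounded
```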

In this EKF, we are concerned with the following quantities:

  1. X, Y, Z position - can be measured directly, or integrated from linear velocity data.
  2. Roll, pitch, yaw - can be measured directly, or integrated from angular velocity data
  3. X, Y, Z velocity - can be measured directly, or integrated from linear acceleration data
  4. Roll, pitch, and yaw velocity - can only be measured directly
  5. X, Y, Z acceleration - can only be measured directly.

Comments

https://avisingh599.github.io/vision/...

I am taking this approach to build my visual odometry node.

But the drift is just too much. What if I use the visual node simply as an error-correction node, much like the IMU in an IMU-enabled EKF localization node?

Is there other similar work?

chrissunny94 (2018-06-26 06:04:40 -0600)

My greatest constraint is computational power.

I have to run everything on an RPi 3B+.

I have already tried viso2, ORB-SLAM2, and RPG-SVO (too slow!).

I recently came across REBVO, but have yet to test it.

chrissunny94 (2018-06-26 06:10:13 -0600)

Even if you did that, the error you're correcting is error in the velocity measurement, not the pose measurement. Odometry accumulates error over time, and your covariance will reflect that. The only way to constrain drift in the EKF output is to have an absolute pose measurement.

Tom Moore (2018-06-26 06:28:24 -0600)

While using an IMU, we are not getting an absolute pose, right?

So can a visual odometry node be used in a similar way?

chrissunny94 (2018-06-26 07:00:34 -0600)

Okay, now I understand better. One last question: is there a way to estimate angular velocity and linear acceleration?

recoverPose returns: R – recovered relative rotation, t – recovered relative translation.

chrissunny94 (2018-06-26 07:33:30 -0600)
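To the question in the comment above: given the relative rotation R and translation t between two frames (as recoverPose returns them), angular velocity and linear acceleration can be approximated with finite differences over the frame interval. A rough sketch with made-up numbers, reduced to the planar (yaw-only) case:

```python
import math

dt = 0.1  # time between camera frames (s) -- hypothetical

# Hypothetical relative rotation between two frames for planar motion,
# built here as a 2-D rotation matrix with a 0.05 rad yaw change
R = [[math.cos(0.05), -math.sin(0.05)],
     [math.sin(0.05),  math.cos(0.05)]]
theta = math.atan2(R[1][0], R[0][0])  # relative yaw angle

# Angular velocity: relative rotation angle over the frame interval
omega = theta / dt

# Linear acceleration: first difference of speeds, where each speed
# would come from the norm of the (scale-resolved) translation / dt
v_prev, v_curr = 0.80, 0.83  # hypothetical consecutive speeds (m/s)
accel = (v_curr - v_prev) / dt

print(omega, accel)
```

Note that recoverPose's translation is only known up to scale, so the speeds here assume scale has already been resolved (e.g., from the wheel odometry, as discussed below).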

"Another problem occurs when the camera performs just pure rotation: even if there are enough features, the linear system to calculate the F matrix degenerates. "

What if I use wheel odometry to recover absolute scale?

chrissunny94 (2018-06-26 08:03:32 -0600)

Your wheel odometry is generated by integrating encoder ticks. It has the same issue. If it's written correctly, the pose variables will have covariances that grow without bound.

Is it a problem for your covariances to grow? Or are you trying to reject bad measurements from your VO?

Tom Moore (2018-06-26 08:27:53 -0600)


2 followers

Stats

Asked: 2018-06-20 07:40:44 -0600

Seen: 1,500 times

Last updated: Jun 26 '18