
How to use robot_localization with ar_track_alvar?

asked 2015-08-10 08:33:49 -0600

updated 2015-08-11 02:57:00 -0600

I'm looking for a way to use visually detected markers from the ar_track_alvar module as measurements for the ekf_localization_node from the robot_localization module.

It seems that robot_localization has no message type for observed markers. Sure, I can invert an observed marker pose and use the result as a pseudo-observed robot pose. But how do I come up with realistic covariance information, which is needed for the PoseWithCovarianceStamped message? A marker gives me distance and direction (or x, y, z in a local coordinate frame), so my derived position and orientation will be highly correlated and their covariance matrix singular.

Is that a reasonable approach?
If not, how should I combine these nodes properly?
Are there any resources or tutorials you can suggest? (I googled a lot but couldn't find any.)

Edit: I'm thinking of implementing an unscented transform (UT) to derive the pose uncertainty from the pixel uncertainty. In other contexts I have had positive experiences with the UT. What do you think?
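For illustration, here is a rough Python sketch of the kind of unscented transform I have in mind. The pose solver is only passed in as f and is a placeholder for whatever PnP routine ends up being used; with the 8 pixel coordinates of the 4 marker corners this gives 2*8+1 = 17 sigma points.

    # Minimal sketch of a scaled unscented transform for propagating pixel
    # noise through a nonlinear pose solver. The solver is passed in as f
    # (a placeholder, not part of ar_track_alvar's API).
    import numpy as np

    def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
        """Propagate a Gaussian (mean, cov) through the nonlinear function f."""
        n = mean.size
        lam = alpha ** 2 * (n + kappa) - n
        sqrt_cov = np.linalg.cholesky((n + lam) * cov)

        # 2n + 1 sigma points (for 4 marker corners, n = 8 -> 17 points)
        sigma_points = [mean]
        for i in range(n):
            sigma_points.append(mean + sqrt_cov[:, i])
            sigma_points.append(mean - sqrt_cov[:, i])

        # Standard scaled-UT weights for mean and covariance
        wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
        wc = wm.copy()
        wm[0] = lam / (n + lam)
        wc[0] = lam / (n + lam) + (1.0 - alpha ** 2 + beta)

        # Push every sigma point through f and recombine.
        # Note: naively averaging orientation components (angles/quaternions)
        # this way is only an approximation.
        ys = np.array([f(p) for p in sigma_points])
        y_mean = np.sum(wm[:, None] * ys, axis=0)
        diffs = ys - y_mean
        y_cov = sum(w * np.outer(d, d) for w, d in zip(wc, diffs))
        return y_mean, y_cov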


Comments

There is an old package that claims to integrate visual landmarks into its EKF: http://wiki.ros.org/pose_ekf_slam . Maybe it could help.

Mehdi.  ( 2015-08-11 09:12:16 -0600 )

2 Answers


answered 2015-08-11 08:55:15 -0600

Tom Moore

Both ekf_localization_node and ukf_localization_node assume that all measurements provide either the robot's body frame (base_link) velocity, its world frame (map or odom) pose, or its body frame linear acceleration. Of course, if your data is provided in another coordinate frame, that's fine, so long as a transform exists between the source and target frames. Your approach of using an intermediate node or modifying the existing node to provide the data you need is definitely the way to go.
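As a very rough, untested sketch of what such an intermediate node might look like (the topic names, the marker's fixed map-frame pose, and the covariance are all placeholders; for brevity it also ignores the camera->base_link offset, which you would have to apply as well):

    #!/usr/bin/env python
    # Sketch of an intermediate node: invert the camera->marker pose reported
    # by ar_track_alvar and republish the implied robot pose as a
    # PoseWithCovarianceStamped that ekf_localization_node can fuse.
    import numpy as np
    import rospy
    import tf.transformations as tft
    from ar_track_alvar_msgs.msg import AlvarMarkers
    from geometry_msgs.msg import PoseWithCovarianceStamped

    # Assumed, known map-frame pose of the marker (identity rotation here).
    MARKER_IN_MAP = tft.translation_matrix((2.0, 0.0, 1.0))

    def pose_to_matrix(p):
        m = tft.quaternion_matrix((p.orientation.x, p.orientation.y,
                                   p.orientation.z, p.orientation.w))
        m[0:3, 3] = (p.position.x, p.position.y, p.position.z)
        return m

    def markers_cb(msg):
        for marker in msg.markers:
            cam_to_marker = pose_to_matrix(marker.pose.pose)
            # map->camera = map->marker * inverse(camera->marker)
            map_to_cam = np.dot(MARKER_IN_MAP, tft.inverse_matrix(cam_to_marker))

            out = PoseWithCovarianceStamped()
            out.header.stamp = msg.header.stamp
            out.header.frame_id = 'map'
            t = tft.translation_from_matrix(map_to_cam)
            q = tft.quaternion_from_matrix(map_to_cam)
            out.pose.pose.position.x, out.pose.pose.position.y, out.pose.pose.position.z = t
            (out.pose.pose.orientation.x, out.pose.pose.orientation.y,
             out.pose.pose.orientation.z, out.pose.pose.orientation.w) = q
            out.pose.covariance = [0.0] * 36  # placeholder; see the covariance discussion below
            pub.publish(out)

    rospy.init_node('marker_to_pose')
    pub = rospy.Publisher('marker_pose', PoseWithCovarianceStamped, queue_size=10)
    rospy.Subscriber('ar_pose_marker', AlvarMarkers, markers_cb)
    rospy.spin()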

Your second question is probably a better question for the Robotics Stack Exchange site. What you're asking is how to compute a covariance matrix of the robot's pose from a relative range and bearing measurement to a marker location, and I think the answer is sufficiently theoretical to warrant a question there. Off the top of my head, it feels like the error in the range and bearing from the robot to the marker is the same as the error in the range and bearing from the marker to the robot. If you can transform that range and bearing error to (X, Y, Z) error, then would it not be directly applicable to whatever pose measurement you compute? In any case, you'll have to be careful to compute the covariance in the world (map or odom) frame and not the robot's body (base_link) frame.
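As a sketch of that covariance transformation (assuming the measurement error is expressed over range, bearing, and elevation in the sensor frame; the input covariance and the sensor-to-world rotation are things you would have to supply yourself):

    # Propagate a range/bearing/elevation covariance through the
    # spherical-to-Cartesian Jacobian, then rotate the result into the
    # world (map/odom) frame.
    import numpy as np

    def range_bearing_cov_to_world(r, bearing, elevation, cov_rbe, R_world_sensor):
        sb, cb = np.sin(bearing), np.cos(bearing)
        se, ce = np.sin(elevation), np.cos(elevation)
        # Jacobian of (x, y, z) = (r*ce*cb, r*ce*sb, r*se) w.r.t. (r, bearing, elevation)
        J = np.array([[ce * cb, -r * ce * sb, -r * se * cb],
                      [ce * sb,  r * ce * cb, -r * se * sb],
                      [se,       0.0,          r * ce]])
        cov_xyz_sensor = J.dot(cov_rbe).dot(J.T)
        return R_world_sensor.dot(cov_xyz_sensor).dot(R_world_sensor.T)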

From a brief look on Google, it seems that other people have successfully applied the unscented transform to similar problems.

In any case, the approach suggested by @Jim Rothrock will at least get you some result. If you went that route, you'd probably want to inflate the diagonal values a bit so as to over-estimate the error, though I realize you'll lose the correlation between the variables.


Comments

I finally found some time to try the unscented transform. It turned out to be quite tedious to compute the 17 sigma poses (a calibrated camera with 4 observed points has no direct solution), so I decided to use approximate formulas for the expected uncertainty that I found in a photogrammetry lecture.

Falko  ( 2015-10-26 11:56:59 -0600 )

answered 2015-08-10 13:21:21 -0600

Jim Rothrock

But how do I come up with realistic covariance information

Just set the covariance matrix's diagonal values to some small value, such as 0.0001, and set the off-diagonal values to 0.0. Alternatively, you can set all the matrix values to 0.0, and ekf_localization_node will automatically set them to a small but nonzero value.
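For example, one way to fill in that covariance before publishing (the flattened 6x6 matrix is row-major over x, y, z, roll, pitch, yaw):

    # Fill a PoseWithCovarianceStamped with a fixed covariance:
    # 0.0001 on the diagonal, 0.0 everywhere else.
    import numpy as np
    from geometry_msgs.msg import PoseWithCovarianceStamped

    pose_msg = PoseWithCovarianceStamped()
    cov = np.zeros((6, 6))
    np.fill_diagonal(cov, 1e-4)
    pose_msg.pose.covariance = cov.flatten().tolist()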


Comments


I doubt this approach will work well. Although it will probably converge and yield some result, the statistical uncertainty should be determined more precisely. If it is set too low or too high, you effectively throw away either the odometry or the measured markers, not to mention the neglected correlations.

Falko  ( 2015-08-11 02:53:03 -0600 )
