
viso2 (visual odometry from mono_odometry)

asked 2019-04-27 15:35:25 -0600

Dylan

updated 2019-04-27 15:43:07 -0600

I'm trying to use the viso2 package to convert the images seen by my monocular camera (the bottom camera of an AR.Drone 2.0) into visual odometry, but I have some questions:

  • First, I'm using ROS Kinetic, and the wiki says the package is for Indigo. Does anyone know if it works on Kinetic?

  • I need some help completing these parameters:

    1. Bucketing parameters. I don't know what they are or what values to put there.
    2. Matcher parameters. I don't know what they are or what to write there either.
    3. Camera height and camera pitch. I only know those values when I see a predefined tag; what can I put there when I don't know them? Should I just put 1, for example?

2 Answers


answered 2019-04-27 23:09:43 -0600

janindu
  1. viso2_ros has a kinetic branch on GitHub, so it supports Kinetic.
  2. Bucketing parameters: if you extract features from an image, a feature-rich region will usually produce most of the features, and the other parts of the image will be underrepresented. The image is therefore divided into a grid of buckets, and at most n features are kept from each bucket. Once you understand this concept, ~max_features, ~bucket_width, and ~bucket_height are self-explanatory parameters.
  3. Matcher parameters configure the feature-matching algorithm. For example, ~nms_n and ~nms_tau are used in the non-maxima suppression. It is a good idea to start with the default values if you don't have insight into what these parameters mean.
  4. Monocular visual odometry can't resolve scale. This is why you need to give the camera height and camera pitch as parameters. You can read the pitch from your IMU and the height from your sonar (I believe you have previously asked a question with that setup, and I assume the setup hasn't changed). You can change the parameters dynamically using the C++ API; alternatively, there is a Python API as well.
  5. As for where these parameters need to be set: all static parameters can be set in the launch file using a parameters.yaml. This is a good tutorial. You can set the dynamic parameters using the C++ API as mentioned in [4].
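To make the bucketing idea in [2] concrete, here is a small standalone Python sketch. This is not viso2's actual code (libviso2 implements bucketing internally in C++); the function name and the (x, y, strength) feature layout are assumptions for illustration only:

```python
from collections import defaultdict

def bucket_features(features, bucket_width, bucket_height, max_features):
    """Keep at most `max_features` per grid cell so that feature-rich
    regions do not dominate. `features` is a list of (x, y, strength)."""
    buckets = defaultdict(list)
    for x, y, strength in features:
        # Assign each feature to its grid cell (bucket).
        cell = (int(x // bucket_width), int(y // bucket_height))
        buckets[cell].append((x, y, strength))
    kept = []
    for cell_feats in buckets.values():
        # Within each bucket, prefer the strongest responses.
        cell_feats.sort(key=lambda f: f[2], reverse=True)
        kept.extend(cell_feats[:max_features])
    return kept

# Five features crowded into one 50x50 cell plus one elsewhere;
# capping at 2 per bucket keeps 2 + 1 = 3 features in total.
feats = [(5, 5, 0.9), (6, 6, 0.8), (7, 7, 0.7), (8, 8, 0.6), (9, 9, 0.5),
         (120, 40, 0.4)]
print(len(bucket_features(feats, 50, 50, 2)))  # 3
```

With this picture, ~bucket_width and ~bucket_height set the cell size and ~max_features caps how many features survive per cell.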

Comments

Thanks :))). One basic question: I see 3 folders: viso2, viso2_ros and libviso2. The wiki page says that viso2_ros is a ROS wrapper around libviso2. I suppose I only have to modify the file inside the launch folder of viso2_ros, is that OK? Do I have to install anything?

Dylan  ( 2019-04-28 09:43:21 -0600 )

You only have to write your own launch file. However, note what Stephan has said: viso2 might not work for you.

janindu  ( 2019-04-28 19:42:20 -0600 )

answered 2019-04-28 02:47:48 -0600

Stephan

Viso2 has been developed for forward-looking cameras in cars. For mono VO it needs the camera pitch and the distance to the ground to estimate scale (it performs a ground-plane estimation with those parameters). For downward-looking cameras on drones, lsd_slam should be better suited. See also http://www.ros.org/news/2014/12/ardro...
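As a side note on why those two parameters recover scale: with a known camera height and a known downward pitch, simple trigonometry gives metric distances to ground points. A minimal sketch, assuming pitch is measured downward from horizontal (this is an illustrative convention, not viso2's actual implementation, which estimates the ground plane itself):

```python
import math

def ground_distance(camera_height, camera_pitch):
    """Distance from the camera to where the optical axis meets the
    ground, with pitch measured downward from horizontal, in radians."""
    if camera_pitch <= 0:
        # A level or upward-looking axis never intersects the ground,
        # so there is no ground point to pin the scale to.
        raise ValueError("optical axis never intersects the ground")
    return camera_height / math.sin(camera_pitch)

# Camera 1 m above the ground, pitched 30 degrees down: the ground point
# on the optical axis is 2 m away, which fixes the metric scale.
print(round(ground_distance(1.0, math.pi / 6), 6))  # 2.0
```

This is also why the monocular odometer breaks down when the height above the ground varies, as it does on a drone.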


Comments

Thanks, but I don't need to build a map. I just need to know the position of a predefined tag (it's on the floor and I recognize it using ar_track_alvar). What I want to do is see that tag, get the distance between the tag and my quadcopter (that distance is given by ar_track_alvar), and go there. I'm trying to use an EKF because the tag measurements are very noisy and the quadcopter moves a lot (only small movements, but the errors are hard to handle because the FOV and the size of the tag are quite similar), and I need smoother movements. But to use the EKF I need visual odometry, which is why I'm trying to use viso2 :)

Dylan  ( 2019-04-28 09:42:08 -0600 )

Maybe something like SVO would be a better fit then? Viso2's mono odometer is not going to work for your use case, as it needs a fixed distance from the camera to the ground (see http://wiki.ros.org/viso2_ros#Monocul... ).

Stephan  ( 2019-04-28 16:41:48 -0600 )

I read the documentation and it seems to be what I was looking for. Thanks! So the output of SVO is the visual odometry, and I can use that VO to feed an EKF (also fed by an accelerometer, a gyroscope, a sonar height, etc.) to get a better state estimate of my quadcopter?

Dylan  ( 2019-04-29 08:15:07 -0600 )

Another question: I read that the output of SVO has no metric unit, so how can I feed it into the EKF? (The idea is to build the EKF using robot_localization.)

Dylan  ( 2019-04-29 12:05:15 -0600 )

To get better answers to your follow-up questions, I think it would be better to open new questions instead of posting them as comments here.

Stephan  ( 2019-04-29 14:30:41 -0600 )


Stats

Seen: 1,513 times

Last updated: Apr 28 '19