Solutions for visually aided 2D/3D SLAM for UAV with ROS
I've been wondering what possibilities there are to do visually aided 2D/3D SLAM for a UAV in indoor settings with ROS. I've got an IMU talking to ROS, a Kinect sensor and two UVC cameras. What I originally thought would work is feeding robot_pose_ekf with vslam_system's visual odometry (/vo) and IMU data, but I'm having trouble (described here) obtaining /vo. So I'm wondering what other possibilities I have.
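For what it's worth, the /vo input robot_pose_ekf wants is just a nav_msgs/Odometry with a timestamp and a filled-in 6x6 pose covariance (the filter only uses the pose part), with the node's vo_used and imu_used parameters set to true. Here's a minimal sketch of that contract; the pose values are placeholders for whatever the real VO pipeline would produce:

```python
#!/usr/bin/env python
# Minimal sketch of the nav_msgs/Odometry message robot_pose_ekf
# expects on /vo. The pose here is a placeholder, not a real estimate.
import rospy
from nav_msgs.msg import Odometry

def main():
    rospy.init_node('vo_publisher')
    pub = rospy.Publisher('/vo', Odometry)
    rate = rospy.Rate(30)  # roughly the camera frame rate
    while not rospy.is_shutdown():
        msg = Odometry()
        msg.header.stamp = rospy.Time.now()
        msg.header.frame_id = 'odom'
        # Placeholder pose: substitute the visual-odometry estimate here.
        msg.pose.pose.orientation.w = 1.0
        # robot_pose_ekf needs a sensible 6x6 pose covariance; a very
        # large value effectively disables an unobserved axis.
        cov = [0.0] * 36
        for i, var in enumerate([0.01, 0.01, 0.01, 0.05, 0.05, 0.05]):
            cov[i * 6 + i] = var
        msg.pose.covariance = cov
        pub.publish(msg)
        rate.sleep()

if __name__ == '__main__':
    main()
```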
My general goal is very similar to Rainer Hessmer's, but with a cheap DIY quadrotor platform. In the beginning I'd use sonar-based altitude hold and assume no pitch/roll, in the hope that this will work. Similar projects I'm aware of so far are:
- As mentioned above, assuming I can make vslam_system work.
- A great solution I can only dream of copying is MIT's SLAM with Kinect on a Quadrotor; their code has not been released yet and will not be any time soon.
- Patrick Bouffard's Quadrotor Altitude Control and Obstacle Avoidance is similar, and could maybe be integrated with gmapping using pointcloud_to_laserscan to provide SLAM (see the first sketch after this list).
- As suggested here, I could maybe use rgbdslam. I'd need to extract 3D odometry estimates from rgbdslam and fuse them with the IMU's readings to eliminate drift (see the second sketch after this list). Would this be possible?
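Regarding the gmapping route: the idea is to collapse a thin horizontal slice of the Kinect point cloud into a sensor_msgs/LaserScan. Below is a rough sketch of that conversion, assuming the cloud arrives on /camera/depth/points in the usual optical-frame convention (x right, y down, z forward); the topic name and slice thickness are assumptions, and in practice the C++ pointcloud_to_laserscan nodelet would be far faster than this per-point Python loop:

```python
#!/usr/bin/env python
# Sketch of the "fake laser" idea: collapse a thin horizontal slice of
# the Kinect cloud into a sensor_msgs/LaserScan that gmapping can use.
import math
import rospy
from sensor_msgs.msg import LaserScan, PointCloud2
from sensor_msgs import point_cloud2

SLICE_HALF_HEIGHT = 0.05           # keep points within +/- 5 cm of the sensor plane
ANGLE_MIN, ANGLE_MAX = -0.5, 0.5   # Kinect's ~57 deg horizontal FOV, in radians
NUM_BINS = 320

def cloud_callback(cloud, pub):
    scan = LaserScan()
    scan.header = cloud.header
    scan.angle_min, scan.angle_max = ANGLE_MIN, ANGLE_MAX
    scan.angle_increment = (ANGLE_MAX - ANGLE_MIN) / NUM_BINS
    scan.range_min, scan.range_max = 0.45, 6.0
    ranges = [float('inf')] * NUM_BINS
    for x, y, z in point_cloud2.read_points(cloud, field_names=('x', 'y', 'z'),
                                            skip_nans=True):
        if abs(y) > SLICE_HALF_HEIGHT:
            continue                    # outside the horizontal slice
        angle = math.atan2(x, z)        # bearing in the horizontal plane
        if not ANGLE_MIN <= angle < ANGLE_MAX:
            continue
        i = int((angle - ANGLE_MIN) / scan.angle_increment)
        # Nearest return wins in each angular bin, like a real laser.
        ranges[i] = min(ranges[i], math.hypot(x, z))
    scan.ranges = ranges
    pub.publish(scan)

def main():
    rospy.init_node('kinect_fake_laser')
    pub = rospy.Publisher('/scan', LaserScan)
    rospy.Subscriber('/camera/depth/points', PointCloud2, cloud_callback, pub)
    rospy.spin()

if __name__ == '__main__':
    main()
```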
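And for the rgbdslam route: since rgbdslam broadcasts its pose estimate over tf, one way to get 3D odometry out of it would be a small bridge node that looks up that transform and republishes it as nav_msgs/Odometry on /vo for robot_pose_ekf to fuse with the IMU. A sketch; the frame names ('/map', '/openni_camera') and the flat covariance value are assumptions about what rgbdslam actually broadcasts and how much to trust it:

```python
#!/usr/bin/env python
# Sketch: bridge rgbdslam's tf pose to a /vo Odometry for robot_pose_ekf.
# Frame names below are assumptions -- check what rgbdslam broadcasts.
import rospy
import tf
from nav_msgs.msg import Odometry

def main():
    rospy.init_node('rgbdslam_to_vo')
    listener = tf.TransformListener()
    pub = rospy.Publisher('/vo', Odometry)
    rate = rospy.Rate(30)
    while not rospy.is_shutdown():
        try:
            # Assumed frames: rgbdslam's map frame -> the camera frame.
            trans, rot = listener.lookupTransform('/map', '/openni_camera',
                                                  rospy.Time(0))
        except (tf.LookupException, tf.ConnectivityException,
                tf.ExtrapolationException):
            rate.sleep()
            continue
        msg = Odometry()
        msg.header.stamp = rospy.Time.now()
        msg.header.frame_id = 'odom'
        (msg.pose.pose.position.x, msg.pose.pose.position.y,
         msg.pose.pose.position.z) = trans
        (msg.pose.pose.orientation.x, msg.pose.pose.orientation.y,
         msg.pose.pose.orientation.z, msg.pose.pose.orientation.w) = rot
        cov = [0.0] * 36
        for i in range(6):
            cov[i * 6 + i] = 0.05   # rough trust in the rgbdslam estimate
        msg.pose.covariance = cov
        pub.publish(msg)
        rate.sleep()

if __name__ == '__main__':
    main()
```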
I'm open to all suggestions on the feasibility of the above and beyond. The main problem I'm having is obtaining visual odometry from the Kinect or the stereo cameras. Any help is appreciated. I'd be more than happy to open up and describe my solution once I get it working.