SLAM Algorithm to combine IMU measurements with 2D image or Point Cloud?

asked 2017-03-31 10:47:25 -0600


Hi all.

We are a research group at Aalborg University currently investigating the use of SLAM for indoor positioning of drones. We have decided to use ROS to test out different implementations and to develop our own algorithm and controller.

The short version of our question is:

Which SLAM algorithm would you recommend for localization using a 2D camera or point-cloud data from a depth sensor (RealSense), while also incorporating other sensor measurements (IMU + GPS)?


The detailed explanation and reasoning behind this question follows. In our search for previous SLAM work where the primary goal is real-time localization (pose estimation), we have not found much. A lot of effort has gone into EKF-SLAM and FastSLAM for real-time localization, but mainly focused on 2D/planar navigation with LiDAR sensors. Otherwise, much of the research, especially with cameras and RGB-D sensors, seems to focus on the mapping portion of SLAM.

In our case we want to focus on pose estimation and obtain as reliable and robust a position estimate as possible. We would also like to include other sensor information, such as attitude estimates (roll, pitch, yaw) from the drone's flight controller, in the SLAM problem to enhance the positioning. Furthermore, we have an indoor positioning system capable of delivering a position measurement at a slow rate (~1 Hz), but the measurements are quite noisy and the sensor is likely to drop out in certain areas of the building. Hence we would like to incorporate these measurements as well, but not rely on them alone, which is why we are investigating SLAM in the first place.
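To make that last point concrete, below is a minimal, hypothetical sketch (Python/NumPy, not tied to any particular ROS package) of what we have in mind: the slow, noisy absolute position fix is fused into a pose filter as an extra EKF measurement update, with a simple chi-square gate to handle outliers and dropouts. The state layout, noise values, and gating threshold are placeholders, not a finished design.

    import numpy as np

    def position_update(x, P, z, R):
        """EKF-style update with a direct 3D position measurement z and covariance R."""
        H = np.hstack([np.eye(3), np.zeros((3, x.size - 3))])  # measurement picks out the position states
        y = z - H @ x                         # innovation
        S = H @ P @ H.T + R                   # innovation covariance
        # Chi-square gate (99.9%, 3 DOF): skip the update on outliers or dropouts
        if y @ np.linalg.solve(S, y) > 16.27:
            return x, P
        K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
        x = x + K @ y
        P = (np.eye(P.shape[0]) - K @ H) @ P
        return x, P

    # Example: 9-state filter (position, velocity, accel bias); 1 Hz fix with 0.5 m std. dev.
    x, P = np.zeros(9), np.eye(9)
    z = np.array([1.2, -0.4, 0.8])            # noisy absolute position fix [m]
    x, P = position_update(x, P, z, R=np.eye(3) * 0.5**2)

The idea is that the filter (or the SLAM back end) keeps running on camera/IMU data alone when the indoor positioning system drops out, and the absolute fixes only pull the estimate back when they pass the gate.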

Can any of you point us to previous SLAM work or suggest useful paths for us to investigate?

At the moment our plan is defined as follows:

  • Extend FastSLAM to support features in 3D space and estimate a full 6D pose (see the rough sketch after this list)
  • Use either a 2D camera or the point cloud output of a 3D depth sensor (RealSense, similar to Kinect).
  • Investigate how other sensor information can be incorporated into FastSLAM
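As a concrete starting point for the first item, here is a rough, hypothetical data-structure sketch (Python) of what a 6D FastSLAM particle could look like: each particle carries an SE(3) pose sample (position plus orientation quaternion) and one small EKF (3D mean and 3x3 covariance) per landmark, conditioned on that particle's trajectory. All names and noise values are placeholders, and quat_multiply is just a helper defined for the example.

    from dataclasses import dataclass, field
    import numpy as np

    def quat_multiply(q1, q2):
        """Hamilton product of two (w, x, y, z) quaternions."""
        w1, x1, y1, z1 = q1
        w2, x2, y2, z2 = q2
        return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                         w1*x2 + x1*w2 + y1*z2 - z1*y2,
                         w1*y2 - x1*z2 + y1*w2 + z1*x2,
                         w1*z2 + x1*y2 - y1*x2 + z1*w2])

    @dataclass
    class Landmark:
        mean: np.ndarray   # 3D feature position in the world frame
        cov: np.ndarray    # 3x3 covariance, maintained by a per-landmark EKF

    @dataclass
    class Particle:
        position: np.ndarray = field(default_factory=lambda: np.zeros(3))
        orientation: np.ndarray = field(default_factory=lambda: np.array([1.0, 0.0, 0.0, 0.0]))  # (w, x, y, z)
        weight: float = 1.0
        landmarks: dict = field(default_factory=dict)   # feature id -> Landmark

    def propagate(p, delta_pos, delta_quat, noise_std=0.01):
        """Sample the motion model: apply an IMU/odometry increment plus noise."""
        p.position = p.position + delta_pos + np.random.randn(3) * noise_std
        p.orientation = quat_multiply(p.orientation, delta_quat)
        p.orientation /= np.linalg.norm(p.orientation)  # keep the quaternion normalized

In this structure, each camera or depth-sensor observation would only update the landmark EKFs of each particle and reweight the particles, while the IMU/attitude information from the flight controller would enter through the motion model used in propagate.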

But is this a reasonable plan at all?


Best regards,

Thomas Jespersen
