
How to do Localization and Navigation in 3D using octomap from RGBD-SLAM? [closed]

asked 2012-08-10 02:03:43 -0600 by Sudhan

updated 2014-01-28 17:13:18 -0600 by ngrennan

I am using ROS Electric. I am able to build an OctoMap with the help of the RGBDSLAM package; the map is in '.bt' format. Are there any packages available for doing localization and navigation in 3D using this octomap?

Note: localization should be done without using a laser.


Closed for the following reason: the question is answered; the right answer was accepted by Icehawk101 (close date 2016-11-22).

1 Answer


answered 2012-08-10 02:20:32 -0600 by AHornung

updated 2012-09-25 03:50:32 -0600

We have a 6D localization running in an OctoMap for our humanoid robots. You can find details in the publication "Humanoid Robot Localization in Complex Indoor Environments". It's running MCL (particle filtering) and uses ray casting for the sensor model.
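To give a rough idea of the MCL approach (this is not the humanoid_localization code, just a minimal 1D sketch with a hypothetical range sensor against a known wall; the real package does this in 6D with ray casting into an OctoMap):

```python
# Minimal Monte Carlo Localization sketch: motion update, sensor-likelihood
# weighting, and resampling. 1D world with a single wall at MAP_WALL;
# the "ray cast" is just the distance from the pose to the wall.
import math
import random

random.seed(0)
MAP_WALL = 10.0  # hypothetical wall position the range sensor measures against

def sense(x):
    """Expected range reading at pose x (ray cast to the wall)."""
    return MAP_WALL - x

def localize(true_x, n_particles=500, steps=20, motion=0.3, sensor_noise=0.2):
    particles = [random.uniform(0.0, MAP_WALL) for _ in range(n_particles)]
    for _ in range(steps):
        true_x += motion
        z = sense(true_x) + random.gauss(0.0, sensor_noise)  # noisy measurement
        # Motion update: propagate every particle with motion noise.
        particles = [p + motion + random.gauss(0.0, 0.05) for p in particles]
        # Sensor update: weight each particle by the measurement likelihood.
        weights = [math.exp(-((sense(p) - z) ** 2) / (2 * sensor_noise ** 2))
                   for p in particles]
        total = sum(weights) or 1e-12
        weights = [w / total for w in weights]
        # Resample proportional to weight.
        particles = random.choices(particles, weights=weights, k=n_particles)
    return sum(particles) / n_particles, true_x

est, truth = localize(2.0)
print(f"estimate: {est:.2f}, true pose: {truth:.2f}")
```

In 6D the same loop applies, except the sensor model ray-casts each beam of the real sensor into the octree instead of computing a closed-form distance.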

The code is now published at http://ros.org/wiki/humanoid_localization

It's mostly designed for humanoid robots and still being polished, but I'm sure you can use much of the sensor model code as an example.

If you want to implement your own localization, you can use the function castRay in OctoMap.
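For illustration, here is a conceptual sketch of what castRay does, stepping along a ray until it hits an occupied voxel. It uses a plain set of occupied voxel indices instead of a real octree; the actual C++ API is octomap::OcTree::castRay(origin, direction, end, ...).

```python
# Conceptual ray cast against a voxel map: march from the origin along the
# (normalized) direction in resolution-sized steps until an occupied voxel
# is found or max_range is exceeded.
import math

def cast_ray(occupied, origin, direction, resolution=0.1, max_range=5.0):
    """Return the first occupied voxel index hit, or None if none within range."""
    norm = math.sqrt(sum(d * d for d in direction))
    step = [d / norm * resolution for d in direction]
    x, y, z = origin
    traveled = 0.0
    while traveled < max_range:
        x, y, z = x + step[0], y + step[1], z + step[2]
        traveled += resolution
        voxel = (int(x // resolution), int(y // resolution), int(z // resolution))
        if voxel in occupied:
            return voxel
    return None

# A wall of occupied voxels at x = 1.0 m (index 10 at 0.1 m resolution).
wall = {(10, j, k) for j in range(-5, 6) for k in range(-5, 6)}
hit = cast_ray(wall, origin=(0.0, 0.0, 0.0), direction=(1.0, 0.0, 0.0))
print("first occupied voxel hit:", hit)
```

The real castRay traverses the octree hierarchy instead of stepping through a flat grid, which is what makes it efficient on large maps.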

For map building, planning, and collision checking in an OctoMap, look at the 3d_navigation stack and the publication "Navigation in Three-Dimensional Cluttered Environments for Mobile Manipulation".


Comments

Yes, I am interested, and you can publish the code. Is it possible to do ray casting with a Kinect, or is a laser necessary?

Sudhan (2012-08-10 02:46:26 -0600)

OctoMap doesn't care about your sensor. You can raycast with any distance sensor, all it needs is a direction and an origin. The sensor model may be a little harder to tweak though.

AHornung (2012-08-10 02:49:41 -0600)

I have a Kinect, but it can't measure objects closer than 1.2 m. Are there any solutions for that? Also, I just saw one more package (3d_navigation), which is maintained by you. Are both the same?

Sudhan (2012-08-10 02:57:01 -0600)

The minimum range should be around 0.7 m, but that is a general problem with such a sensor. I added 3d_navigation to my answer; they serve different purposes. You will have to read the papers for the details.

AHornung (2012-08-10 05:17:41 -0600)

The code is now public, I edited my answer. It's still being polished and finalized for point clouds, but it may give you some idea.

AHornung (2012-09-25 04:30:17 -0600)

Thank you very much

Sudhan (2012-09-26 21:34:12 -0600)

If your question is answered, please mark the answer as correct (checkmark).

AHornung (2012-09-28 01:25:21 -0600)

I marked this as the correct answer long ago, but for some reason the checkmark is not working properly in my browser.

Sudhan (2012-10-10 23:41:39 -0600)

Stats

Seen: 9,708 times

Last updated: Sep 25 '12