How to create semantic maps using ROS and Kinect?
I wish to create a robotic "Seeing Eye Dog" to assist the blind. It will perceive the world using a Kinect sensor and will be based on a TurtleBot, a Bilibot, or something similar. The bot will initially create:
- A semantic map of the environment: walls, doors, floors, windows, switchboards, trees, etc.
- A database of frequently encountered objects such as furniture, gadgets, and people.
It will probably construct the maps and models using RGBDSLAM, and it may use the octomap_mapping stack before doing further analysis.
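For the "further analysis" step, my rough idea is a small rospy node that consumes the Kinect point cloud and will eventually do the semantic labelling. This is only a sketch; the topic name /camera/depth/points and the node name are my assumptions based on the OpenNI driver, and the analysis itself is just a placeholder:

```python
#!/usr/bin/env python
# Minimal sketch: subscribe to the Kinect point cloud and hand each cloud to a
# placeholder analysis step. The topic /camera/depth/points depends on which
# driver/launch file is used, so treat it as an assumption.

import rospy
from sensor_msgs.msg import PointCloud2
from sensor_msgs import point_cloud2


def cloud_callback(cloud):
    # Read x, y, z points, skipping the NaNs the Kinect produces at depth holes.
    points = list(point_cloud2.read_points(cloud, field_names=("x", "y", "z"),
                                           skip_nans=True))
    # Placeholder for the actual semantic analysis (plane/wall/door detection,
    # matching against a database of known objects, etc.).
    rospy.loginfo("Received cloud with %d valid points", len(points))


if __name__ == "__main__":
    rospy.init_node("semantic_map_analyzer")
    rospy.Subscriber("/camera/depth/points", PointCloud2, cloud_callback)
    rospy.spin()
```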
I have gone through the introductory ROS tutorials and tried out the new rgbdslam package on a PC with a Kinect.
ROS seems to be evolving fast, especially in the area of semantic mapping. From the papers by Radu Bogdan Rusu, it appears that this kind of analysis, especially for kitchen environments, has already been implemented.
I want to avoid reinventing the wheel. What ready-made building blocks (stacks) would be useful for my project, and how should I get started? I am a novice in ROS, so all help is appreciated.
In return, I will give back to ROS by documenting my explorations as a tutorial on "Creating Semantic Maps using ROS and Kinect".