Navigation planning based on Kinect data in 2.5D?
I have a wheeled robot with a front-mounted SICK laser scanner.
The recent addition of a Kinect lets me use the SICK laser for longer-range navigation planning and mapping, and the Kinect for various 3D tasks.
So, my idea is to convert the Kinect point cloud to laser scans at various heights (via PCL). Say, for instance, my SICK laser is mounted 8" above the ground. It will not tell me about a curb or some other obstacle that sits just under 8". If I were to slice the Kinect's point cloud at an appropriate Z value and publish that slice as a new laser scan topic, I could then use it for navigation and write some code to decide what to do at that Z level.

A simple example would be to determine, based on wheel size, the height of an obstacle my robot can negotiate, and slow the robot to an appropriate speed. It could also check for height clearance while driving around by creating a laser scan that corresponds to the highest Z value of the robot.
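To make the idea concrete, here is a minimal sketch of the slicing step. It assumes the cloud has already been transformed into a frame where Z is up and X points forward; the function name, field of view, and range limits are my own placeholders, not an existing package:

```cpp
// Sketch: slice a Kinect point cloud at a given height band and fold it
// into a simulated LaserScan. FOV and range limits below are rough
// guesses for a Kinect, not measured values.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/filters/passthrough.h>
#include <sensor_msgs/LaserScan.h>
#include <cmath>
#include <limits>

sensor_msgs::LaserScan sliceToScan(
    const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud,
    float z_min, float z_max)
{
  // Keep only the points inside the height band of interest.
  pcl::PointCloud<pcl::PointXYZ>::Ptr slice(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PassThrough<pcl::PointXYZ> pass;
  pass.setInputCloud(cloud);
  pass.setFilterFieldName("z");
  pass.setFilterLimits(z_min, z_max);
  pass.filter(*slice);

  sensor_msgs::LaserScan scan;
  scan.angle_min = -M_PI / 4;          // roughly the Kinect's horizontal FOV
  scan.angle_max =  M_PI / 4;
  scan.angle_increment = M_PI / 360;   // 0.5 degree bins
  scan.range_min = 0.45;               // Kinect's approximate near limit (m)
  scan.range_max = 5.0;

  const size_t n =
      std::ceil((scan.angle_max - scan.angle_min) / scan.angle_increment);
  scan.ranges.assign(n, std::numeric_limits<float>::infinity());

  // Project each point onto the x-y plane; keep the closest hit per bin.
  for (const auto& p : slice->points) {
    const float range = std::hypot(p.x, p.y);
    const float angle = std::atan2(p.y, p.x);
    if (angle < scan.angle_min || angle >= scan.angle_max) continue;
    const size_t bin = (angle - scan.angle_min) / scan.angle_increment;
    if (range < scan.ranges[bin]) scan.ranges[bin] = range;
  }
  return scan;
}
```

The same function could then be called once per height band of interest (wheel height, bumper height, the robot's tallest point, etc.), each publishing to its own scan topic.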
I see this being useful for quadcopters, too... it is still not full 3D navigation, but it could allow for some decent obstacle avoidance in the Z dimension by writing some code to determine which Z height has the clearest path.
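For picking the clearest Z level, a naive scoring pass over the per-height scans might look something like this; clearestLevel is a hypothetical helper, and "clearest" here just means the largest total unobstructed range:

```cpp
// Sketch: given one simulated scan per height band, score each band by
// how much free range it offers and pick the most open one.
#include <sensor_msgs/LaserScan.h>
#include <algorithm>
#include <cmath>
#include <vector>

// Returns the index of the scan with the largest total unobstructed range.
size_t clearestLevel(const std::vector<sensor_msgs::LaserScan>& scans)
{
  size_t best = 0;
  double best_score = -1.0;
  for (size_t i = 0; i < scans.size(); ++i) {
    double score = 0.0;
    for (float r : scans[i].ranges) {
      // Treat inf/NaN (no return) as fully open out to range_max.
      score += std::isfinite(r) ? std::min(r, scans[i].range_max)
                                : scans[i].range_max;
    }
    if (score > best_score) { best_score = score; best = i; }
  }
  return best;
}
```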
My question is: is anyone using laser scans at various heights to evaluate navigation at different Z levels? Is this kinect2laser approach a viable solution, or is there a better way to do it?
I see this as a possible workaround for this problem.