Problem simulating kinect in gazebo
Hi,
I am trying to simulate the turtlebot in gazebo, using the turtlebot_gazebo stack (with robot.launch).
However, the depth data coming from the (simulated) kinect is all wrong. Every point in the point cloud published on the /camera/depth/points topic has a depth of 1.0. This happens when the light source in the .world file is of the directional kind. When I use a point or spot light source instead, the bottom half of the point cloud has a depth of 1.0 and the top half is either NaN or 1.0 (depending on whether something is within range or not). This is preventing me from using any SLAM algorithm in gazebo. The camera RGB image comes through correctly.
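For reference, this is the kind of check I used to confirm the symptom. The sketch below is plain Python with no ROS dependency: it unpacks z values from a PointCloud2-style packed buffer (assuming little-endian float32 x/y/z fields, 16-byte point step, z at byte offset 8, which matches the layout published for the simulated kinect here; adjust the offsets to your actual message fields) and counts how many depths are exactly 1.0 or NaN.

```python
import math
import struct

def depth_stats(data, point_step=16, z_offset=8):
    """Count z values that are exactly 1.0 or NaN in a packed
    PointCloud2-style buffer (little-endian float32 fields assumed)."""
    ones = nans = total = 0
    for off in range(0, len(data), point_step):
        (z,) = struct.unpack_from('<f', data, off + z_offset)
        total += 1
        if math.isnan(z):
            nans += 1
        elif z == 1.0:
            ones += 1
    return ones, nans, total

# Fake buffer of 4 points (x, y, z, padding); z = 1.0, 1.0, NaN, 0.5
pts = [(0, 0, 1.0, 0), (1, 0, 1.0, 0), (0, 1, float('nan'), 0), (1, 1, 0.5, 0)]
buf = b''.join(struct.pack('<4f', *p) for p in pts)
print(depth_stats(buf))  # -> (2, 1, 4)
```

In a real ROS node the same counting would be done over msg.data inside a subscriber callback; on my setup the "ones" count equals the total for every message.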
I am using Ubuntu 11.04 and ROS Electric.
Thanks a lot.