You can also check out ccny_rgbd, which provides tools for fast visual odometry with RGB-D cameras.
EDIT as per K_Yousif's question:
I'm not sure if this is also the case with RGBDSLAM, but in our implementation, we've separated the visual odometry from the loop-closing problem. We do provide a mapping interface, which operates on top of the visual odometry and can perform SLAM, but it is not required for the VO.
In terms of the VO, we use a Kalman-filter-based approach that does not require computing feature descriptors or doing RANSAC matching. This lets us use cheap features (such as Lucas-Kanade corners). On an i7 processor, I'm getting a processing time of roughly 10 to 15 ms per frame. Also, our VO has constant time and space requirements (I'm not sure whether this is true of RGBDSLAM).
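To make the "cheap features, no descriptors, no RANSAC" idea concrete, here is a minimal sketch in Python/OpenCV. It is not the ccny_rgbd code: the intrinsics, function names, and the simple frame-to-frame rigid fit are assumptions made for illustration only, and the Kalman-filter fusion against a persistent model is left out.

```python
# Minimal sketch (not the ccny_rgbd source) of frame-to-frame RGB-D
# odometry with "cheap" features: Shi-Tomasi corners tracked by pyramidal
# Lucas-Kanade, lifted to 3-D with the depth image, and aligned with a
# closed-form rigid fit -- no descriptors, no RANSAC. The Kalman-filter
# fusion against a persistent model mentioned above is omitted.
import numpy as np
import cv2

# Hypothetical pinhole intrinsics -- replace with your camera calibration.
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def track_corners(gray_prev, gray_curr, max_corners=300):
    """Detect Shi-Tomasi corners in the previous frame and track them
    into the current frame with pyramidal Lucas-Kanade optical flow."""
    p0 = cv2.goodFeaturesToTrack(gray_prev, maxCorners=max_corners,
                                 qualityLevel=0.01, minDistance=7)
    if p0 is None:
        return np.empty((0, 2)), np.empty((0, 2))
    p1, status, _ = cv2.calcOpticalFlowPyrLK(gray_prev, gray_curr, p0, None)
    ok = status.ravel() == 1
    return p0[ok].reshape(-1, 2), p1[ok].reshape(-1, 2)

def backproject(pixels, depth_m):
    """Lift 2-D pixel coordinates to 3-D camera-frame points using the
    depth image (in metres). Returns the points and a validity mask so
    the caller can drop the corresponding pixels in the other frame."""
    u, v = pixels[:, 0], pixels[:, 1]
    z = depth_m[v.astype(int), u.astype(int)]
    valid = z > 0
    x = (u[valid] - CX) * z[valid] / FX
    y = (v[valid] - CY) * z[valid] / FY
    return np.column_stack([x, y, z[valid]]), valid

def rigid_fit(src, dst):
    """Closed-form least-squares rigid transform (Horn/Kabsch):
    finds R, t such that dst ~= R @ src + t."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

if __name__ == "__main__":
    # Sanity check of the rigid fit on synthetic, perfectly matched points.
    rng = np.random.default_rng(0)
    pts = rng.uniform(-1.0, 1.0, (50, 3))
    a = 0.1
    R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0,        0.0,       1.0]])
    t_true = np.array([0.05, -0.02, 0.10])
    R_est, t_est = rigid_fit(pts, pts @ R_true.T + t_true)
    print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
```

The sketch only illustrates why tracked corners are enough: Lucas-Kanade already gives correspondences between consecutive frames, so no descriptor extraction or RANSAC matching step is needed. The actual filtering and model management in ccny_rgbd are more involved than this.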
The wiki pages have a bit more information about the pipeline.