How does rtabmap fuse wheel odometry with visual odometry?
I set up my robot with a TurtleBot and a Kinect and ran rtabmap following this tutorial:
-- http://wiki.ros.org/rtabmap_ros/Tutor...
First I blocked the Kinect camera's view and moved the TurtleBot.
Then I moved the TurtleBot so that the travel distance was calculated only from the Kinect images.
I think the rtabmap algorithm internally fuses the image information with the wheel odometry.
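This is just how I currently picture the fusion, a minimal sketch I wrote myself (not rtabmap's actual code): the wheel odometry provides a motion prediction, and the visual estimate corrects it, weighted by their variances like a per-axis Kalman-style update. The function name and all the numbers below are made up for illustration.

```python
import numpy as np

def fuse(pred, pred_var, meas, meas_var):
    """Fuse a prediction with a measurement using inverse-variance weighting."""
    k = pred_var / (pred_var + meas_var)   # gain: trust the measurement more when it is less noisy
    fused = pred + k * (meas - pred)       # correction step
    fused_var = (1.0 - k) * pred_var       # uncertainty shrinks after fusion
    return fused, fused_var

# One motion step of the robot, expressed as (dx, dy) in meters.
wheel_delta  = np.array([0.50, 0.00])     # wheel odometry: 0.5 m forward
wheel_var    = np.array([0.02, 0.02])     # wheels can slip, so larger variance
visual_delta = np.array([0.46, 0.01])     # visual odometry from the Kinect images
visual_var   = np.array([0.005, 0.005])   # assumed more precise in this example

fused_delta, fused_var = fuse(wheel_delta, wheel_var, visual_delta, visual_var)
print("fused motion:  ", fused_delta)     # lands between the two, closer to the visual estimate
print("fused variance:", fused_var)
```

Is this roughly the right idea, or does rtabmap do something different?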
How is this fusion actually done?
And do you have a paper of your own about fusing wheel odometry and visual odometry?
Thank you ^ ^