Confusion on implementation of multisensor fusion [closed]
Hello all! I am somewhat confused about how to implement multisensor fusion and link a few libraries together. My idea is to fuse data from an ordinary webcam, a Hokuyo URG laser, and wheel odometry from a Pioneer P3-AT.
I have read through a few packages - robot_pose_ekf, hector_mapping, gmapping, ROSARIA and viso2_ros - and I am confused about how to connect the topics that these packages subscribe to and publish.
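For context, here is a rough sketch of the wiring I have in mind so far: robot_pose_ekf listens on the topics `odom` (wheel odometry) and `vo` (visual odometry), so the outputs of ROSARIA and viso2_ros would need to be remapped onto those names. The exact topic names below (`/RosAria/pose`, `/mono_odometer/odometry`) are my assumptions about the defaults and may need adjusting:

```xml
<launch>
  <!-- Fuse wheel odometry (from ROSARIA) with visual odometry (from viso2_ros).
       Topic names on the "to" side are assumed defaults, not verified. -->
  <node pkg="robot_pose_ekf" type="robot_pose_ekf" name="robot_pose_ekf">
    <param name="output_frame" value="odom_combined"/>
    <param name="odom_used" value="true"/>
    <param name="imu_used" value="false"/>  <!-- no IMU on my setup -->
    <param name="vo_used" value="true"/>
    <remap from="odom" to="/RosAria/pose"/>
    <remap from="vo" to="/mono_odometer/odometry"/>
  </node>
</launch>
```

Is this the right general shape, i.e. remapping each package's odometry output onto the input topics robot_pose_ekf expects, and then feeding the fused pose to the mapping node?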
Also, for my situation, which would be better to implement: gmapping or hector_mapping?