How does TurtleBot 3 do localization?
Hey all. I am trying to understand the localization part of the TurtleBot 3 Burger. From what I understand of turtlebot3_core, it uses an IMU and the wheel encoders to calculate rotational values, translates them into odometry, and updates the robot's pose in the world map.
Please kindly correct me if I am wrong. Aside from odometry, does the LIDAR play a part in localization? There are several localization algorithms (EKF, etc.); which one does TurtleBot 3 employ?
Regards, Bobby
Edit: Hi Pujie,
Yes, I do know SLAM. I am trying to dive deep into ROS for a report. From my understanding, ROS localization uses the Adaptive Monte Carlo Localization (AMCL) technique. In layman's terms: I throw particles around the map, and each particle is a candidate pose. Each time I scan with the laser (Kinect, LIDAR, etc.), the particles whose predicted view matches the scan are kept and the rest die off, so over time I know my position.
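To check my own understanding, here is a toy sketch of the Monte Carlo Localization idea above — not TurtleBot's actual code, and all the numbers (corridor length, noise levels) are made up for illustration. A robot in a 1-D corridor measures its range to the wall; the filter predicts with odometry, weights particles by the sensor reading, and resamples:

```python
import random
import math

# Toy 1-D Monte Carlo Localization: a robot in a 10 m corridor
# measures distance to the wall at x = 10 with a noisy range sensor.
# Particle filter steps: predict (motion + noise), weight (sensor
# likelihood), resample. All constants here are illustrative only.

WALL = 10.0
N = 500

def mcl_step(particles, odom_delta, measured_range,
             motion_noise=0.05, sensor_noise=0.2):
    # 1. Predict: apply the odometry delta with added noise.
    moved = [p + odom_delta + random.gauss(0, motion_noise) for p in particles]
    # 2. Weight: Gaussian likelihood of the range reading for each hypothesis.
    weights = [math.exp(-((WALL - p) - measured_range) ** 2 /
                        (2 * sensor_noise ** 2)) for p in moved]
    total = sum(weights) or 1e-12
    weights = [w / total for w in weights]
    # 3. Resample: draw particles in proportion to their weight.
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)
particles = [random.uniform(0, WALL) for _ in range(N)]  # unknown start
true_x = 2.0                                             # actual position
for _ in range(10):
    true_x += 0.5                                        # robot drives 0.5 m
    measurement = WALL - true_x + random.gauss(0, 0.2)   # noisy range to wall
    particles = mcl_step(particles, 0.5, measurement)

estimate = sum(particles) / len(particles)
print(estimate)  # clusters near the true pose of 7.0
```

The key point this illustrates: the particles are not "fixed" — they are resampled after every scan, which is how the laser corrects the drift that pure odometry accumulates.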
Odometry uses the position sensors in the wheel encoders to determine displacement. Using the information obtained from odometry and applying transforms (the tf library with the underlying mathematical calculation), the x, y, z position of the TurtleBot 3 is updated in the map, as shown in RViz.
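For my own notes, the encoder-to-pose step can be sketched as standard differential-drive dead reckoning. This is only an assumption of the idea, not turtlebot3_core's actual source, and the wheel radius, wheel separation, and tick counts below are illustrative values, not the Burger's real calibration constants:

```python
import math

# Differential-drive odometry sketch: integrate wheel encoder ticks
# into an (x, y, theta) pose, i.e. the kind of calculation done before
# publishing the odom -> base_link transform. Constants are assumed.

WHEEL_RADIUS = 0.033        # m (assumed)
WHEEL_SEPARATION = 0.160    # m (assumed)
TICKS_PER_REV = 4096        # encoder resolution (assumed)

def update_odometry(pose, d_ticks_left, d_ticks_right):
    """Advance pose = (x, y, theta) by one pair of encoder deltas."""
    tick_to_dist = 2 * math.pi * WHEEL_RADIUS / TICKS_PER_REV
    d_left = d_ticks_left * tick_to_dist
    d_right = d_ticks_right * tick_to_dist
    d_center = (d_left + d_right) / 2.0              # forward travel
    d_theta = (d_right - d_left) / WHEEL_SEPARATION  # heading change
    x, y, theta = pose
    # Integrate along the mid-heading of the step (midpoint approximation).
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return (x, y, theta + d_theta)

pose = (0.0, 0.0, 0.0)
for _ in range(100):                  # equal ticks on both wheels -> straight
    pose = update_odometry(pose, 50, 50)
print([round(v, 3) for v in pose])    # -> [0.253, 0.0, 0.0]
```

The resulting pose is what tf broadcasts as the odom frame; AMCL then corrects it against the map, since dead reckoning alone drifts with wheel slip.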
I am trying to see if I am missing anything.