
Next Step After Hector-SLAM?

asked 2019-10-28 06:46:31 -0600

cyotmhnd

Hello Everyone,

I'm currently working on a project and things are getting complicated. I'm looking for the best approach, and honestly the most easily applicable result, given the hardware I have.

Now let's take it from the start. My project is simply a robot that we ask to go to a specific point on the map, and it goes there. The hardware I have at my disposal:

- Nvidia Jetson
- DC motors
- DC geared motors with encoders
- RPLIDAR A1
- Adafruit 9-DOF Absolute Orientation IMU Fusion Breakout (BNO055)
- H-Bridge L298N
- Arduino Due
- Arduino Uno

Full disclaimer: I have no idea how to use the IMU or the DC geared motors with encoders, but I have them.

Where I am right now:

Right now I have managed to do mapping with Hector SLAM using the RPLIDAR. The reason I'm using it is that it doesn't require odometry.

The problem:

I have no idea where to go from here. I've looked into the navigation stack but couldn't find a good tutorial that explains using it with Hector SLAM, and the ones that exist don't explain much about how to send commands to the motors, or how to control the motors at all. From what I understood, the navigation stack takes the map, continuously takes data from the laser scanner (RPLIDAR) as well as odometry data, and spits out x, y, and theta. So the idea I'm trying to apply is to skip sending the odometry data and depend only on the laser scanner data to generate the x, y, and theta, then take that data and send it to the Arduino, where I can convert it into something my driver can understand and control the DC motors. I've learned how to write Arduino listeners, which is why I'm using them; I'm not sure if there is a better way.
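(Note for later readers, not part of the original post: the navigation stack's actual output is a velocity command, geometry_msgs/Twist on cmd_vel, rather than a pose. A minimal sketch of the "send it to the Arduino" step under that assumption; the serial port and the "v w" line protocol are placeholders, not anything confirmed in this thread.)

```python
#!/usr/bin/env python
# Sketch only: relay velocity commands from ROS to an Arduino over serial.
# The serial protocol ("v w\n") and the port name are assumptions --
# adapt them to whatever your Arduino code expects.
import rospy
import serial
from geometry_msgs.msg import Twist

class CmdVelRelay(object):
    def __init__(self):
        port = rospy.get_param("~port", "/dev/ttyACM0")  # assumed default
        baud = rospy.get_param("~baud", 115200)
        self.ser = serial.Serial(port, baud, timeout=0.1)
        rospy.Subscriber("cmd_vel", Twist, self.callback)

    def callback(self, msg):
        # Forward linear and angular velocity; the Arduino side converts
        # these to left/right wheel PWM for the L298N.
        line = "%.3f %.3f\n" % (msg.linear.x, msg.angular.z)
        self.ser.write(line.encode("ascii"))

if __name__ == "__main__":
    rospy.init_node("cmd_vel_relay")
    CmdVelRelay()
    rospy.spin()
```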

My questions:

What do you think of this approach? Can it work, and how can I make it work?

Am I missing anything?

What's a better approach with the hardware at my disposal, and how can I achieve it?

What's a better approach in general, even if it requires getting more hardware?

I'm a complete newbie in ROS and really lost, because from what I have seen many guides focus heavily on software and don't really go deep into the hardware interface, as most of them just use pre-built robots (Kobuki and the like).

I welcome any suggestions and answers as I really need to finish this project.

Thank you All


1 Answer


answered 2019-10-28 11:47:45 -0600

The navigation stack assumes you have a working, compliant robot. It will not produce odometry or positioning for you. It expects a robot base with a valid map->odom->base_link transformation tree, which you must provide, and a topic on which it can publish a command velocity for the wheels to achieve. That's all part of the "you must provide this" before we can talk about navigation.
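(As a concrete check of that requirement, not from the original answer: if the tree is wired up correctly, a lookup of map->base_link should succeed. A small sketch, assuming the default frame names.)

```python
#!/usr/bin/env python
# Sanity-check sketch (assumes default frame names "map" and "base_link"):
# if this lookup fails, the map->odom->base_link chain the navigation
# stack needs is not being published yet.
import rospy
import tf

if __name__ == "__main__":
    rospy.init_node("tf_sanity_check")
    listener = tf.TransformListener()
    listener.waitForTransform("map", "base_link", rospy.Time(0), rospy.Duration(10.0))
    trans, rot = listener.lookupTransform("map", "base_link", rospy.Time(0))
    rospy.loginfo("map -> base_link translation: %s, rotation (quaternion): %s", trans, rot)
```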

It sounds like you need to learn some of these mechanical basics before you're ready to start with navigation. You need to use your wheel encoders and IMU to provide odometry for the robot base, usually over an /odom topic and TF with an odom->base_link transformation. Then Hector SLAM or a localizer will provide the map->odom transform. Using robot_localization will help fuse your IMU and encoder information into a better state estimate (along with the lidar odometry from Hector if you choose, or in place of the encoders).
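(A minimal sketch of the encoder side of that, not from the original answer: it assumes left/right wheel velocities already arrive on hypothetical /wheel_vel_left and /wheel_vel_right topics, and the wheel separation is a placeholder you would measure on the robot.)

```python
#!/usr/bin/env python
# Sketch of a differential-drive odometry node: dead-reckons a pose from
# wheel velocities, publishes nav_msgs/Odometry on /odom and broadcasts
# the odom->base_link transform. Topic names and WHEEL_SEPARATION are
# placeholders for this example.
import math
import rospy
import tf
from std_msgs.msg import Float32
from nav_msgs.msg import Odometry

WHEEL_SEPARATION = 0.30  # meters, placeholder -- measure your robot

class DiffDriveOdom(object):
    def __init__(self):
        self.x = self.y = self.th = 0.0
        self.v_left = self.v_right = 0.0
        self.last_time = rospy.Time.now()
        self.odom_pub = rospy.Publisher("odom", Odometry, queue_size=10)
        self.tf_broadcaster = tf.TransformBroadcaster()
        rospy.Subscriber("wheel_vel_left", Float32, self.left_cb)
        rospy.Subscriber("wheel_vel_right", Float32, self.right_cb)

    def left_cb(self, msg):
        self.v_left = msg.data

    def right_cb(self, msg):
        self.v_right = msg.data

    def update(self):
        now = rospy.Time.now()
        dt = (now - self.last_time).to_sec()
        self.last_time = now

        # Differential-drive kinematics: body velocity from wheel velocities.
        v = 0.5 * (self.v_right + self.v_left)
        w = (self.v_right - self.v_left) / WHEEL_SEPARATION

        # Dead-reckon the pose.
        self.x += v * math.cos(self.th) * dt
        self.y += v * math.sin(self.th) * dt
        self.th += w * dt

        quat = tf.transformations.quaternion_from_euler(0.0, 0.0, self.th)

        # Broadcast odom -> base_link.
        self.tf_broadcaster.sendTransform(
            (self.x, self.y, 0.0), quat, now, "base_link", "odom")

        # Publish nav_msgs/Odometry on /odom.
        odom = Odometry()
        odom.header.stamp = now
        odom.header.frame_id = "odom"
        odom.child_frame_id = "base_link"
        odom.pose.pose.position.x = self.x
        odom.pose.pose.position.y = self.y
        odom.pose.pose.orientation.x = quat[0]
        odom.pose.pose.orientation.y = quat[1]
        odom.pose.pose.orientation.z = quat[2]
        odom.pose.pose.orientation.w = quat[3]
        odom.twist.twist.linear.x = v
        odom.twist.twist.angular.z = w
        self.odom_pub.publish(odom)

if __name__ == "__main__":
    rospy.init_node("diff_drive_odometry")
    node = DiffDriveOdom()
    rate = rospy.Rate(20)
    while not rospy.is_shutdown():
        node.update()
        rate.sleep()
```

robot_localization can then take this /odom topic, plus an /imu topic, and publish the fused estimate as described in the answer above.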


Comments

Okay, so for mapping, as I've mentioned, it was easy to do because I found a step-by-step tutorial explaining how to use it without odometry. I also found some posts about using the navigation stack without odom, but none of them really have good documentation, which is why I'm stuck right now. From what you're saying, I'm guessing this method isn't really recommended either way, so I should use the IMU and encoders that I have. I've looked around and broken it down into three parts: 1st, get the encoder data and publish it on /odom; 2nd, get the IMU data and publish it on /odom; 3rd, fuse them together into one super-accurate /odom. And after that I can use the navigation stack?

cyotmhnd  ( 2019-10-29 06:53:48 -0600 )

You don't need any particular sensor to produce odometry, but if you have the encoders and IMU, you should use them; it's not sensible to leave "extra information on the table". Odometry is just anything that gives you information about relative motion in the local time horizon. You just need the odom->base transformation; you don't even need the topic if you don't want it.

On your note, that's a reasonable plan, but publish the messages on different topics: /odom, /imu, /fused_odom or something (a sketch of such an IMU publisher follows below).

stevemacenski  ( 2019-11-02 20:06:48 -0600 )
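(A minimal sketch of that /imu publisher, not from the original thread; the actual BNO055 read is left as a hypothetical read_bno055() helper, since the driver API depends on how the sensor is wired to the Jetson.)

```python
#!/usr/bin/env python
# Sketch of publishing BNO055 data as sensor_msgs/Imu on /imu.
# read_bno055() is a hypothetical placeholder -- replace it with
# whatever driver you use to talk to the sensor.
import rospy
from sensor_msgs.msg import Imu

def read_bno055():
    """Hypothetical helper: returns (orientation quaternion xyzw,
    angular velocity in rad/s, linear acceleration in m/s^2)."""
    return (0.0, 0.0, 0.0, 1.0), (0.0, 0.0, 0.0), (0.0, 0.0, 9.81)

if __name__ == "__main__":
    rospy.init_node("bno055_imu_publisher")
    pub = rospy.Publisher("imu", Imu, queue_size=10)
    rate = rospy.Rate(50)
    while not rospy.is_shutdown():
        quat, gyro, accel = read_bno055()
        msg = Imu()
        msg.header.stamp = rospy.Time.now()
        msg.header.frame_id = "imu_link"
        msg.orientation.x, msg.orientation.y, msg.orientation.z, msg.orientation.w = quat
        msg.angular_velocity.x, msg.angular_velocity.y, msg.angular_velocity.z = gyro
        msg.linear_acceleration.x, msg.linear_acceleration.y, msg.linear_acceleration.z = accel
        pub.publish(msg)
        rate.sleep()
```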

@cyotmhnd I am currently struggling with this too.. Were you able to achieve navigation? If yes, how did you go about it?

femitof  ( 2020-04-05 22:10:32 -0600 )
