The standard approach to solving this problem is:

  • Use the ros_control package to interface with your motor driver code; it's what translates your requested joint positions/velocities/efforts into actual motor movement (a minimal hardware interface sketch is included after this list).
  • To perform sensor integration, you'll need to create a URDF of your robot. There are plenty of tutorials available for that, so it's too lengthy to go into here. ROS will use the URDF when performing various sensor/joint calculations for you.
  • Once you can move the platform appropriately, you'll want to integrate your sensors to provide odometry data. The most common approach is to fuse wheel odometry and IMU data with a Kalman filter; there are several packages for that (robot_localization being the usual choice). A sketch of publishing the raw wheel odometry is included after this list.
  • With the ability to move the robot and detect where you're moving, you'll need to integrate your obstacle detection method. You've mentioned having the D415; that's a depth camera, so in your case you'll probably want something like depthimage_to_laserscan to produce a sensor_msgs/LaserScan on the /scan topic for the rest of the stack.
  • You should now have a robot that you can drive around, that has a basic understanding of where it is, and that can perceive obstacles. You're now ready to integrate SLAM (the actual mapping/autonomy part). The next step is the move_base package, which ties all of your pieces together into a path planner (a sketch of sending it a goal follows this list).
  • Next, you'll need to be able to generate a map. Your best bet here is gmapping; there are alternatives, but it's the most frequently used, so it has the most helpful documentation for you to make use of.
  • Finally, you'll need to be able to localize against the map you've built. AMCL is probably your best bet here, for the same reason as gmapping.
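
For the ros_control step, here's a rough sketch of what a minimal hardware interface for a two-wheel base might look like. The joint names, loop rate, and the motor driver hooks are assumptions you'd replace with your own; treat it as a starting point, not a drop-in implementation.

    // Minimal ros_control hardware interface sketch for a two-wheel base.
    // Joint names and motor-driver calls are placeholders -- adapt to your robot.
    #include <ros/ros.h>
    #include <hardware_interface/joint_command_interface.h>
    #include <hardware_interface/joint_state_interface.h>
    #include <hardware_interface/robot_hw.h>
    #include <controller_manager/controller_manager.h>

    class MyRobotHW : public hardware_interface::RobotHW
    {
    public:
      MyRobotHW()
      {
        // Expose the state (position/velocity/effort) of each wheel joint.
        state_interface_.registerHandle(hardware_interface::JointStateHandle(
            "left_wheel_joint", &pos_[0], &vel_[0], &eff_[0]));
        state_interface_.registerHandle(hardware_interface::JointStateHandle(
            "right_wheel_joint", &pos_[1], &vel_[1], &eff_[1]));
        registerInterface(&state_interface_);

        // Expose velocity command handles so a controller can drive the wheels.
        vel_interface_.registerHandle(hardware_interface::JointHandle(
            state_interface_.getHandle("left_wheel_joint"), &cmd_[0]));
        vel_interface_.registerHandle(hardware_interface::JointHandle(
            state_interface_.getHandle("right_wheel_joint"), &cmd_[1]));
        registerInterface(&vel_interface_);
      }

      void read()   { /* read encoder feedback from your driver into pos_[]/vel_[] */ }
      void write()  { /* send cmd_[0]/cmd_[1] (rad/s) to your motor driver */ }

    private:
      hardware_interface::JointStateInterface    state_interface_;
      hardware_interface::VelocityJointInterface vel_interface_;
      double cmd_[2] = {0, 0}, pos_[2] = {0, 0}, vel_[2] = {0, 0}, eff_[2] = {0, 0};
    };

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "my_robot_hw");
      ros::NodeHandle nh;
      MyRobotHW hw;
      controller_manager::ControllerManager cm(&hw, nh);

      ros::AsyncSpinner spinner(1);  // service controller_manager callbacks
      spinner.start();

      ros::Rate rate(50);
      ros::Time last = ros::Time::now();
      while (ros::ok())
      {
        ros::Time now = ros::Time::now();
        hw.read();
        cm.update(now, now - last);
        hw.write();
        last = now;
        rate.sleep();
      }
      return 0;
    }

With this running, you can load diff_drive_controller through the controller_manager to turn /cmd_vel into wheel velocity commands.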
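
For the odometry step, the filter packages expect something to already be publishing raw wheel odometry. A rough sketch of that node is below; the encoder readout is left as a placeholder (vx/vth are zero here), and note that if you use diff_drive_controller it already publishes this for you, so you wouldn't need both.

    // Sketch of publishing wheel odometry on /odom plus the odom -> base_link TF.
    // A filter package (e.g. robot_localization) can then fuse this with IMU data.
    #include <cmath>
    #include <ros/ros.h>
    #include <nav_msgs/Odometry.h>
    #include <tf/transform_broadcaster.h>

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "wheel_odometry");
      ros::NodeHandle nh;
      ros::Publisher odom_pub = nh.advertise<nav_msgs::Odometry>("odom", 50);
      tf::TransformBroadcaster odom_broadcaster;

      double x = 0.0, y = 0.0, th = 0.0;  // integrated pose in the odom frame
      ros::Time last_time = ros::Time::now();
      ros::Rate rate(20);

      while (ros::ok())
      {
        ros::Time now = ros::Time::now();
        double dt = (now - last_time).toSec();

        // Placeholder: compute vx (m/s) and vth (rad/s) from your encoder ticks.
        double vx = 0.0, vth = 0.0;

        // Dead-reckon the pose.
        x  += vx * cos(th) * dt;
        y  += vx * sin(th) * dt;
        th += vth * dt;

        geometry_msgs::Quaternion q = tf::createQuaternionMsgFromYaw(th);

        // Broadcast odom -> base_link so the TF tree stays connected.
        geometry_msgs::TransformStamped odom_tf;
        odom_tf.header.stamp = now;
        odom_tf.header.frame_id = "odom";
        odom_tf.child_frame_id = "base_link";
        odom_tf.transform.translation.x = x;
        odom_tf.transform.translation.y = y;
        odom_tf.transform.rotation = q;
        odom_broadcaster.sendTransform(odom_tf);

        // Publish the same estimate as a nav_msgs/Odometry message.
        nav_msgs::Odometry odom;
        odom.header.stamp = now;
        odom.header.frame_id = "odom";
        odom.child_frame_id = "base_link";
        odom.pose.pose.position.x = x;
        odom.pose.pose.position.y = y;
        odom.pose.pose.orientation = q;
        odom.twist.twist.linear.x = vx;
        odom.twist.twist.angular.z = vth;
        odom_pub.publish(odom);

        last_time = now;
        rate.sleep();
      }
      return 0;
    }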
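
Once move_base is up with your map and localization, you can exercise the whole pipeline by sending it a goal through its action interface. This sketch assumes the default "move_base" action name and uses an arbitrary goal pose (1 m along x in the map frame) just for illustration.

    // Send a single navigation goal to move_base via actionlib.
    #include <ros/ros.h>
    #include <move_base_msgs/MoveBaseAction.h>
    #include <actionlib/client/simple_action_client.h>

    typedef actionlib::SimpleActionClient<move_base_msgs::MoveBaseAction> MoveBaseClient;

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "send_nav_goal");

      // Connect to the move_base action server.
      MoveBaseClient ac("move_base", true);
      while (!ac.waitForServer(ros::Duration(5.0)))
        ROS_INFO("Waiting for the move_base action server...");

      move_base_msgs::MoveBaseGoal goal;
      goal.target_pose.header.frame_id = "map";
      goal.target_pose.header.stamp = ros::Time::now();
      goal.target_pose.pose.position.x = 1.0;     // 1 m along the map x axis
      goal.target_pose.pose.orientation.w = 1.0;  // facing along x

      ROS_INFO("Sending goal");
      ac.sendGoal(goal);
      ac.waitForResult();

      if (ac.getState() == actionlib::SimpleClientGoalState::SUCCEEDED)
        ROS_INFO("The base reached the goal");
      else
        ROS_INFO("The base failed to reach the goal");
      return 0;
    }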

That's the full formula for getting to complete SLAM autonomy in ROS. Best of luck!