Hi Vinh,

People have definitely done this. The "best" approach will depend on many factors, including (perhaps most importantly) the quality of the wireless links between the robots, the amount of data that needs to be sent around, and so on.

If the robots will always have really good WiFi (i.e., they are always very close to an access point), you could try running everything "out of the box" and simply pushing each robot into its own namespace on a single master, which runs either on one of the robots or on a separate machine (say, a machine that connects via wired Ethernet to the wired network behind your wireless access point(s)).

However, if the robots will sometimes have intermittent network connectivity, things get much more complicated. ROS1 "out of the box" does not handle this situation gracefully; the TCP buffers will back up, and then things get ugly. You can mitigate some of this by running all your ROS1 connections over a VPN, which (surprisingly) can behave much better than "normal" TCP.

Alternatively, you could use an additional middleware, such as OpenSplice DDS, to handle the inter-robot links, and run a separate ROS master on each robot. (Note that ROS2 is being built on DDS for exactly this reason, among others, but it's not yet ready for day-to-day use.)

More specifically for the multiple-robots-on-wireless-LAN approach (the simplest):

Let's assume there is a computer that seems like a good candidate to run the ROS master (say, a workstation that will also run RViz or other "system-wide" nodes). Let's pretend that computer is called overlord.local on the network and can be reached from all of the robots. When debugging this type of thing, the first step is to make sure the LAN and mDNS are properly configured, so that running ping overlord.local on your robot laptops actually works.
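For example, a quick sanity check from one of the robot laptops might look like this (the hostname is just the placeholder used above):

# confirm the master host resolves (via mDNS) and responds
getent hosts overlord.local
ping -c 3 overlord.local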

Let's assume further that overlord.local can ssh into all of the laptops on the Turtlebots. You can make this password-less by using ssh-keygen to create a public/private keypair on overlord.local, and then ssh-copy-id to copy the public key to all of your robot laptops.
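As a sketch (the username and hostnames here are just placeholders for whatever your robot laptops actually use):

# on overlord.local: generate a keypair, accepting the default file locations
ssh-keygen
# copy the public key to each robot laptop
ssh-copy-id turtlebot@turtle1.local
ssh-copy-id turtlebot@turtle2.local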

To run some robots, you'll start roscore on the overlord.local machine, and then shell into the robots, say turtle1.local and turtle2.local. The key thing to do once you've shelled into a robot is to point its ROS_MASTER_URI environment variable back to overlord.local. More specifically, from a shell (or in the robot's .bashrc):

export ROS_MASTER_URI=http://overlord.local:11311

Then, you'll need to have each turtlebot launch inside its own namespace; otherwise, the node names on turtle1 will be the same as on turtle2, and the ROS master will tell the previously-launched node to shut down when the new one starts. There are a few ways to set the namespace, but one fairly easy one is to set ROS_NAMESPACE to something unique, like:

export ROS_NAMESPACE=$(hostname)
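Putting it together, each robot laptop's .bashrc could carry a small block like the following. (Adding ROS_HOSTNAME is my assumption here: it makes the robot advertise itself by a name the other machines can resolve over mDNS, which tends to avoid connection trouble in multi-machine setups.)

# point this robot at the shared master
export ROS_MASTER_URI=http://overlord.local:11311
# advertise this machine by a name the other machines can resolve (mDNS)
export ROS_HOSTNAME=$(hostname).local
# give this robot's nodes their own namespace, e.g. /turtle1
export ROS_NAMESPACE=$(hostname)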

Then, you can roslaunch the turtlebots as you've been doing, and they will appear in their own unique namespaces under the ROS master running on overlord.local.
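For example, assuming a stock TurtleBot bringup (substitute whatever launch file you normally use):

# on each robot laptop, with the environment above in place
roslaunch turtlebot_bringup minimal.launch

# back on overlord.local, each robot's topics should now appear under its
# own namespace, e.g. /turtle1/... and /turtle2/...
rostopic list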

Cheers,

-mq