
How do ROS2 developers work with instrumentation-heavy projects?

asked 2022-07-19 20:09:26 -0600 by Anonymous

How do experienced developers handle projects where a single robot is actuator-minimal but instrumentation-heavy, using ROS2? I have begun building some "precursors" for a robotic submarine I have been working on for some time. In the process, I have discovered just how egregiously I underestimated the work required to simply get the data in and out of the computer. For now, I am trying to use an Arduino MKR 1010 WiFi I had lying around, and a Raspberry Pi 4 8GB model, with ROS2 Humble on Ubuntu 22.04.

My precursor is a float of sorts, meant to help me get the hang of data logging and remote monitoring. The hardware is just a USB camera, a USB GPS, and an I2C-based IMU. But I had never worked with ROS before, so, as you can probably guess before reading any further, I ran into some serious problems.

First, I discovered that neither my Fedora 36 personal workstation nor my Pi OS-based Pi would install ROS Humble by any means I attempted. So I installed Ubuntu on the Pi and accepted the sluggish user interface (Ubuntu 22.04 apparently lacks hardware acceleration for the board). I got the camera publishing frames and moved on to the IMU. The plan was to use rosserial to receive IMU data from the Arduino. Experienced developers will realize why that failed: rosserial is not compatible with ROS2.

As you can imagine, I have begun reconsidering my approach to the internal electronics. PixHawk for I/O, with MAVROS? Pi Pico, running micro-ROS? Controller Area Network (CAN) hats for the Pi and Arduino? Connecting the IMU via I2C and writing a custom node?
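For what it's worth, the last option is the one I can picture most concretely. Here is an untested sketch of what I mean by a "custom node"; the I2C address, register, and scale factor are placeholders rather than any real IMU's values:

```python
# Untested sketch of a custom I2C IMU driver node. The address, register,
# and scale factor below are placeholders, not any specific IMU's values.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Imu
from smbus2 import SMBus  # assumes the smbus2 package is installed

IMU_ADDR = 0x68    # hypothetical I2C address
ACCEL_REG = 0x3B   # hypothetical start of a 6-byte accelerometer block

class ImuDriver(Node):
    def __init__(self):
        super().__init__('imu_driver')
        self.pub = self.create_publisher(Imu, 'imu/data_raw', 10)
        self.bus = SMBus(1)  # /dev/i2c-1 on a Raspberry Pi
        self.timer = self.create_timer(0.01, self.poll)  # 100 Hz

    def poll(self):
        raw = self.bus.read_i2c_block_data(IMU_ADDR, ACCEL_REG, 6)
        msg = Imu()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.header.frame_id = 'imu_link'
        msg.linear_acceleration.x = self._to_ms2(raw[0], raw[1])
        msg.linear_acceleration.y = self._to_ms2(raw[2], raw[3])
        msg.linear_acceleration.z = self._to_ms2(raw[4], raw[5])
        self.pub.publish(msg)

    @staticmethod
    def _to_ms2(hi, lo):
        # Combine high/low bytes into a signed 16-bit value, then apply a
        # placeholder full-scale conversion to m/s^2.
        val = (hi << 8) | lo
        if val >= 0x8000:
            val -= 0x10000
        return val * 9.81 / 16384.0

def main():
    rclpy.init()
    node = ImuDriver()
    rclpy.spin(node)
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```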

My concern is scalability. See, a big part of this project is that I want to expand my skills in the area, but I have other, bigger plans for this AUV. So I want to do this right. That's why I'm not going for a monolithic Python program that just reads this sensor over the Pi's I2C pins: I would probably need to rebuild from the ground up eventually. Down the line, the vehicle could become very instrumentation-heavy. The most conservative estimate for that involves: 4x cameras, 6x thrusters, 16x analog sensors, 3x microphones, 3x UART-based sonar, 4x IMUs, and GPS. From what I can gather from most ROS-based bots and drones, and the tutorials around them, this is a fairly radical amount of I/O.

My original thought was eventually to run the Jetson (or 4x Pi boards) with a series of USB-connected Arduino boards handling the I/O. I had also considered a CAN bus, borrowing peripherals normally used for drones. Another thought was just a string of USB-to-whatever adapters on a USB 3 hub. I even considered using PixHawk flight controllers and adding drivers to receive the I/O and send/receive over MAVLink/MAVROS. But, as mentioned, I cannot seem to find anyone working on anything nearly this I/O-dense.

How do experienced developers do this?


Comments

For what it's worth, I've got a robot that I was planning to use a PX4-based flight controller board on, but getting that working well with my Jetson Nano has been quite a pain. On another project, I'm using a Teensy 4.1 with micro-ROS to drive 5 steppers, 1 servo, and 6 encoders, and that's been working really well. (The hardest part there is the lack of an on-chip debugger (OCD) on the Teensy.)

The nice thing about micro-ROS is that you're using rclc on the micro-controller, and you just need to run the agent node on the host computer (laptop, Pi, etc.). No need to deal with MAVLink, custom de/serialization, etc. It's all just normal ROS2 messages.
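For example, once the agent is up (something like `ros2 run micro_ros_agent micro_ros_agent serial --dev /dev/ttyACM0`, with the device path depending on your wiring), the host side is just a plain rclpy subscriber. Untested sketch, with a made-up topic name:

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Imu

class ImuListener(Node):
    def __init__(self):
        super().__init__('imu_listener')
        # Topic name is hypothetical; use whatever the firmware publishes.
        self.sub = self.create_subscription(
            Imu, 'micro_ros_imu', self.on_imu, 10)

    def on_imu(self, msg):
        # Messages bridged by the agent arrive as ordinary ROS2 messages.
        self.get_logger().info(f'accel z: {msg.linear_acceleration.z:.2f}')

def main():
    rclpy.init()
    node = ImuListener()
    rclpy.spin(node)
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```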

The nice thing about the Teensy 4.1, though, is that it's got a 600 MHz clock. The Pi Pico is only 133 MHz and the Arduino Uno is 16 MHz …

ChuiV (2022-08-09 19:52:02 -0600)

1 Answer


answered 2022-07-22 17:51:46 -0600 by Calder

In my experience, CAN (particularly when used via socketcan on Linux) is the simplest and most reliable way to connect a number of sensors and actuators to a single computer. You likely only need a single CAN bus to handle that number of low-bandwidth sensors (low-frequency analog I/O, sonar rangefinders, even IMUs) and actuators. The value of socketcan is that even with a single CAN bus, you can still use multiple ROS driver nodes - you don't have to lump all of your drivers together like you would with a shared serial interface, for example.
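To illustrate the multiple-driver-node point, here is an untested sketch of one such node using Python's built-in socketcan support (Linux only). The CAN ID and payload layout are invented for the example; several nodes structured like this can bind to can0 simultaneously, each ignoring frames it doesn't own:

```python
# Untested sketch of a socketcan-based ROS2 driver node. The CAN ID and
# payload encoding are made up; adapt them to your own bus layout.
import socket
import struct
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Range

CAN_FRAME_FMT = "<IB3x8s"   # classic CAN frame: id, dlc, padding, data
CAN_FRAME_SIZE = struct.calcsize(CAN_FRAME_FMT)
SONAR_CAN_ID = 0x101        # hypothetical ID assigned to one sonar

class SonarCanDriver(Node):
    def __init__(self):
        super().__init__('sonar_can_driver')
        self.pub = self.create_publisher(Range, 'sonar/range', 10)
        self.sock = socket.socket(socket.AF_CAN, socket.SOCK_RAW,
                                  socket.CAN_RAW)
        self.sock.bind(('can0',))
        self.sock.setblocking(False)
        self.timer = self.create_timer(0.02, self.drain)

    def drain(self):
        # Read every frame currently queued on the socket.
        while True:
            try:
                frame = self.sock.recv(CAN_FRAME_SIZE)
            except BlockingIOError:
                return
            can_id, _dlc, data = struct.unpack(CAN_FRAME_FMT, frame)
            if can_id != SONAR_CAN_ID:
                continue  # another node's sensor; skip it
            msg = Range()
            msg.header.stamp = self.get_clock().now().to_msg()
            msg.header.frame_id = 'sonar_link'
            # Hypothetical encoding: first two bytes = range in millimetres.
            msg.range = struct.unpack_from('<H', data)[0] / 1000.0
            self.pub.publish(msg)

def main():
    rclpy.init()
    node = SonarCanDriver()
    rclpy.spin(node)
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```

An actuator driver is the mirror image: pack a frame with struct and send() it on the same kind of socket.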

Almost all GPS receivers are either serial or serial-over-USB; which you want depends partly on whether you need PPS timing input as well (many USB models don't provide PPS).
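An existing package (something like nmea_navsat_driver) may already cover this, but a minimal custom driver is not much code either. Untested sketch; the port name and baud rate are guesses, and only GGA sentences are parsed:

```python
# Untested sketch: read NMEA from a serial GPS and publish NavSatFix.
import serial  # pyserial
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import NavSatFix

def nmea_deg(field, hemi):
    # NMEA encodes latitude/longitude as ddmm.mmmm / dddmm.mmmm.
    dot = field.index('.')
    deg = float(field[:dot - 2])
    minutes = float(field[dot - 2:])
    val = deg + minutes / 60.0
    return -val if hemi in ('S', 'W') else val

class GpsDriver(Node):
    def __init__(self):
        super().__init__('gps_driver')
        self.pub = self.create_publisher(NavSatFix, 'gps/fix', 10)
        # Port and baud rate are guesses; adjust for your receiver.
        self.port = serial.Serial('/dev/ttyACM0', 9600, timeout=0.2)
        self.timer = self.create_timer(0.1, self.poll)

    def poll(self):
        line = self.port.readline().decode('ascii', errors='replace').strip()
        if not line.startswith('$') or 'GGA' not in line[:6]:
            return
        f = line.split(',')
        if len(f) < 7 or not f[2] or not f[4]:
            return  # no fix yet
        msg = NavSatFix()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.header.frame_id = 'gps_link'
        msg.latitude = nmea_deg(f[2], f[3])
        msg.longitude = nmea_deg(f[4], f[5])
        self.pub.publish(msg)

def main():
    rclpy.init()
    node = GpsDriver()
    rclpy.spin(node)
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```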

Microphones are likely cheapest and easiest to handle via USB if you want them to be treated as standard audio input devices. Your only constraint here is that you have enough USB bandwidth and power available to run them.

Cameras are the trickiest sensors to handle reliably, given their bandwidth demands. In my experience, systems with multiple ethernet cameras have been significantly more reliable than systems with multiple USB cameras. On embedded platforms you may be able to use MIPI CSI or GMSL camera connections supported by the hardware instead. If you are going to use a Jetson platform, the onboard MIPI CSI interfaces may be sufficient for your camera needs.

