Conceptual question on ROS + Docker
Hi all, I've been reading the different resources about Docker + ROS that I found on the web, plus the video of Ruffin White's talk at ROSCon 2015... So I see great potential in Docker with ROS for use cases such as testing and continuous integration, or deployment on robots for a robot manufacturer, etc... basically whenever what I need is to run a clean version, without adding any modification to the image/container.
But I think I'm missing a concept, or at least I'd like to discuss the usefulness of Docker for another use case...
We are working on an internal "product" that is a big collection of ROS nodes. We rely on ROS and ROS-Industrial packages, plus some third-party drivers for hardware, specific libraries, etc. We have our own continuous integration Buildbot server that generates a Debian package from the catkin install output (on different slaves to account for the Linux/ROS distribution mix).
When we need to set up a new machine, we install Ubuntu and ROS, then run our script to install all the required libraries and dependencies, and finally install our .deb. Then we start working, developing and customizing the final application for the robotic cell deployment.
So, building a Docker container to host all this looks like a great solution to provide a clean install of our product/solution, both internally and to our external partners/clients. Another option would be to provide a kind of Vagrant setup or Chef cookbook to automate the correct deployment of a new machine, the execution of the external libraries script and the installation of our product's .deb... (any opinion on the most suitable strategy?)
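To make it concrete, something like the following Dockerfile sketch is what I have in mind (the base image tag would depend on our ROS distribution, and the package, .deb and launch file names are just placeholders):

    # Sketch of a Dockerfile for the product image. Package, file and
    # launch names are hypothetical placeholders.
    FROM ros:indigo-ros-base

    # Third-party drivers and libraries the product depends on.
    RUN apt-get update && apt-get install -y \
        ros-indigo-industrial-core \
        libsome-vendor-driver

    # Install the .deb produced by the Buildbot CI.
    COPY our-product.deb /tmp/
    RUN dpkg -i /tmp/our-product.deb || apt-get install -f -y

    # The base image's entrypoint sources the ROS environment,
    # so the product can be launched directly.
    CMD ["roslaunch", "our_product", "bringup.launch"]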
However, and this is the point where I'm getting lost on the usefulness of Docker for my use case, our product is NOT meant to work isolated and alone... It's a collection of ROS nodes meant to allow easier programming and deployment of industrial robotic cells. So it can be seen as a library to be used by our developers and clients.
So if I were a user of this product, having a Docker container would be great to get started. But then I would develop my own robotic application/installation using this product, plus extra drivers from other ROS repos, my own nodes and application-specific GUIs, my own config files and programs, etc.
So, to my current understanding, I could "save" all this in my local version of the Docker container... but when the official product container is updated, I would then lose all my specific configuration and application, right?
Or should I create a new Docker container for my own developments, and ask my clients to master Docker in order to create their own applications?
As you can see, I fully get the benefits of a Docker container when the objective is to use something completely "closed", or to ship an application that is just meant to be run/executed.
But I don't get the "how-to" when I need to work on top of a third-party-provided container and maintain the integrity of my own work...
A comment (to avoid others not answering): Docker can be useful in your situation if you consider a container to Dockerise your ROS API. The container contains your packages and their dependencies, plus a way to launch them (a remote roslaunch?). Clients can then write ROS apps 'against' the container. This avoids the lengthy setup process of all your packages' dependencies (and theirs). It doesn't work that well for things that are not part of the ROS API level, such as libraries, as it's not so clean to export directories from your container to the host. Msg/svc definitions can (and should!) be put in separate packages, so clients can install those outside the container without needing all the other complex bits.
Some difficult aspects that might reduce the return on investment: access to hardware (possible, but sometimes involved), networking setup (Docker Compose) and versioning.
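A rough sketch of what I mean by 'a way to launch them', including the hardware and networking bits (the image name, launch file and device path are made-up placeholders):

    # Start the product container so that its ROS master and topics are
    # reachable from the host. Image and launch names are hypothetical.
    docker run -it --rm \
        --net=host \
        --device=/dev/ttyUSB0 \
        our-product \
        roslaunch our_product bringup.launch

    # --net=host   : share the host's network stack, so no ROS-specific
    #                port mapping is needed
    # --device=... : pass a hardware device (e.g. a serial port) through
    #                to the driver nodes inside the container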
But how would they write these ROS apps against my container? Would they need a full desktop install of ROS on their machine, with my container only containing libraries and my own nodes? Or would they create another container with their nodes? Thanks.
That's what I alluded to with "doesn't work that well for things not part of the ROS API": developers will need a ROS install, but won't need to install all your dependencies. Personally I don't like to develop inside a container, but it can be done.
Compare this to how you work with Baxter: it runs a ROS-compatible interface, but you don't have access to, nor get to install, the sources for the packages running on it. There are developer packages, but those use a ROS API to communicate with the on-board computer(s). I'm not saying it's ideal, just that it's an option.
I think the main thing to remember / realise is that Docker containers are mostly meant to containerise processes that expose services, i.e. things that run / communicate. I haven't really seen them used for distributing libraries and header files (other than to other overlaid containers).
The idea then is to avoid having to repeat a complicated setup and configuration process on each host where you'd like to run those services. The container already contains everything the service needs. The ROS API of your nodes is then the 'service' that you containerise, not the sources.
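To make the 'apps against the container' idea a bit more concrete: assuming the container was started with --net=host as above, and that your msg/svc packages are released separately (the package and topic names below are made up), a developer on the host would only need something like this:

    # On the developer's host: a plain ROS install, none of the
    # product's build or runtime dependencies.
    source /opt/ros/indigo/setup.bash

    # Point at the master running inside the container
    # (with --net=host it is simply localhost).
    export ROS_MASTER_URI=http://localhost:11311

    # Install only the msg/svc definition package (hypothetical name)
    # so custom message types can be used on the host side.
    sudo apt-get install ros-indigo-our-product-msgs

    # Interact with the containerised nodes purely over the ROS API.
    rostopic list
    rostopic echo /our_product/status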
Would it perhaps be an idea to post this question / discussion on the ROS Discourse site instead? That site is a bit better suited for this kind of discussion, for which there is no single 'best' answer. It could also increase the visibility of your discussion.
I didn't know about the ROS Discourse site! :-) In which section do you suggest I should post it?
I think general would work for now?