Well, I found some workarounds.
One way to ensure the Application IDs (ergo GUIDs) for the two DDS participants in separate containers using the host's network driver do not collide is to have the containers also share a common PID namespace, thus allocating unique PIDs to the processes of the DDS participants.
You can either set the PID (Process) Namespace mode for the container to `container:<name|id>` to join another container's PID namespace, or to `host` to use the host's PID namespace inside the container. The latter largely defeats namespace isolation, while the former can result in some unintended behaviors when restarting containers. Additionally, when using compose, race conditions may surface when docker-compose attempts to start a container with the PID namespace of another container that has itself not yet been started.
https://docs.docker.com/engine/reference/run/#pid-settings---pid
https://docs.docker.com/compose/compose-file/#pid
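The two PID namespace modes above can be sketched with plain `docker run` commands. This is a hypothetical example: the image name `ros2_demo` and container name `intermediary` are placeholders, not from the compose file below.

```shell
# Start an intermediary container whose PID/IPC namespaces others will join.
# --ipc shareable lets other containers attach with --ipc container:<name>.
docker run -d --name intermediary --ipc shareable ros2_demo sleep infinity

# Join that container's PID (and IPC) namespace; processes in both
# containers now draw from the same PID pool, so PIDs cannot collide.
docker run -d --network host \
  --pid container:intermediary --ipc container:intermediary \
  ros2_demo ros2 run demo_nodes_cpp talker

# Alternatively, --pid host uses the host's PID namespace directly,
# at the cost of PID namespace isolation from the host.
docker run -d --network host --pid host \
  ros2_demo ros2 run demo_nodes_cpp listener
```

Running the containers by hand like this sidesteps the compose startup-ordering race described below, since the intermediary is guaranteed to exist before the others join it.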
Here is an example where a third intermediary container is used to host the PID namespace (and a shareable IPC namespace for inter-process communication). However, compose will error with a message that the container for service `intermediary` is not available. Attempting to resolve this by declaring the bootstrapping dependency via the `depends_on` setting does not help, so one must manually create such a container first (say, by running `docker-compose up` with only the intermediary service uncommented, then repeating `up` with the rest uncommented).
version: '3'
services:
  intermediary:
    build: .
    ipc: shareable
    command: sleep infinity
  talker:
    build: .
    environment:
      - "ROS_DOMAIN_ID=0"
      - "RMW_IMPLEMENTATION=rmw_connext_cpp"
      # - "RMW_IMPLEMENTATION=rmw_fastrtps_cpp"
      # - "RMW_IMPLEMENTATION=rmw_opensplice_cpp"
    network_mode: "host"
    ipc: container:demo_intermediary_1
    pid: container:demo_intermediary_1
    command: ros2 run demo_nodes_cpp talker
    depends_on:
      - "intermediary"
  listener:
    build: .
    environment:
      - "ROS_DOMAIN_ID=0"
      - "RMW_IMPLEMENTATION=rmw_connext_cpp"
      # - "RMW_IMPLEMENTATION=rmw_fastrtps_cpp"
      # - "RMW_IMPLEMENTATION=rmw_opensplice_cpp"
    network_mode: "host"
    ipc: container:demo_intermediary_1
    pid: container:demo_intermediary_1
    command: ros2 run demo_nodes_cpp listener
    depends_on:
      - "intermediary"
This is a bit cumbersome when using docker-compose. As mentioned, setting `pid` to `host` avoids these race conditions, but loses some level of isolation that may be preferable. A better approach would be to configure the Application ID to be unique through QoS settings, though there doesn't seem to be a simple way to achieve this for ROS2 as of writing.