
Difference in the delay time between ROS and ROS2 for RT applications?

asked 2018-04-16 07:39:28 -0600 by aks, updated 2018-04-18 02:30:38 -0600

I know that ROS is made for Real Time (RT) applications, but why? This should obviously depend on the complexity of the application. Since ROS uses a MASTER as the name service, there has to be a certain delay in the signal exchange, but only the first time; I guess once the publisher and subscriber nodes are connected, the delay should be minimal. ROS2 doesn't have a MASTER, as it uses a distributed discovery mechanism, and thus the message exchange can start instantly.

Now my questions:

  1. Is this the only difference between the delay times in ROS and ROS2? What exactly is the time delay for a normal ROS application? Is it a few milliseconds, some seconds, or does it depend entirely on the types of signals exchanged? What is the real-time factor? Can the delay be monitored?
  2. Up to what frequencies can ROS handle signals? 1-100 Hz, or in the range of kHz or MHz?

EDIT 1: Let us take a simulation example of a vehicle. We have 3 nodes, namely Driver, Engine and Drivetrain. The Engine supplies torque to the Drivetrain and receives back the speed from the Drivetrain, and the control (i.e. how much torque is required and when to shift gears) is done by the Driver. Thus it is a closed loop where all the nodes need to exchange messages instantly.

If there is any delay in the message exchange (beyond a tolerable limit), the behaviour of the car would change completely.

For this application, can I use a ROS system:

  1. Can I build all three nodes in ROS? Be it on the same or different PCs.
  2. Let's say I build these 3 functions (instead of nodes) in different simulation tools (e.g. Simulink). Could I then use ROS as a message-exchange service between them? Or is that possible only in ROS2 (through DDS)?
  3. If yes, then what would be the CPU load, delay times, etc.?
  4. Which timestamps would be used by ROS? How are the timestamps handled when different PCs are used? Is it done by the Master?

EDIT 2: For my application, I need to exchange messages every 1 ms, i.e. at a frequency of 1 kHz.

I hope it is a bit clearer now.
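To make the closed loop concrete, here is a rough sketch of what the Engine node could look like in roscpp. The topic names, message types and the trivial torque model are made up for illustration only:

    // Sketch of an "engine" node for the driver/engine/drivetrain loop described above.
    // std_msgs/Float64 and the topic names are placeholders for the real interfaces.
    #include <ros/ros.h>
    #include <std_msgs/Float64.h>

    double requested_torque = 0.0;   // latest command from the driver node
    double drivetrain_speed = 0.0;   // latest feedback from the drivetrain node

    void torqueRequestCb(const std_msgs::Float64::ConstPtr& msg) { requested_torque = msg->data; }
    void speedCb(const std_msgs::Float64::ConstPtr& msg)         { drivetrain_speed = msg->data; }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "engine");
      ros::NodeHandle nh;

      ros::Subscriber torque_sub = nh.subscribe("driver/torque_request", 1, torqueRequestCb);
      ros::Subscriber speed_sub  = nh.subscribe("drivetrain/speed", 1, speedCb);
      ros::Publisher  torque_pub = nh.advertise<std_msgs::Float64>("engine/torque", 1);

      ros::Rate rate(1000.0);  // aim for 1 kHz; no hard guarantee on a non-real-time OS
      while (ros::ok())
      {
        ros::spinOnce();                 // process incoming speed and torque-request messages

        std_msgs::Float64 torque;
        torque.data = requested_torque;  // placeholder for a real engine model
        torque_pub.publish(torque);

        rate.sleep();                    // best-effort sleep for the rest of the 1 ms period
      }
      return 0;
    }

The Driver and Drivetrain nodes would follow the same pattern, each subscribing to the others' outputs and publishing its own every cycle.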


Comments

I know that ROS is made for Real Time (RT) applications [..]

You might want to be careful with this sort of assertion, both with what you mean by 'real-time' and with stating something like that.

real-time != (on-line || fast enough).

ROS 1 was also not designed to be hard RT at all.

gvdhoorn (2018-04-16 09:13:02 -0600)

and thus the message exchange can start instantly

How? Do nodes somehow not have to find each other and set up communication channels in ROS 2?

I'm rather confused about your statements here tbh.

gvdhoorn (2018-04-16 09:14:01 -0600)

OK... I meant that the delay in ROS2 would be smaller than in ROS1 because of the distributed discovery mechanism. But what is, approximately, the delay time for signals exchanged in ROS?

aks (2018-04-16 09:23:10 -0600)

There is no answer to this I believe. On what networks? Using what computer platforms? How much memory? Under what load? How many nodes? How many connections / which connection topology? Etc.

Could I ask you to update your question with some more careful wording?

gvdhoorn (2018-04-16 09:24:43 -0600)

Up to what frequencies can ROS handle signals? 1-100 Hz, or in the range of kHz or MHz?

This is too broad a question. It depends on what type of data you're passing (a point cloud or an integer?) and how capable your hardware is. ...

mohsen (2018-04-16 14:37:19 -0600)

Also, in applications requiring high update rates (more than 1 kHz), there is often a need for very low jitter. (Ideally, for a 1 kHz update rate, you want to send a message every 1 ms.) Since ROS runs on a fully fledged, non-real-time OS, the jitter can be quite high.

mohsen (2018-04-16 14:47:44 -0600)

And I believe that the best way for assessing real-time performance of your system would be to first implement it and then measure.

mohsen (2018-04-16 15:13:01 -0600)
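One simple way to do that measurement is to run a loop at the target rate and log how far each cycle drifts from the nominal period. A minimal sketch in roscpp (node name, rate and logging are arbitrary choices, not anything prescribed by ROS):

    // Measures the jitter of a 1 kHz ros::Rate loop on the current machine.
    #include <ros/ros.h>
    #include <algorithm>
    #include <cmath>

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "jitter_probe");
      ros::NodeHandle nh;  // starts the node so ros::ok() behaves as expected

      const double nominal_period = 0.001;   // 1 kHz -> 1 ms per cycle
      ros::Rate rate(1.0 / nominal_period);

      ros::WallTime last = ros::WallTime::now();
      double worst_jitter = 0.0;

      while (ros::ok())
      {
        rate.sleep();
        const ros::WallTime now = ros::WallTime::now();
        const double period = (now - last).toSec();
        last = now;

        const double jitter = std::fabs(period - nominal_period);
        worst_jitter = std::max(worst_jitter, jitter);
        ROS_INFO_THROTTLE(1.0, "period: %.6f s, worst jitter so far: %.6f s",
                          period, worst_jitter);
      }
      return 0;
    }

Running something like this on the actual hardware, under the actual load, says far more about achievable jitter than any general number.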

The distributed discovery used in ROS 2 can actually lead to longer start-up delays than the start-up delay caused by the use of the master in ROS 1. You use the word "instantly" a lot, but there is no "instantly" in software. You need to specify your timing requirements properly for us to help.

Geoff (2018-04-17 18:40:32 -0600)

2 Answers


answered 2018-04-18 02:36:58 -0600 by Geoff

If you have a hard real-time requirement of message exchange at 1 kHz, then ROS 1 will not meet your needs.

ROS 2 in theory will meet your needs, but I am not sure how up to date the real-time support is. You would still need to ensure you use a real-time operating system and a real-time-capable DDS implementation, as well as write your own nodes to be real-time capable. ROS 2 can support real-time, but because so much else is involved, actually making an application real-time is up to you.

If you have only a soft real-time requirement (meaning you can accept your control loop sometimes taking longer), then you could use ROS 1, but I do not think you would get stable performance. Most people who want to do real-time control with ROS 1 write a separate real-time-capable piece of software for that and link it to ROS, or they use Orocos (with or without ROS).

No one can tell you the CPU load, launch delay time, or anything else about your application without having your application software and an exact replica of your execution environment. You will need to measure these sorts of values yourself.


Comments

Thanks @Geoff for your kind explanation. One question: what makes ROS2 RT-capable compared to ROS?

aks (2018-04-18 02:40:08 -0600)

And does that also mean that ROS might not be the best choice for hard RT components, but can work quite well for the message exchange in a simulation environment, where a small delay is acceptable compared to the real hardware?

aks (2018-04-18 02:42:04 -0600)

ROS 2 is real-time capable because it has been designed and implemented to be so. This includes things like performing memory allocation in the correct places.

Geoff (2018-04-18 19:03:39 -0600)

ROS 1 can be used for the simulated version because the simulator itself is not real time. You should design your nodes to work on the time provided by the ros::Time API so they can be driven in sync with the simulation.

Geoff (2018-04-18 19:04:12 -0600)

@Geoff: For the simulated version, you mean synchronising the nodes with the ROS time stamp?

aks (2018-04-20 04:53:09 -0600)

When you use time information in your nodes for processing data (e.g. calculating velocities), you should use the ros::Time API. Then, if you use a simulator, your nodes will use the simulated time, which will run at the correct rate according to the simulation.

Geoff (2018-04-20 08:23:30 -0600)
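As an illustration of that pattern, a node that derives velocity from a position topic might look roughly like this (the topic, message type and node name are invented for the example). Because it takes its time deltas from ros::Time::now(), the same code follows simulated time from /clock when the use_sim_time parameter is set:

    #include <ros/ros.h>
    #include <std_msgs/Float64.h>

    ros::Time last_stamp;
    double last_position = 0.0;
    bool have_previous = false;

    void positionCb(const std_msgs::Float64::ConstPtr& msg)
    {
      const ros::Time now = ros::Time::now();   // simulated time if use_sim_time is true
      if (have_previous)
      {
        const double dt = (now - last_stamp).toSec();
        if (dt > 0.0)   // guards against the clock not having started yet
        {
          const double velocity = (msg->data - last_position) / dt;
          ROS_INFO("velocity: %.3f", velocity);
        }
      }
      last_stamp = now;
      last_position = msg->data;
      have_previous = true;
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "velocity_estimator");
      ros::NodeHandle nh;
      ros::Subscriber sub = nh.subscribe("position", 10, positionCb);
      ros::spin();
      return 0;
    }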

In that case, if multiple nodes are being run on different machines, how are they synchronised for correct measurement analysis? Does the Master do this synchronisation? Do all the nodes need to use the ros::Time API?

aks (2018-04-20 09:36:34 -0600)

The master does not do synchronisation of execution or time. When a node uses time in processing data, it should use the ros::Time API as the source. However, this does not synchronise execution. For that you need to run them in the same process or look at real-time protocols like TTP.

Geoff (2018-04-22 18:12:48 -0600)

answered 2018-04-27 02:40:08 -0600 by dejanpan

A couple of things we found in ROS1 that make it not hard-RT capable:

  1. use of STL threads, which have neither a timeout argument (so they cannot be pre-empted) nor an argument for priority control
  2. memory allocation/de-allocation at runtime, which fragments the memory (as opposed to allocating memory only at startup; see the sketch at the end of this answer)
  3. use of potentially blocking calls: malloc, free, fopen, fread, fwrite, ...
  4. use of non-lock-free queues (an example of a lock-free queue: http://natsys-lab.blogspot.com/2013/0... )

See also this very good article: https://github.com/ros2/ros2/wiki/Rea... .

In general, as Geoff already mentioned above, for real-time you need a host OS that provides certain guarantees, such as a deterministic scheduling sequence from a proper scheduler, always enough compute time, static memory, etc. Then you need to have fixed all of the points above in ROS (in ROS2 we are doing this as we speak), and finally you also need to avoid the same issues in your own application. As a final step you need to profile and optimize your application with tools such as valgrind or LTTng to make sure that you don't spend too much time waiting around.
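As an illustration of point 2 in the list above, the usual real-time pattern is to allocate everything once at startup and only write into existing memory inside the loop. A rough sketch (topic name, array size and loop rate are arbitrary):

    #include <ros/ros.h>
    #include <std_msgs/Float64MultiArray.h>

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "preallocating_publisher");
      ros::NodeHandle nh;
      ros::Publisher pub = nh.advertise<std_msgs::Float64MultiArray>("samples", 1);

      // All allocation happens here, once, before the loop starts.
      std_msgs::Float64MultiArray msg;
      msg.data.resize(64, 0.0);

      ros::Rate rate(1000.0);
      while (ros::ok())
      {
        // The hot path only writes into memory that already exists.
        for (size_t i = 0; i < msg.data.size(); ++i)
          msg.data[i] = static_cast<double>(i);

        pub.publish(msg);
        rate.sleep();
      }
      return 0;
    }

Even with this pattern, roscpp's own serialization and transport layers may still allocate internally, which is part of why ROS 1 cannot give hard real-time guarantees no matter how careful the application code is.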


Stats

Asked: 2018-04-16 07:39:28 -0600

Seen: 2,410 times

Last updated: Apr 27 '18