Message sizes of >= 520 bytes result in inconsistent publish timing

asked 2015-10-29 17:00:00 -0600 by Kogut

updated 2015-10-29 21:35:24 -0600

Publishing a message of size 519 bytes at 100 Hz results in this output from 'rostopic hz':

average rate: 99.997 min: 0.010s max: 0.010s std dev: 0.00006s window: 996

Increasing the message size by one byte, to 520 bytes, results in this output from 'rostopic hz':

average rate: 99.985 min: 0.000s max: 0.044s std dev: 0.01651s window: 4908

This is a custom message defined as an array of int8, but the behavior is independent of message type: any message larger than 519 bytes exhibits the same effect. It is also independent of Python vs. C++ and of queue size. The effect does depend on publish rate, and starts to appear at around 35 Hz.

The much larger standard deviation appears to come from a buffering effect: the larger messages arrive in bursts of 3-5, whereas the smaller message is received at a much more consistent interval.

Is this a known behavior? It is surprising to me because 520 bytes isn't a terribly large message size. The nav_msgs::Odometry message is larger.
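
To probe where the threshold sits, a publisher along the following lines can be used. This is only a sketch: std_msgs/UInt8MultiArray stands in here for the custom int8-array message, its MultiArrayLayout adds a few bytes of serialization overhead (so the exact on-the-wire boundary shifts slightly), and the topic name 'size_probe' is just a placeholder.

#!/usr/bin/env python
# Sketch: publish a variable-length byte array so the payload size can be
# stepped across the ~519/520-byte boundary while watching 'rostopic hz'.

import rospy
from std_msgs.msg import UInt8MultiArray


def size_probe(payload_bytes=520, hz=100):
    pub = rospy.Publisher('size_probe', UInt8MultiArray, queue_size=100)
    rospy.init_node('size_probe', anonymous=True)
    rate = rospy.Rate(hz)
    msg = UInt8MultiArray()
    msg.data = [0] * payload_bytes  # adjust to change the serialized size
    while not rospy.is_shutdown():
        pub.publish(msg)
        rate.sleep()


if __name__ == '__main__':
    try:
        size_probe()
    except rospy.ROSInterruptException:
        pass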

While I used a custom message to narrow down the precise threshold above, the following code, which simply uses the Odometry message (whose serialized size exceeds 519 bytes), replicates the issue:

#!/usr/bin/env python

import rospy
from nav_msgs.msg import Odometry


def odomer():
    # Publish empty Odometry messages (serialized size > 519 bytes) at 40 Hz.
    pub = rospy.Publisher('odomer', Odometry, queue_size=100)
    rospy.init_node('odomer', anonymous=True)
    rate = rospy.Rate(40)
    while not rospy.is_shutdown():
        odom_msg = Odometry()
        pub.publish(odom_msg)
        rate.sleep()


if __name__ == '__main__':
    try:
        odomer()
    except rospy.ROSInterruptException:
        pass
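
To see the bursting directly rather than only as a large std dev in 'rostopic hz', a minimal subscriber sketch like the one below prints the wall-clock gap between consecutive messages (it assumes the 'odomer' topic published by the node above):

#!/usr/bin/env python
# Sketch: print the inter-arrival time of each message so that the
# 3-5 message bursts described above become visible in the log.

import rospy
from nav_msgs.msg import Odometry

last_stamp = [None]  # wall-clock time of the previous message


def callback(msg):
    now = rospy.get_time()
    if last_stamp[0] is not None:
        rospy.loginfo("gap: %.4f s", now - last_stamp[0])
    last_stamp[0] = now


if __name__ == '__main__':
    rospy.init_node('odomer_listener', anonymous=True)
    rospy.Subscriber('odomer', Odometry, callback, queue_size=100)
    rospy.spin()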

Comments

Please share your sample code which reproduces this issue.

ahendrix (2015-10-29 17:48:22 -0600)

Done. I also significantly edited the question to reflect my further isolation of the issue. It has nothing to do with the Odometry message (other than its size), but I kept Odometry in my code because it is easier to use than the custom message I used to identify the precise size.

Kogut (2015-10-29 21:37:18 -0600)

Is this all on a single machine, or is there a network involved?

gvdhoorn (2015-10-30 02:34:30 -0600)

I have also observed this behavior, and was also using Odometry messages at the time. Using tcpNoDelay solved the issue for me.

Tom Moore (2016-06-16 03:16:32 -0600)

1 Answer


answered 2015-10-30 02:53:37 -0600

updated 2015-10-30 02:55:48 -0600

We also noticed this previously, and it does indeed seem to start showing up once a specific message size (apparently 519 bytes) is exceeded. When using the ros::TransportHints().tcpNoDelay(true) subscriber option, the problem disappeared for me in one crucial use case (example here), suggesting that the observed symptoms are Nagle's algorithm at work. I don't completely understand how, however, as smaller sizes should be affected as well. I also seem to remember others reporting no improvement with the tcpNoDelay option, so I'd also be interested in an explanation.

The two options I see for reducing the delays are using tcpNoDelay as mentioned, or using UDPROS (which limits the allowed message size and might have other caveats depending on the use case).

/edit: It might be the case that all instances of this issue I've seen were ones where data was actually sent over the wire (Ethernet) as opposed to local loopback.
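
For anyone wanting to try this from Python: the rospy counterpart of roscpp's ros::TransportHints().tcpNoDelay(true) is the tcp_nodelay argument of rospy.Subscriber. A minimal sketch (topic and node names are placeholders matching the question's example):

#!/usr/bin/env python
# Sketch: subscribe with tcp_nodelay=True, which asks the publisher side to
# set TCP_NODELAY on the TCPROS connection, i.e. to disable Nagle's algorithm.

import rospy
from nav_msgs.msg import Odometry


def callback(msg):
    pass  # process the message here


if __name__ == '__main__':
    rospy.init_node('odomer_listener_nodelay', anonymous=True)
    rospy.Subscriber('odomer', Odometry, callback,
                     queue_size=100, tcp_nodelay=True)
    rospy.spin()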

