Message sizes of >= 520 bytes result in inconsistent publish timing
Publishing a message of 519 bytes at 100 Hz produces this output from 'rostopic hz':
average rate: 99.997 min: 0.010s max: 0.010s std dev: 0.00006s window: 996
Increasing the message size by one byte, to 520 bytes, produces this output from 'rostopic hz':
average rate: 99.985 min: 0.000s max: 0.044s std dev: 0.01651s window: 4908
This is a custom message defined as an array of int8, but the behavior is independent of message type: any message larger than 519 bytes exhibits the same effect. It is also independent of language (Python or C++) and of queue size. The effect does depend on publish rate, and starts to appear at around 35 Hz.
The much larger standard deviation appears to be caused by a buffering effect: the larger messages arrive in bursts of 3-5, in contrast to the consistently spaced arrivals of the smaller message.
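For reference, here is a minimal sketch of a size-sweep publisher along the lines of the test I used, but written with std_msgs/UInt8MultiArray rather than my custom int8[] message. The topic name, payload size, and rate are only illustrative, and the serialized message carries some length/layout overhead on top of the raw payload, so sweep payload_bytes around the threshold rather than treating it as exact:

#!/usr/bin/env python
import rospy
from std_msgs.msg import UInt8MultiArray

def sized_publisher(payload_bytes=520, rate_hz=100):
    # Publish a fixed-size byte array to probe the timing threshold.
    pub = rospy.Publisher('sized_bytes', UInt8MultiArray, queue_size=100)
    rospy.init_node('sized_publisher', anonymous=True)
    rate = rospy.Rate(rate_hz)
    msg = UInt8MultiArray()
    msg.data = [0] * payload_bytes  # raw payload; serialized size is slightly larger
    while not rospy.is_shutdown():
        pub.publish(msg)
        rate.sleep()

if __name__ == '__main__':
    try:
        sized_publisher()
    except rospy.ROSInterruptException:
        pass

Timing can then be checked with 'rostopic hz sized_bytes' while varying payload_bytes and rate_hz.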
Is this a known behavior? It is surprising to me because 520 bytes isn't a terribly large message size. The nav_msgs::Odometry message is larger.
While I used a custom message to narrow down the precise behavior above, the following code, which simply publishes an Odometry message (> 519 bytes), replicates the issue:
#!/usr/bin/env python
import rospy
from nav_msgs.msg import Odometry

def odomer():
    # Publish an empty Odometry message (already > 519 bytes serialized) at 40 Hz.
    pub = rospy.Publisher('odomer', Odometry, queue_size=100)
    rospy.init_node('odomer', anonymous=True)
    rate = rospy.Rate(40)
    while not rospy.is_shutdown():
        odom_msg = Odometry()
        pub.publish(odom_msg)
        rate.sleep()

if __name__ == '__main__':
    try:
        odomer()
    except rospy.ROSInterruptException:
        pass
Please share the sample code that reproduces this issue.
Done. I also significantly edited the question to reflect my further isolation of the issue: it has nothing to do with the Odometry message other than its size, but I kept Odometry in my code because it is easier to use than the custom message I used to identify the precise size.
Is this all on a single machine, or is there a network involved?
I have also observed this behavior, and was also using Odometry messages at the time. Using tcpNoDelay solved the issue for me.
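In rospy the equivalent is the tcp_nodelay argument on the subscriber side (roscpp uses ros::TransportHints().tcpNoDelay()). A minimal sketch of a subscriber for the 'odomer' topic from the code above, with the callback left as a placeholder:

#!/usr/bin/env python
import rospy
from nav_msgs.msg import Odometry

def callback(msg):
    # Placeholder; timing can be checked externally with 'rostopic hz odomer'.
    pass

def listener():
    rospy.init_node('odomer_listener', anonymous=True)
    # tcp_nodelay=True requests TCP_NODELAY (i.e. disables Nagle's algorithm)
    # on the connection, which avoids small messages being coalesced into bursts.
    rospy.Subscriber('odomer', Odometry, callback, tcp_nodelay=True)
    rospy.spin()

if __name__ == '__main__':
    listener()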