slam_toolbox: map - odom transform stuck at time: 0.200

asked 2022-02-01 09:26:27 -0600 by PatoInso

updated 2022-02-02 03:15:36 -0600

Hello,

After migrating from Foxy to Galactic, I got the following issue:

[planner_server-18] [INFO] [1643728496.934987109] [global_costmap.global_costmap]: Timed out waiting for transform from base_link to map to become available, tf error: Lookup would require extrapolation into the past.  Requested time 0.200000 but the earliest data is at time 1643728495.556496, when looking up transform from frame [base_link] to frame [map]

The timestamp 0.200 corresponds to the transform_timeout parameter of slam_toolbox (if I change this parameter, its value is displayed in the message instead of 0.200), and it never advances, as if the TF were never updated.

This message gets printed over and over, preventing my other nodes from working correctly. Sometimes the issue disappears spontaneously after several seconds (but then I get other error messages).

The TF tree looks like this, and the map -> odom transform never seems to be updated: (TF tree screenshot)

This is exactly the same issue as the one described by this user on Reddit.

The map <-> odom TF is published by slam_toolbox in localization mode.

The TF is regularly published on the /tf topic and looks like this:

transforms:
- header:
    stamp:
      sec: 0
      nanosec: 200000000
    frame_id: map
  child_frame_id: odom
  transform:
    translation:
      x: 0.0
      y: 0.0
      z: 0.0
    rotation:
      x: 0.0
      y: 0.0
      z: 0.0
      w: 1.0

What could be the root cause, and how can I solve this?


Comments

OK, this issue and my other one are somehow linked. Under Galactic, slam_toolbox sets the timestamp of the map-odom TF to the timestamp of the last laserscan + the transform_timeout offset (https://github.com/SteveMacenski/slam...). Under Foxy, it was stamped with now() + the transform_timeout offset (https://github.com/SteveMacenski/slam...).

In Galactic I replaced the laser timestamp with now(), and both of my issues got resolved... definitely some timing issue.
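
Roughly, the difference looks like this (a simplified sketch only, not the actual slam_toolbox code; the function and variable names are made up):

#include <geometry_msgs/msg/transform_stamped.hpp>
#include <rclcpp/rclcpp.hpp>
#include <tf2_ros/transform_broadcaster.h>

// Sketch: how the map->odom stamp can end up stuck at 0.200 when no scan
// has arrived yet. 'last_scan_stamp' is the header stamp of the last
// received laserscan (0 before the first scan), 'timeout_sec' is the
// transform_timeout parameter.
void publish_map_to_odom(tf2_ros::TransformBroadcaster & broadcaster,
                         rclcpp::Node & node,
                         const rclcpp::Time & last_scan_stamp,
                         double timeout_sec)
{
  (void)node;  // only used by the commented-out Foxy-like variant below

  geometry_msgs::msg::TransformStamped msg;
  msg.header.frame_id = "map";
  msg.child_frame_id = "odom";
  msg.transform.rotation.w = 1.0;  // identity transform, just for the example

  // Galactic-like behaviour: stamp derived from the last scan,
  // so it stays at 0 + transform_timeout until a scan is processed.
  msg.header.stamp = last_scan_stamp + rclcpp::Duration::from_seconds(timeout_sec);

  // Foxy-like behaviour (what I reverted to): stamp from the node clock.
  // msg.header.stamp = node.now() + rclcpp::Duration::from_seconds(timeout_sec);

  broadcaster.sendTransform(msg);
}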

PatoInso ( 2022-02-02 10:23:20 -0600 )

Also, slam_toolbox misses a lot of laserscan messages. An RCLCPP_INFO in the laser callback shows that sometimes it is not called for several seconds (up to 10 s), while the scans are published at a solid 20 Hz on the /scan topic.
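
For what it's worth, the gap was measured with something like this (a minimal sketch; node and topic names are assumptions):

#include <memory>
#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/laser_scan.hpp>

// Minimal node that logs the time elapsed between consecutive scan callbacks,
// to show when the callback is not called for several seconds.
class ScanGapLogger : public rclcpp::Node
{
public:
  ScanGapLogger() : Node("scan_gap_logger")
  {
    sub_ = create_subscription<sensor_msgs::msg::LaserScan>(
      "/scan", rclcpp::SensorDataQoS(),
      [this](const sensor_msgs::msg::LaserScan::SharedPtr msg) {
        (void)msg;
        const rclcpp::Time current = now();
        if (last_.nanoseconds() > 0) {
          RCLCPP_INFO(get_logger(), "gap since previous scan callback: %.3f s",
                      (current - last_).seconds());
        }
        last_ = current;
      });
  }

private:
  rclcpp::Subscription<sensor_msgs::msg::LaserScan>::SharedPtr sub_;
  rclcpp::Time last_{0, 0, RCL_ROS_TIME};
};

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<ScanGapLogger>());
  rclcpp::shutdown();
  return 0;
}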

PatoInso ( 2022-02-02 10:25:12 -0600 )

2 Answers


answered 2022-02-08 09:31:18 -0600 by PatoInso

OK, I'm not sure of the exact mechanism behind this, but it was related to timestamp issues caused by delays in my laserscan pipeline:

  • A first delay introduced by a Message Filter that cached laserscan messages to sync them in my scan merger: when a new scan was received, it was typically the previous scan that got output (so one sample of delay).
  • The structure of my laser_odometry package was such that the odom <-> base_link TF was computed and timestamped with the latest laserscan, but only published about 100 ms later (a process() followed by a spin_some() that processed obsolete data); see the sketch below.

Solving these two issues and reducing the delay, so that the timestamp of the TF stays close to now(), solved it for me.
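
For the second point in particular, the fix essentially amounted to computing and broadcasting the transform directly in the scan callback, stamped with the scan header, instead of deferring publication to a later spin cycle. A simplified sketch of that structure (not the real package; node and frame names are assumptions):

#include <memory>
#include <geometry_msgs/msg/transform_stamped.hpp>
#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/laser_scan.hpp>
#include <tf2_ros/transform_broadcaster.h>

// Sketch: broadcast odom->base_link as soon as it is computed, stamped with
// the scan header, so the published TF is never ~100 ms stale.
class LaserOdomSketch : public rclcpp::Node
{
public:
  LaserOdomSketch() : Node("laser_odom_sketch"), broadcaster_(*this)
  {
    sub_ = create_subscription<sensor_msgs::msg::LaserScan>(
      "/scan", rclcpp::SensorDataQoS(),
      [this](const sensor_msgs::msg::LaserScan::SharedPtr scan) {
        geometry_msgs::msg::TransformStamped tf;
        tf.header.stamp = scan->header.stamp;  // keep the scan's own stamp
        tf.header.frame_id = "odom";
        tf.child_frame_id = "base_link";
        tf.transform.rotation.w = 1.0;         // placeholder: real laser odometry goes here
        broadcaster_.sendTransform(tf);        // publish immediately, no deferred spin_some()
      });
  }

private:
  tf2_ros::TransformBroadcaster broadcaster_;
  rclcpp::Subscription<sensor_msgs::msg::LaserScan>::SharedPtr sub_;
};

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<LaserOdomSketch>());
  rclcpp::shutdown();
  return 0;
}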


answered 2022-02-02 22:33:54 -0600

You're not running on the same clock: something is set to sim time while something else is using wall time, which is pretty clear from the error message (0.200000 ... 1643728495.556496).
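
One way to check is to compare a node's own clock against the system clock; if they differ wildly, that node is not on wall time. A minimal diagnostic sketch (not part of any package; the node name is an assumption):

#include <memory>
#include <rclcpp/rclcpp.hpp>

// Prints the node clock (ROS time, affected by use_sim_time) next to the
// system clock; a large difference means the node is not on wall time.
int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  auto node = std::make_shared<rclcpp::Node>("clock_check");
  rclcpp::Clock system_clock(RCL_SYSTEM_TIME);
  RCLCPP_INFO(node->get_logger(), "node clock: %.3f s, system clock: %.3f s",
              node->now().seconds(), system_clock.now().seconds());
  rclcpp::shutdown();
  return 0;
}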


Comments

Everything is running on the same machine, and I checked again manually with ros2 param get that every use_sim_time is indeed False. I haven't yet found anything in my custom nodes using a different clock, or even an rclcpp::Clock().now(); they all work with the message timestamps...

PatoInso ( 2022-02-03 09:16:49 -0600 )

The 0.200 is because slam_toolbox starts to publish the map <-> odom TF before any laserscan has been received. But setting the log level to DEBUG shows that the message filter is dropping nearly all incoming scans (reason: the queue is full). I use a scan merger and laser odometry to provide the odom <-> base_link TF, but the TF tree seems complete... I suspect some kind of circular dependency, and some relation to the order in which the TFs are initially published...

PatoInso ( 2022-02-03 09:21:58 -0600 )

That could be the case. Did you try adding a check condition in the publishTransformLoop and seeing whether it works fine afterwards? If so, I'd happily merge a PR that does that. Basically, just check whether the stamp time is 0: if so, skip; else, use it.
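
Something like this, roughly (a hypothetical sketch of the check, not the actual slam_toolbox code; the function and variable names are made up):

#include <geometry_msgs/msg/transform_stamped.hpp>
#include <rclcpp/rclcpp.hpp>
#include <tf2_ros/transform_broadcaster.h>

// Sketch of the proposed guard: don't broadcast map->odom while the scan
// stamp backing the transform is still zero (i.e. no scan processed yet).
void maybe_publish_map_to_odom(tf2_ros::TransformBroadcaster & broadcaster,
                               geometry_msgs::msg::TransformStamped msg,
                               const rclcpp::Time & last_scan_stamp,
                               double timeout_sec)
{
  if (last_scan_stamp.nanoseconds() == 0) {
    return;  // no scan yet: publishing would produce the stuck 0.200 stamp
  }
  msg.header.stamp = last_scan_stamp + rclcpp::Duration::from_seconds(timeout_sec);
  broadcaster.sendTransform(msg);
}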

stevemacenski ( 2022-02-03 12:31:20 -0600 )

Adding this check did not help.

The slam_toolbox then never publishes a TF because the message filter drops all incoming laserscans ([localization_slam_toolbox_node-15] [INFO] [1643993945.437412839] [slam_toolbox]: Message Filter dropping message: frame 'laser_merged' at time 1643993945.336 for reason 'discarding message because the queue is full').

Actually, all this started to happen when I corrected the TF and odom timestamps provided by my rf2o laser odometry (using the laserscan timestamp instead of now()). So rf2o provides the odom <-> base_link TF, and my base_link <-> laser TF is static, but slam_toolbox implicitly uses it to get a laser <-> odom TF (according to the debug log of the message filter). I smell a circular dependency but can't figure out how, or why all scans are dropped.

PatoInso ( 2022-02-04 11:07:51 -0600 )

Actually it's not even consistent: sometimes everything is stuck with the map <-> odom TF at 0.200, and sometimes the TF does get published, but then I get an endless [planner_server-17] [INFO] [1643994867.201430973] [global_costmap.global_costmap_rclcpp_node]: Message Filter dropping message: frame 'laser_merged' at time 1643994866.856 for reason 'the timestamp on the message is earlier than all the data in the transform cache', as in here. Not seeing the link drives me crazy.

PatoInso ( 2022-02-04 11:18:47 -0600 )

This seems like something is wrong in your setup, regardless of slam_toolbox.

stevemacenski ( 2022-02-05 19:15:53 -0600 )
