Best practices for professional(ish) robot viz videos

asked 2018-04-02 17:58:52 -0500

vpradeep

updated 2018-04-02 17:59:58 -0500

I'm trying to generate a high-quality video of ROS sensor data & a robot. A summary of the data:

  • A moving/articulated robot (all link poses being published in TF)
  • sensor_msgs/LaserScan from a moving SICK lidar (pose published in TF)
  • sensor_msgs/PointCloud2 from a moving Velodyne unit (pose published in TF)

So far, I've been playing back a rosbag, running a screen-capture tool to record the RViz OpenGL window, and manually tweaking camera poses, sensor transparency settings, and other displays until things look just right. Does anyone have suggestions or best practices for producing this kind of scripted video in a more reliable, repeatable fashion?
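Even just making the capture step repeatable would help; a minimal wrapper like the one below is roughly what I have in mind, starting the screen recording and the bag playback together so every take begins at the same point in the data. The ffmpeg x11grab settings, display geometry, output file, and bag name are all placeholders for my setup:

    #!/usr/bin/env python
    # Sketch: launch a screen capture and rosbag playback together so the
    # recording always starts at the same bag time. Adjust the display,
    # window geometry, and file names for the actual setup.
    import signal
    import subprocess
    import time

    BAG = "my_robot_run.bag"              # placeholder bag file
    CAPTURE_CMD = [
        "ffmpeg", "-y",
        "-f", "x11grab",
        "-video_size", "1920x1080",       # match the RViz window size
        "-framerate", "30",
        "-i", ":0.0+0,0",                 # display and window offset
        "-c:v", "libx264", "-preset", "ultrafast",
        "rviz_capture.mp4",
    ]
    PLAY_CMD = ["rosbag", "play", "--clock", "--rate", "1.0", BAG]

    capture = subprocess.Popen(CAPTURE_CMD)
    time.sleep(2.0)                       # give ffmpeg a moment to start
    play = subprocess.Popen(PLAY_CMD)
    play.wait()                           # block until playback finishes
    capture.send_signal(signal.SIGINT)    # let ffmpeg finalize the file
    capture.wait()

That still leaves the camera motion and display toggling as manual steps, which is the part I'd really like to script.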

I've already come across building blocks that enable camera control in RViz: https://answers.ros.org/question/2058... & https://answers.ros.org/question/1034... . However, I'd probably still have to build additional tooling on top of these to define camera trajectories, keep the camera motion synced with the bagfile, and script which data is visualized when. That starts to sound like the beginnings of a DIY 3D keyframing tool (and thus, a ton of work).
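For concreteness, the kind of scripted camera control I'm imagining on top of those building blocks looks roughly like the sketch below. It assumes the rviz_animated_view_controller plugin and its view_controller_msgs/CameraPlacement message, with the bag played back using --clock and use_sim_time so the keyframes track bag time; the topic name, target frame, and keyframe values are placeholders:

    #!/usr/bin/env python
    # Sketch: drive the RViz camera from a small keyframe table, synced to
    # bag time. Assumes the rviz_animated_view_controller plugin; start this
    # node as the bag playback starts so the keyframe times line up.
    import rospy
    from view_controller_msgs.msg import CameraPlacement

    # (seconds from bag start, eye xyz, focus xyz) -- placeholder keyframes
    KEYFRAMES = [
        (0.0,  (4.0, 0.0, 2.0), (0.0, 0.0, 0.5)),
        (10.0, (2.0, 3.0, 1.5), (0.0, 0.0, 0.5)),
        (20.0, (0.0, 4.0, 2.5), (0.0, 0.0, 0.5)),
    ]

    def main():
        rospy.init_node("camera_keyframer")
        pub = rospy.Publisher("/rviz/camera_placement", CameraPlacement,
                              queue_size=1)
        rospy.sleep(1.0)                   # wait for the subscriber connection
        start = rospy.Time.now()           # sim time when use_sim_time is set
        for t, eye, focus in KEYFRAMES:
            # wait until the bag reaches this keyframe's time
            while not rospy.is_shutdown() and \
                    (rospy.Time.now() - start).to_sec() < t:
                rospy.sleep(0.05)
            cp = CameraPlacement()
            cp.target_frame = "base_link"
            cp.eye.header.frame_id = "base_link"
            cp.eye.point.x, cp.eye.point.y, cp.eye.point.z = eye
            cp.focus.header.frame_id = "base_link"
            cp.focus.point.x, cp.focus.point.y, cp.focus.point.z = focus
            cp.up.header.frame_id = "base_link"
            cp.up.vector.z = 1.0
            cp.time_from_start = rospy.Duration(2.0)  # interpolate over 2 s
            pub.publish(cp)

    if __name__ == "__main__":
        main()

Extending this to also toggle displays and transparency at given bag times is exactly where it starts turning into a keyframing tool of its own.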

An alternative approach would be to do all the scripting in Blender. Has anyone had success importing TF, LaserScan, & PointCloud2 data into Blender for video generation? Are there any tools people would suggest I use? Note that this is more involved than importing a single point cloud or mesh: I want to import a stream of sensor data, with all timestamps preserved so that each message is displayed at the correct time. I'd also need the TF data to ensure that my articulated robot mesh is rendered correctly.
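As a rough idea of the intermediate step I'd expect to need, a sketch like the following could dump each PointCloud2 message to a per-frame file keyed by its timestamp, which a separate Blender (bpy) script would then map onto animation frames. The bag name, topic name, and output directory are placeholders:

    #!/usr/bin/env python
    # Sketch: export every PointCloud2 message from a bag as a .npy file
    # named by its header timestamp, as an intermediate for a Blender
    # import/keyframing script.
    import os
    import numpy as np
    import rosbag
    from sensor_msgs import point_cloud2

    BAG_FILE = "my_robot_run.bag"        # placeholder bag file
    CLOUD_TOPIC = "/velodyne_points"     # placeholder Velodyne topic
    OUT_DIR = "cloud_frames"

    if not os.path.isdir(OUT_DIR):
        os.makedirs(OUT_DIR)

    with rosbag.Bag(BAG_FILE) as bag:
        for _, msg, _ in bag.read_messages(topics=[CLOUD_TOPIC]):
            stamp = msg.header.stamp.to_sec()
            # extract xyz points, dropping NaNs from the Velodyne returns
            pts = np.array(list(point_cloud2.read_points(
                msg, field_names=("x", "y", "z"), skip_nans=True)))
            np.save(os.path.join(OUT_DIR, "cloud_%.6f.npy" % stamp), pts)

TF would presumably need a similar export (e.g. logging each link's transform over time during playback) so the articulated robot mesh can be keyframed in Blender as well, which is why I'm hoping someone has already built tooling for this.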

Any pointers or best practices would be greatly appreciated.
