You could use `depth_image_proc/register` on one rgbd camera to transform it into the frame (and resolution and intrinsics) of the other rgbd camera (register calls the destination CameraInfo topic `rgb/camera_info`), or transform both rgbd cameras into a third frame and combine there.
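For intuition, here is a rough numpy sketch of the reprojection that register performs internally: back-project each valid depth pixel with the source intrinsics, transform into the destination optical frame, and re-project with the destination intrinsics. This is not the actual `depth_image_proc` code; distortion is ignored and all names and shapes are placeholder assumptions:

```python
import numpy as np

# Sketch of the reprojection inside depth_image_proc/register (distortion
# ignored; not the real implementation). depth_a is an HxW float32 image in
# metres, K_a/K_b are 3x3 intrinsic matrices, T_ab is a 4x4 transform from
# camera A's optical frame to camera B's optical frame.
def register_depth(depth_a, K_a, K_b, T_ab, out_shape):
    h, w = depth_a.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_a
    valid = np.isfinite(z) & (z > 0)
    # back-project valid pixels to 3D points in A's optical frame
    x = (us - K_a[0, 2]) * z / K_a[0, 0]
    y = (vs - K_a[1, 2]) * z / K_a[1, 1]
    pts = np.stack([x[valid], y[valid], z[valid], np.ones(valid.sum())])
    # transform the points into B's optical frame
    pts_b = T_ab @ pts
    in_front = pts_b[2] > 0
    # project with B's intrinsics into B's image plane
    u_b = np.round(K_b[0, 0] * pts_b[0] / pts_b[2] + K_b[0, 2]).astype(int)
    v_b = np.round(K_b[1, 1] * pts_b[1] / pts_b[2] + K_b[1, 2]).astype(int)
    hb, wb = out_shape
    out = np.full(out_shape, np.nan, np.float32)
    keep = in_front & (u_b >= 0) & (u_b < wb) & (v_b >= 0) & (v_b < hb)
    # crude z-buffer: write far points first so nearer ones overwrite them
    order = np.argsort(-pts_b[2][keep])
    out[v_b[keep][order], u_b[keep][order]] = pts_b[2][keep][order]
    return out
```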
Then there needs to be a nodelet that takes two depth images in the same frame, with the same resolution and intrinsics/distortion (and the same timestamp, or else additional error), and combines each pixel of the two sources: passing through the closer valid depth value would make sense, or averaging, or something else. I don't think that nodelet exists anywhere open source (though I haven't looked much), but it wouldn't be hard to adapt the code of another nodelet in `depth_image_proc` to do it (and ideally upstream it into `depth_image_proc`). A rough sketch of such a node is below.
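A minimal sketch of that missing combiner as a plain rospy node, assuming both inputs are already registered into the same frame/resolution (e.g. the outputs of two `register` nodelets) and use the 32FC1 encoding with NaN for invalid pixels. Topic names and the sync slop are placeholders:

```python
#!/usr/bin/env python
# Sketch: combine two depth images that share a frame, resolution and
# intrinsics by passing through the closer valid value per pixel.
import message_filters
import numpy as np
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()
pub = None

def callback(msg_a, msg_b):
    a = bridge.imgmsg_to_cv2(msg_a)  # assumed 32FC1, invalid pixels NaN
    b = bridge.imgmsg_to_cv2(msg_b)
    # fmin takes the elementwise minimum and prefers the non-NaN value,
    # i.e. pass through the closer valid depth of the two
    combined = np.fmin(a, b)
    out = bridge.cv2_to_imgmsg(combined, encoding=msg_a.encoding)
    out.header = msg_a.header  # sloppy; ideally derive from both stamps
    pub.publish(out)

if __name__ == "__main__":
    rospy.init_node("depth_combine")
    pub = rospy.Publisher("depth_combined/image", Image, queue_size=2)
    sub_a = message_filters.Subscriber("cam_a/depth_registered/image_rect", Image)
    sub_b = message_filters.Subscriber("cam_b/depth_registered/image_rect", Image)
    # exact TimeSynchronizer if the cameras are hardware triggered,
    # approximate with a small slop otherwise
    sync = message_filters.ApproximateTimeSynchronizer(
        [sub_a, sub_b], queue_size=5, slop=0.02)
    sync.registerCallback(callback)
    rospy.spin()
```

A 16UC1 (millimetre) variant would need to treat 0 as invalid rather than relying on NaN, since `fmin` would otherwise pass the zeros through.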
`depth_image_proc/register` isn't currently doing the right thing with synchronization and timestamps of the two camera_info inputs, so the results will be sloppy and less correct when the cameras, or what they are looking at, are moving fast relative to the frame rate, unless the cameras are time synchronized at acquisition. It needs to use the destination camera_info to drive the output timestamp, and do a 'full' transform where it transforms the input depth image at the time of the input depth image into the destination frame at the destination camera_info timestamp, using a third frame as a fixed reference (e.g. map or odom).
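That 'full' transform corresponds to tf2's fixed-frame (time travel) lookup. A minimal sketch, assuming odom as the fixed reference; the frame names and stamps are placeholders:

```python
# Sketch of the "full" lookup described above: source frame at the source
# image time -> destination frame at the destination camera_info time,
# chained through a fixed frame.
import rospy
import tf2_ros

rospy.init_node("register_tf_sketch")
buf = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(buf)
rospy.sleep(1.0)  # let the buffer fill

now = rospy.Time.now()
src_image_stamp = now - rospy.Duration(0.05)  # stamp of the input depth image
dest_info_stamp = now                         # stamp of the destination camera_info

transform = buf.lookup_transform_full(
    target_frame="cam_b_depth_optical_frame", target_time=dest_info_stamp,
    source_frame="cam_a_depth_optical_frame", source_time=src_image_stamp,
    fixed_frame="odom", timeout=rospy.Duration(0.1))
```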
If the two camera frames aren't that close to each other, you may need https://github.com/ros-perception/image_pipeline/pull/722.