Stereo camera standards in ROS
Hi everyone,
I am trying to use a stereo camera with RTAB-Map for 3D mapping. The problem is that the driver is horrible. I have found that:
- When my SP1 hardware is in rectify mode, it publishes the right image on the left topic and the left image on the disparity topic.
- The camera info is published in their own message type, nerian_stereo::StereoCameraInfo, which holds two incomplete sensor_msgs/CameraInfo messages. The definition is below, and a sketch of how I would consume it follows this list:
    Header header

    # Camera information for the left camera
    sensor_msgs/CameraInfo left_info

    # Camera information for the right camera
    sensor_msgs/CameraInfo right_info

    # Disparity-to-depth mapping matrix in 4x4 row-major format. Use this
    # matrix together with the Reconstruct3D class from libvisiontransfer
    # to transform a disparity map to 3D points.
    float64[16] Q

    # Translation vector between the coordinate systems of the left and
    # right camera.
    float64[3] T_left_right

    # Rotation matrix between the coordinate systems of the left and right
    # camera in 3x3 row-major format.
    float64[9] R_left_right
- There is only a single coordinate frame for the whole device, rather than one frame per camera as children of an overall system frame.
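As mentioned above, this is roughly the relay node I have in mind (a minimal, untested sketch; the topic names are placeholders I made up, and I am assuming the nerian_stereo package generates Python bindings for its messages):

    #!/usr/bin/env python
    # Minimal relay sketch: split nerian_stereo/StereoCameraInfo into the two
    # standard sensor_msgs/CameraInfo topics that stereo_image_proc and
    # RTAB-Map expect. Topic names are placeholders, not the driver's.
    import rospy
    from sensor_msgs.msg import CameraInfo
    from nerian_stereo.msg import StereoCameraInfo

    def callback(msg):
        # Republish the embedded (currently incomplete) CameraInfo messages.
        left_pub.publish(msg.left_info)
        right_pub.publish(msg.right_info)

    rospy.init_node('stereo_camera_info_relay')
    left_pub = rospy.Publisher('left/camera_info', CameraInfo, queue_size=1)
    right_pub = rospy.Publisher('right/camera_info', CameraInfo, queue_size=1)
    rospy.Subscriber('stereo_camera_info', StereoCameraInfo, callback)
    rospy.spin()

Filling in the missing parts of those CameraInfo messages, and the transform between the two cameras, is what the questions below are about.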
So clearly I have to fix their driver to be able to use it. I don't know the ROS conventions for stereo cameras, so I have the following questions:
- The StereoCameraInfo message contains a translation vector and a 3x3 row-major rotation matrix. How do I apply these to provide the transforms ROS needs? (My tentative reading is in the sketch after these questions.)
- The CameraInfo message defines the projection matrix P with Tx_left = 0 and Tx_right = -fx' * baseline. Both are all 0 on their topics... Would the baseline be T_left_right[0] from their message, i.e. Tx_right = -fx' * T_left_right[0]?
- The translation vector in their message also has non-zero y and z components, which ROS assumes to be 0 for stereo cameras. Should I include these in the P matrix?
- Should the rotation matrix in their message be used in the transform I broadcast?
- Are the translation and rotation in their message the pose of the right camera relative to the left camera, with the left camera assumed to be at 0 0 0 0 0 0?
- I read in the CameraInfo documentation that the camera coordinate frame should have +x to the right, +y down, and +z into the image... The driver has a ros_coordinate_system parameter which, when enabled, uses an ENU frame (RHR). Since it only uses a single frame for both images, should I keep this set to true and then publish the correct transforms in the publisher I am about to write?
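To make my current understanding concrete, here is how I would translate the above into code. All of it is guesswork on my part: the frame names are placeholders, and I am assuming that T_left_right[0] is the baseline in metres and that R_left_right / T_left_right describe the right camera relative to the left one. Please correct anything that is wrong:

    #!/usr/bin/env python
    # Sketch of my assumed fix-up, to be used inside the relay node above.
    # Assumptions: Tx of the right P is -fx' * baseline with
    # baseline = T_left_right[0], and R_left_right / T_left_right give the
    # pose of the right optical frame in the left optical frame.
    import numpy as np
    import tf.transformations as tft
    from geometry_msgs.msg import TransformStamped

    def fix_right_camera_info(stereo_info):
        """Fill in Tx (P[3]) of the right CameraInfo from fx' and the baseline."""
        right = stereo_info.right_info
        fx = right.P[0]                         # fx' of the rectified right camera
        baseline = stereo_info.T_left_right[0]  # assumed to be the baseline in metres
        P = list(right.P)
        P[3] = -fx * baseline                   # Tx_right = -fx' * baseline
        right.P = P
        return right

    def left_to_right_transform(stereo_info):
        """Build the transform of the right optical frame w.r.t. the left one."""
        R = np.array(stereo_info.R_left_right).reshape(3, 3)
        M = np.eye(4)
        M[:3, :3] = R
        q = tft.quaternion_from_matrix(M)       # returns (x, y, z, w)

        t = TransformStamped()
        t.header.stamp = stereo_info.header.stamp
        t.header.frame_id = 'nerian_left_optical'   # placeholder frame names
        t.child_frame_id = 'nerian_right_optical'
        tx, ty, tz = stereo_info.T_left_right
        t.transform.translation.x = tx
        t.transform.translation.y = ty
        t.transform.translation.z = tz
        t.transform.rotation.x = q[0]
        t.transform.rotation.y = q[1]
        t.transform.rotation.z = q[2]
        t.transform.rotation.w = q[3]
        return t

In the relay callback I would then publish the corrected right_info alongside left_info, and send the TransformStamped with a tf2_ros.StaticTransformBroadcaster. Is that the right reading of the conventions?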
Furthermore, I asked a question on their GitHub about the camera info, and they replied that they made their own message because there is no support for stereo in ROS. The CameraInfo message documentation literally describes stereo-specific fields and conventions. So it appears I have to fix this driver myself, which is why I am asking for help with the ROS standards and conventions for stereo imagery.
Thank you, Grant. :)
EDIT: an example message is
    ---
    header: 
      seq: 17
      stamp: 
        secs: 1564101175
        nsecs: 102466534
      frame_id: "nerian_cam"
    left_info: 
      header: 
        seq: 121
        stamp: 
          secs: 1564101175 ...