Kinect calibration: disparity or depth?
Reading the technical description in the ROS wiki (http://www.ros.org/wiki/kinect_calibration/technical), one would think that Kinect devices produce disparity images (with a specific normalization) and that effort should be made to estimate the baseline separation between the IR camera and the IR projector, plus a constant disparity offset. However, on inspecting the ROS code, it appears to me that the driver outputs a depth map rather than a disparity image, and that no baseline/offset calibration is used.
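For reference, the model I took away from that wiki page looks roughly like the sketch below (the numeric defaults for the baseline b, focal length f, and offset doff are only illustrative placeholders, not values read from the driver):

    # Sketch of the disparity-based model described on the wiki page.
    def depth_from_raw_disparity(kd, b=0.075, f=580.0, doff=1090.0):
        # Normalized disparity: d = (1/8) * (doff - kd),
        # then metric depth: z = b * f / d,
        # with b = IR camera/projector baseline (m), f = IR focal length (px).
        d = (doff - kd) / 8.0
        return b * f / d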
Is ROS Kinect calibration currently based only on a zero-distortion calibration of the IR camera (i.e. assuming the depth is measured accurately and only estimating the real-world X and Y), or am I missing something?
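In other words, if the driver already hands out metric depth, the only remaining step would seem to be the usual pinhole back-projection with the IR intrinsics, something like:

    def point_from_depth(u, v, z, fx, fy, cx, cy):
        # Back-project pixel (u, v) with measured depth z to camera-frame
        # X, Y, Z, assuming an undistorted pinhole model for the IR camera.
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        return x, y, z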
Thanks.