How do I perform a good calibration of the Kinect?
Hello,
I am writing an app to merge point clouds from a Kinect. The pipeline itself seems to work (I align the clouds with Iterative Closest Point and match features such as SURF), but I need to improve accuracy: the RGB camera calibration is not good enough to produce clean point clouds. By that I mean I can see parts of objects in places where they shouldn't be, e.g. the edge of a box appearing on the wall. This is what makes my results bad.
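For context, the core rigid-alignment step inside ICP can be sketched as follows. This is a minimal NumPy sketch with toy data, not my actual code: given matched point pairs, one ICP iteration solves for the rotation and translation via SVD (the Kabsch algorithm).

```python
import numpy as np

def rigid_align(src, dst):
    """Best-fit rotation R and translation t mapping src onto dst
    (Kabsch/Umeyama; the inner step of one ICP iteration)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Toy check: recover a known rotation about z plus a translation.
rng = np.random.default_rng(0)
src = rng.standard_normal((100, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.1, -0.2, 0.5])
dst = src @ R_true.T + t_true
R, t = rigid_align(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

The point of this sketch: ICP itself converges fine on noise-free correspondences like these, so if the merged clouds still show ghost geometry, the error is more likely in the camera calibration and depth registration feeding it than in the alignment step.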
I have performed several calibrations with the camera_calibration package in ROS Fuerte to solve the problem, but I get very different results even though the calibration is done the same way each time. Some resulting camera matrices:
[532.486920156757, 0, 313.943582352682, 0, 532.067717610884, 259.017556441565, 0, 0, 1]
[512.878303, 0, 310.799036, 0, 513.067828, 256.711335, 0, 0, 1]
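For reference, those 9 numbers are the row-major 3x3 intrinsic matrix K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]. A quick NumPy check on the two runs above shows how far apart the focal lengths actually are:

```python
import numpy as np

# The two calibration results above, reshaped into 3x3 intrinsic matrices.
K1 = np.array([532.486920, 0, 313.943582,
               0, 532.067718, 259.017556,
               0, 0, 1]).reshape(3, 3)
K2 = np.array([512.878303, 0, 310.799036,
               0, 513.067828, 256.711335,
               0, 0, 1]).reshape(3, 3)

fx1, fy1 = K1[0, 0], K1[1, 1]
fx2, fy2 = K2[0, 0], K2[1, 1]
print(f"fx spread: {abs(fx1 - fx2) / fx1:.1%}")
print(f"fy spread: {abs(fy1 - fy2) / fy1:.1%}")
```

That is roughly a 3-4% spread in focal length between runs, which at 2 m of depth translates to several centimetres of reprojection error, consistent with the box-edge-on-the-wall artifacts I am seeing.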
These results make me think I am doing something wrong in the calibration process, but I follow the tutorial http://www.ros.org/wiki/openni_launch/Tutorials/IntrinsicCalibration?action=show&redirect=openni_camera%2Fcalibration exactly, so I don't know what my mistake is.
Besides, there are some problems with openni_launch, as you can see in this post: http://answers.ros.org/question/53706/registered-depth-image-is-black/ I solved that problem, but now I am wondering whether the depth_registration variable should be on or off.
Does anyone know why this happens? Could you give me any advice?
Thank you very much! Antonio.