The projection matrix P you get is computed by the OpenCV method getOptimalNewCameraMatrix()
used in the camera calibration node.
This method controls the output of the OpenCV undistortion method initUndistortRectifyMap(). Normally, after remap(), some pixels would fall outside your image boundaries and some regions of the image would be invalid (black), depending on how strongly your lens distortion is corrected.
The method getOptimalNewCameraMatrix() takes the intrinsic matrix, the distortion coefficients, and an alpha value that you specify.
This gives you a new camera matrix (called P by the calibration node) that can be passed to initUndistortRectifyMap(), which computes the appropriate remap tables for your chosen alpha value.
You can see the differences in this image:
http://i.imgur.com/bOZ4fKs.jpg
The left one is with alpha = 0, the right one with alpha = 1 (note that this is not a stereo pair; both are the same image, only "viewed" through a different camera matrix).
The camera calibration node does exactly that, with alpha = 0, and stores the new camera matrix as matrix P in the CameraInfo message.