Using camera calibration results in OpenCV
I've calibrated the Kinect intrinsics (both depth and RGB) using the camera_calibration package and then saved a number of RGB images (the calibration file was detected successfully; I subscribed to kinect1/rgb/image_color). Do I need to load the distortion coefficients and camera matrix into the OpenCV application that will process these images? Or has all the required undistortion already been applied via the calibration file, so that I should pass a vector of zeros as the distortion coefficients and a 3x3 matrix of zeros with a 1 at position (2,2) as the camera matrix? Specifically, I want to use solvePnP or solvePnPRansac to estimate object pose in my OpenCV code (I'm also not sure which of the two to use, so feel free to comment on that as well).
The launch file I use to publish the images:
<launch>
  <arg name="camera1_id" default="A22596V03345310A" />
  <arg name="camera2_id" default="A22596V03475310A" />
  <arg name="camera1_name" default="kinect1" />
  <arg name="camera2_name" default="kinect2" />
  <arg name="depth_registration" default="true" />

  <include file="$(find freenect_launch)/launch/freenect.launch">
    <arg name="device_id" value="$(arg camera1_id)" />
    <arg name="camera" value="$(arg camera1_name)" />
    <arg name="depth_registration" value="$(arg depth_registration)" />
  </include>

  <include file="$(find freenect_launch)/launch/freenect.launch">
    <arg name="device_id" value="$(arg camera2_id)" />
    <arg name="camera" value="$(arg camera2_name)" />
    <arg name="depth_registration" value="$(arg depth_registration)" />
  </include>
</launch>