Hello,
The Xtion depth images are 32FC1 images, but OpenCV can't directly plot those floats (what pixel value should 3.22242 m map to?). Looking at the image_view code, you can see that it converts the image to 8UC3 for plotting:
    void DisparityNodelet::imageCb(const stereo_msgs::DisparityImageConstPtr& msg)
    {
      // Check for common errors in input
      if (msg->min_disparity == 0.0 && msg->max_disparity == 0.0)
      {
        NODELET_ERROR_THROTTLE(30, "Disparity image fields min_disparity and "
                               "max_disparity are not set");
        return;
      }
      if (msg->image.encoding != sensor_msgs::image_encodings::TYPE_32FC1)
      {
        NODELET_ERROR_THROTTLE(30, "Disparity image must be 32-bit floating point "
                               "(encoding '32FC1'), but has encoding '%s'",
                               msg->image.encoding.c_str());
        return;
      }

      // Colormap and display the disparity image
      float min_disparity = msg->min_disparity;
      float max_disparity = msg->max_disparity;
      float multiplier = 255.0f / (max_disparity - min_disparity);
      const cv::Mat_<float> dmat(msg->image.height, msg->image.width,
                                 (float*)&msg->image.data[0], msg->image.step);
      disparity_color_.create(msg->image.height, msg->image.width);

      for (int row = 0; row < disparity_color_.rows; ++row) {
        const float* d = dmat[row];
        for (int col = 0; col < disparity_color_.cols; ++col) {
          int index = (d[col] - min_disparity) * multiplier + 0.5;
          index = std::min(255, std::max(0, index));
          // Fill as BGR
          disparity_color_(row, col)[2] = colormap[3*index + 0];
          disparity_color_(row, col)[1] = colormap[3*index + 1];
          disparity_color_(row, col)[0] = colormap[3*index + 2];
        }
      }
    }
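For reference, the scaling step above can be mirrored in Python with NumPy. This is just a minimal sketch: `depth_to_index` is a name I made up, and you'd pass in your own min/max range instead of the DisparityImage fields.

```python
import numpy as np

def depth_to_index(depth, min_val, max_val):
    """Map a float depth/disparity image to 0-255 colormap indices,
    mirroring the C++ scaling above (scale, round, clamp)."""
    multiplier = 255.0 / (max_val - min_val)
    index = ((depth - min_val) * multiplier + 0.5).astype(np.int32)
    return np.clip(index, 0, 255).astype(np.uint8)
```

You could then feed the result to something like `cv2.applyColorMap(index, cv2.COLORMAP_JET)` to get a BGR image similar to what image_view displays.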
So I guess you could do the same for plotting. In Python, I'm converting it to 16UC1 (for saving):
    import numpy as np
    from cv_bridge import CvBridge

    bridge = CvBridge()
    cv_image_tmp = bridge.imgmsg_to_cv2(msg)         # 32FC1 -> 2-D float32 array
    height, width = cv_image_tmp.shape               # single channel, so only two dims
    max_m_for_kinect = 5.0  # you'll have to check online for the exact value
    cv_image_tmp = np.clip(cv_image_tmp, 0, max_m_for_kinect)  # drop all values > 5.0 m
    scaling_factor = 65535.0 / max_m_for_kinect
    cv_image_tmp = cv_image_tmp * scaling_factor     # scale the image to [0; 65535]
    cv_image = np.array(cv_image_tmp, dtype=np.uint16)  # the 16UC1 cv2 image
Or to an 8UC1:
    import numpy as np
    from cv_bridge import CvBridge

    bridge = CvBridge()
    cv_image_tmp = bridge.imgmsg_to_cv2(msg)         # 32FC1 -> 2-D float32 array
    height, width = cv_image_tmp.shape               # single channel, so only two dims
    max_m_for_kinect = 5.0  # you'll have to check online for the exact value
    cv_image_tmp = np.clip(cv_image_tmp, 0, max_m_for_kinect)  # drop all values > 5.0 m
    scaling_factor = 255.0 / max_m_for_kinect
    cv_image_tmp = cv_image_tmp * scaling_factor     # scale the image to [0; 255]
    cv_image = np.array(cv_image_tmp, dtype=np.uint8)  # the 8UC1 cv2 image
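Both snippets follow the same clip-and-rescale pattern, so you could fold them into one helper. A sketch: `depth_to_uint` is my own name, and the NaN handling is an addition (real Xtion/Kinect depth images often contain NaNs for invalid pixels):

```python
import numpy as np

def depth_to_uint(depth_m, max_range_m=5.0, bits=16):
    """Clip a 32FC1 depth image (metres) to [0, max_range_m] and
    rescale it to the full uint16 or uint8 range, as in the
    snippets above. max_range_m depends on your sensor."""
    dtype = np.uint16 if bits == 16 else np.uint8
    full_scale = np.iinfo(dtype).max
    # NaNs (invalid depth) become 0 before clipping
    clipped = np.clip(np.nan_to_num(depth_m), 0.0, max_range_m)
    return (clipped * (full_scale / max_range_m)).astype(dtype)
```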