I managed to get it done with cv_bridge, as suggested by chrissunny94. As I mentioned in the comments to my answer, it only worked to convert from 16UC1 to 32FC1 encoding, not vice versa. Because the value ranges after the conversion differed greatly (despite the cameras being at roughly the same distance in reality and in the simulation), I simply normalized both converted image arrays (the simulated and the real one) to the range 0 to 255. If anyone knows a better solution (or why they have different value ranges at all), please let me know. Anyway, this is what I've implemented in my Python program:

import rospy
import cv2
import numpy as np
from cv_bridge import CvBridge, CvBridgeError
from sensor_msgs.msg import Image

bridge = CvBridge()
data = None
cv_image = None
while data is None or cv_image is None:
    try:
        data = rospy.wait_for_message('/camera/depth/image_raw', Image, timeout=5)
        # encoding of simulated depth image: 32FC1, encoding of real depth image: 16UC1
        # converting & normalizing image, so that simulated & real depth image have the same encoding & value range
        # see http://wiki.ros.org/cv_bridge/Tutorials/ConvertingBetweenROSImagesAndOpenCVImagesPython
        cv_image = bridge.imgmsg_to_cv2(data, desired_encoding="32FC1")
    except rospy.exceptions.ROSException as e:
        print("failed to read depth image from simulated kinect camera")
        print(e)
    except CvBridgeError as e:
        print("failed to convert depth image into another encoding")
        print(e)
# while
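As an aside on the open question of why the value ranges differ: a likely explanation (an assumption based on the REP 118 depth-image conventions, not something confirmed in this thread) is that 16UC1 depth images are conventionally in millimetres while 32FC1 images are in metres. If that holds, dividing the 16UC1 values by 1000 would put both images on the same physical scale without any normalization; the array below is a made-up sample frame for illustration:

```python
import numpy as np

# hypothetical 16UC1 depth frame; assumption (REP 118): values are millimetres
depth_mm = np.array([[500, 1000], [1500, 2000]], dtype=np.uint16)

# convert to the 32FC1 convention (metres) instead of normalizing
depth_m = depth_mm.astype(np.float32) / 1000.0
```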

image = np.array(cv_image, dtype=np.float32)
print(type(image)) # <type 'numpy.ndarray'>
print(image.shape) # (480, 640)
# normalizing image
cv2.normalize(image, image, 0, 255, cv2.NORM_MINMAX)
# round to integer values
image = np.round(image).astype(np.uint8)
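For reference, the in-place cv2.normalize call above amounts to the following min-max scaling, sketched here in plain NumPy (the function name and the sample arrays are made up for illustration). It shows why a uint16 frame and a float32 frame with very different raw ranges end up directly comparable after normalizing:

```python
import numpy as np

def normalize_depth(depth, lo=0, hi=255):
    """Min-max normalize a depth array to [lo, hi] and round to uint8,
    mirroring cv2.normalize(..., cv2.NORM_MINMAX) followed by rounding."""
    d = depth.astype(np.float32)
    d_min, d_max = d.min(), d.max()
    if d_max == d_min:  # guard against division by zero on a flat image
        return np.full(d.shape, lo, dtype=np.uint8)
    scaled = (d - d_min) / (d_max - d_min) * (hi - lo) + lo
    return np.round(scaled).astype(np.uint8)

# hypothetical sample frames covering the same scene:
# real depth as uint16 and simulated depth as float32
real = np.array([[500, 1000], [1500, 2000]], dtype=np.uint16)
sim = np.array([[0.5, 1.0], [1.5, 2.0]], dtype=np.float32)

print(normalize_depth(real))  # both frames map to the same 0..255 values
print(normalize_depth(sim))
```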