Different encodings of depth image for real & simulated Kinect
Hi all,
I've got a Gazebo simulation of a Kinect camera, where I'm subscribing to the ROS topic /camera/depth/image_raw, which is of type sensor_msgs/Image. I also have a real Kinect for Xbox 360 camera connected to my PC via USB, which publishes the same /camera/depth/image_raw topic.
Unfortunately, these two messages are not exactly the same:
Message from the real camera:
header:
seq: 0
stamp:
secs: 1527861646
nsecs: 997300811
frame_id: "camera_depth_optical_frame"
height: 480
width: 640
encoding: "16UC1"
is_bigendian: 0
step: 1280
data: [0, 0, ... ] # array is 614,400 bytes long, i.e. 2 bytes per pixel
Message from the simulated camera:
header:
seq: 0
stamp:
secs: 27
nsecs: 854000000
frame_id: "camera_link"
height: 480
width: 640
encoding: "32FC1"
is_bigendian: 0
step: 2560
data: [9, 51, 243, ... ] # array is 1,228,800 bytes long, i.e. 4 bytes per pixel
The important difference here is the encoding, which results in different lengths of the data array. I don't know why this is the case, but nevertheless I would like to have the same encoding for both the real and the simulated depth image (I don't really care which one). How can I convert one encoding into the other? Also, is it possible to convert the data array so that it contains height * width entries, where each entry corresponds to the distance for that pixel in mm (or metres)?
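For reference, once the image is available as a numpy array (e.g. via cv_bridge's imgmsg_to_cv2 with desired_encoding='passthrough'), the conversion between the two encodings is just a scale and a cast. A minimal sketch in plain numpy, assuming the 32FC1 image holds depth in metres and the 16UC1 image in millimetres (the usual ROS depth-image convention); the function name and sample values are illustrative, not from any driver:

```python
import numpy as np

def depth_32fc1_to_16uc1(depth_m):
    # 32FC1: float32 depth in metres; 16UC1: uint16 depth in millimetres.
    # Invalid pixels (NaN/inf in 32FC1) map to 0, the 16UC1 "no reading" value.
    depth_mm = np.nan_to_num(depth_m.astype(np.float32),
                             nan=0.0, posinf=0.0, neginf=0.0) * 1000.0
    return np.clip(depth_mm, 0, 65535).astype(np.uint16)

# Hypothetical 2x2 simulated depth image in metres:
sim = np.array([[0.5, 1.25], [np.nan, 2.0]], dtype=np.float32)
real_style = depth_32fc1_to_16uc1(sim)  # [[500, 1250], [0, 2000]] mm
```

This also answers the second part of the question: each entry of the resulting height * width array is directly the distance for that pixel in millimetres.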
I'm using the freenect drivers for the real Kinect camera (sudo apt-get install libfreenect-dev, sudo apt-get install ros-indigo-freenect-launch), because the OpenNI drivers didn't work for me: I'm on an Ubuntu VM, and OpenNI has problems with VMs.
Thanks & Cheers!
http://wiki.ros.org/cv_bridge/Tutoria...
Try this to change the encoding.
I think the main thing you need to change is the frame_id, not the encoding.
Thanks for your advice! Sorry for responding late, I wasn't around for a while. cv_bridge did work more or less, but I couldn't convert from 32FC1 to 16UC1, as this results in a 480*640 array of only 0's. Vice versa it was OK, but the value ranges differed (~0.5 from simulated vs. ~500 from real).
So I simply normalized both arrays to be between 0 and 255 (I will post my code when it's working properly). But why do I need to change the frame_id and not the encoding, and how can this be done? Cheers
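(The ~0.5 vs. ~500 range difference is consistent with metres vs. millimetres, which would also explain the all-zero result: casting sub-metre float values straight to uint16 without scaling truncates them to 0.) The normalisation described above could be sketched like this in plain numpy; this is an illustrative sketch, not the poster's actual code:

```python
import numpy as np

def normalize_depth_to_8bit(depth):
    # Works for both encodings: 16UC1 (uint16, mm) and 32FC1 (float32, m).
    d = np.nan_to_num(np.asarray(depth, dtype=np.float64),
                      nan=0.0, posinf=0.0, neginf=0.0)
    span = d.max() - d.min()
    if span == 0:
        return np.zeros(d.shape, dtype=np.uint8)  # flat image -> all zeros
    return ((d - d.min()) / span * 255.0).astype(np.uint8)

# The same scene in both encodings normalises to identical uint8 arrays:
real_mm = np.array([[500, 1000], [1500, 2000]], dtype=np.uint16)
sim_m = np.array([[0.5, 1.0], [1.5, 2.0]], dtype=np.float32)
```

Note that min-max normalisation discards the absolute scale, so this is fine for visual comparison but not if the downstream code needs metric distances.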