ROS Indigo / Xtion: getting depth from pixel coordinates


Hi,

Given the x,y pixel coordinates of a point in camera/rgb/image_raw, I am trying to extract the corresponding depth from camera/depth_registered/image_raw. I start the Xtion via "roslaunch openni2_launch openni2.launch".

Running "rosrun rqt_reconfigure rqt_reconfigure" and checking camera/driver confirms depth_registration is active. Also, displaying the /camera/depth_registered/points in rviz shows something that looks nice.

It seems to me that when using an Xtion Pro Live there is no need for calibration, so I did not perform any.

It also seems to me that once registration is done, pixel coordinates in the depth image and in the rgb image correspond, so I do the following in Python:

in the callback for the rgb image (bridge is an instance of CvBridge):

cv_image = bridge.imgmsg_to_cv2(image, image.encoding)
# code for getting x,y coordinate of a pixel of interest in the image

I run some code that displays the image and shows the pixel x,y:

cv2.circle(cv_image,(x,y),2,(255,0,0),-1)

This confirms x,y are correct; in this case they are at the center of a colored ball held in front of the camera.
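For reference, the rgb side of the node looks roughly like this (the ball-detection code is left out and replaced by a placeholder, and the node/callback names are simplified):

import rospy
import cv2
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def rgb_callback(image):
    cv_image = bridge.imgmsg_to_cv2(image, image.encoding)
    # placeholder: the real code finds the center of the colored ball
    x, y = cv_image.shape[1] // 2, cv_image.shape[0] // 2
    cv2.circle(cv_image, (x, y), 2, (255, 0, 0), -1)
    cv2.imshow("rgb", cv_image)
    cv2.waitKey(1)

rospy.init_node("ball_depth")
rospy.Subscriber("camera/rgb/image_raw", Image, rgb_callback)
rospy.spin()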

In the callback for the depth image:

image_cv = bridge.imgmsg_to_cv2(depth_image, depth_image.encoding)
depth_image = np.squeeze(np.array(image_cv, dtype=np.float32))
depth = float(depth_image[x][y])

but I just get "0.0" for the depth, no matter where the ball is in the field of vision.

Is there anything I am doing incorrectly?
