
How to get real-world width & height of Kinect video patch?

asked 2011-05-30 05:31:59 -0600 by Pi Robot

updated 2016-10-24 09:10:07 -0600 by ngrennan

Hello,

I am trying to compute the real-life size (width and height in meters) of a face patch that I have detected using a Kinect and OpenCV's Haar detector. I know the pixel dimensions of the patch and I know the distance to the points in the patch. My guess on how to get the approximate real-life dimensions of the patch is to simply multiply the average distance to the patch by the arc lengths subtended by the width and height of the patch. To get the arc lengths in radians, I use the Kinect's field of view, which is around 57.8 degrees for the IR image and 62.7 degrees for the RGB image (http://www.ros.org/wiki/kinect_calibration/technical).
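In code, my back-of-the-envelope approach would look something like this (just a sketch; the patch width and depth values are made-up examples, and 640 px / 62.7 degrees are the RGB image width and FOV mentioned above):

    import math

    FOV_H = math.radians(62.7)   # horizontal FOV of the RGB camera (see link above)
    IMG_W = 640                  # image width in pixels

    def approx_size(patch_px, mean_depth):
        # Angle subtended by the patch, then arc length = depth * angle.
        angle = patch_px * FOV_H / IMG_W
        return mean_depth * angle

    print(approx_size(120, 1.5))  # ~0.31 m for a 120 px wide patch at 1.5 m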

Is this the best way to do this, or is there a way I can use the calibration data from the Kinect's camera_info topic to do the same thing more directly?

Thanks!

patrick

EDIT: I am using Python, which limits my use of PCL for this task.


2 Answers


answered 2011-05-30 06:01:44 -0600 by Ivan

If I understood the question correctly, then there's a much cleaner solution:

The openni_camera driver outputs the data in two formats - as an image, and as a PointCloud. The point cloud is structured like an image - it's a 640x480 array, and each item in the array has RGB color as well as (x, y, z) coordinates. You can perform your detection on the image, and then look up the corresponding pixels in the PointCloud. The points in the pointcloud have the real-world values for (x, y, z) in meters, so you can get the size from there, without going through trigonometric computations.
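For example, here is a minimal Python sketch of that lookup (assuming ROS 1 with the sensor_msgs.point_cloud2 helpers; the topic name matches the question, and the pixel coordinates are placeholders from a hypothetical detection):

    import math
    import rospy
    from sensor_msgs.msg import PointCloud2
    import sensor_msgs.point_cloud2 as pc2

    def patch_width(cloud, v, u_left, u_right):
        # Look up the 3-D points under two pixels of the structured cloud;
        # uvs takes (column, row) pairs. NaN coordinates mean no depth
        # reading at that pixel.
        pts = list(pc2.read_points(cloud, field_names=("x", "y", "z"),
                                   skip_nans=False,
                                   uvs=[(u_left, v), (u_right, v)]))
        (x1, y1, z1), (x2, y2, z2) = pts
        # Euclidean distance between the edge points = patch width in meters.
        return math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2 + (z2 - z1) ** 2)

    def callback(cloud):
        # Placeholder face-patch bounds; in practice these come from the detector.
        width = patch_width(cloud, v=240, u_left=280, u_right=360)
        rospy.loginfo("patch width: %.3f m", width)

    rospy.init_node("patch_size")
    rospy.Subscriber("/camera/rgb/points", PointCloud2, callback)
    rospy.spin()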


Comments

Thanks Ivan. I'm using Python for my ROS node and so I'm not sure I have access to the necessary PCL structures to do it this way. For example, after subscribing to /camera/rgb/points and setting up a callback, how would I extract the z coordinate from a point based on its x-y coordinates in the image plane?
Pi Robot  ( 2011-05-30 14:31:17 -0600 )
And a followup question: I am using /camera/depth/image to get my distance values for an x-y point in /camera/rgb/image_color. Do you happen to know if these two images are registered?
Pi Robot  ( 2011-05-30 14:34:54 -0600 )

answered 2011-05-30 09:06:24 -0600 by Felix

Ivan's answer is the easiest way. However, if you have CPU limitations, it will be more efficient to use the camera model and the depth value from the depth image to back-project the three points you need. Don't forget to rectify the points first.
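A minimal Python sketch of this approach, assuming the image_geometry package; the topic name, pixel coordinates, and depth value are placeholders:

    import math
    import rospy
    from image_geometry import PinholeCameraModel
    from sensor_msgs.msg import CameraInfo

    rospy.init_node("backproject_patch")
    # Assumption: the RGB camera publishes its calibration on this topic.
    info = rospy.wait_for_message("/camera/rgb/camera_info", CameraInfo)
    model = PinholeCameraModel()
    model.fromCameraInfo(info)

    def backproject(u, v, depth):
        # Rectify the raw pixel first, as noted above.
        u_r, v_r = model.rectifyPoint((u, v))
        # The Python API returns a unit-norm ray through the pixel; scale it
        # until its z component matches the value from the depth image.
        ray = model.projectPixelTo3dRay((u_r, v_r))
        s = depth / ray[2]
        return (ray[0] * s, ray[1] * s, ray[2] * s)

    # Patch width: back-project the left and right edge pixels (placeholder
    # values) at the measured depth and take the Euclidean distance.
    p1 = backproject(280, 240, 1.5)
    p2 = backproject(360, 240, 1.5)
    width = math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))
    print("patch width: %.3f m" % width)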


Comments

Thanks Felix. Since I'm using Python (I should have mentioned that in my question), I'll need to look into how to use these methods in the Python image_geometry API. In the meantime, do you know the answer to my second comment above on Ivan's answer? (Just in case Ivan does not.) Thanks!
Pi Robot  ( 2011-05-30 14:37:28 -0600 )

Stats

Asked: 2011-05-30 05:31:59 -0600

Seen: 1,516 times

Last updated: May 31 '11