2D point to 3D transformation (using Kinect) with OpenCV

asked 2015-06-08 14:30:10 -0600 by fei

updated 2015-10-15 14:45:01 -0600

I am working on a 2D point to 3D transformation, which is new to me. Could anyone provide an outline of how to do this, ideally with an explanation? Thank you!


I solved this problem:
1. Read the RGB image and the depth image from the Kinect approximately synchronized, using http://wiki.ros.org/message_filters/ApproximateTime.
2. Back-project the pixel (x, y) from the RGB image together with the depth z from the depth image into a 3D point (point.x, point.y, point.z) using the pinhole camera model; a sketch follows below.
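
For reference, a minimal C++ sketch of both steps (untested; the topic names /camera/rgb/image_rect_color and /camera/depth_registered/image_rect and the intrinsic values are assumptions, not taken from the original post, and should be read from the camera_info topic on a real system):

```
// Minimal sketch (untested), assuming typical openni_launch topic names and
// a depth image registered to the RGB frame in 32FC1 (meters). The intrinsics
// fx, fy, cx, cy are placeholder values; read the real ones from camera_info.
#include <cmath>
#include <ros/ros.h>
#include <boost/bind.hpp>
#include <message_filters/subscriber.h>
#include <message_filters/synchronizer.h>
#include <message_filters/sync_policies/approximate_time.h>
#include <sensor_msgs/Image.h>
#include <cv_bridge/cv_bridge.h>

typedef message_filters::sync_policies::ApproximateTime<
    sensor_msgs::Image, sensor_msgs::Image> SyncPolicy;

// Placeholder intrinsics (assumption): replace with values from camera_info.
static const double fx = 525.0, fy = 525.0, cx = 319.5, cy = 239.5;

void callback(const sensor_msgs::ImageConstPtr& rgb,
              const sensor_msgs::ImageConstPtr& depth)
{
  // Depth image as a float matrix in meters.
  cv_bridge::CvImageConstPtr d = cv_bridge::toCvShare(depth, "32FC1");

  int u = 320, v = 240;                // example pixel in the RGB image
  float z = d->image.at<float>(v, u);  // depth at (u, v); row = v, col = u
  if (!std::isfinite(z) || z <= 0.0f)
    return;                            // no valid depth reading at this pixel

  // Pinhole back-projection: (u, v, z) -> (x, y, z) in the camera frame.
  double x = (u - cx) * z / fx;
  double y = (v - cy) * z / fy;
  ROS_INFO("3D point: (%f, %f, %f)", x, y, z);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "pixel_to_3d");
  ros::NodeHandle nh;
  message_filters::Subscriber<sensor_msgs::Image>
      rgb_sub(nh, "/camera/rgb/image_rect_color", 1);
  message_filters::Subscriber<sensor_msgs::Image>
      depth_sub(nh, "/camera/depth_registered/image_rect", 1);
  message_filters::Synchronizer<SyncPolicy> sync(SyncPolicy(10), rgb_sub, depth_sub);
  sync.registerCallback(boost::bind(&callback, _1, _2));
  ros::spin();
  return 0;
}
```

The key formula is the inverse pinhole projection, x = (u - cx) * z / fx and y = (v - cy) * z / fy, with z read directly from the depth image.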


2 Answers


answered 2015-06-10 15:13:15 -0600

I recommend taking a look at the /tf wiki page and learning how ROS handles transforms; a minimal lookup sketch follows after the reading list. You can also take a look at this paper to learn a bit more detail than the wiki page provides.

Finally, you can take a look at Introduction to Autonomous Mobile Robots by Roland Siegwart and Illah R. Nourbakhsh.
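
For reference, a minimal sketch of the basic tf lookup the wiki page describes (untested; the frame names /base_link and /camera_link are assumptions):

```
// Minimal sketch (untested): look up the latest transform between two frames
// with tf. Frame names are assumptions; use the frames your robot publishes.
#include <ros/ros.h>
#include <tf/transform_listener.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "tf_lookup");
  ros::NodeHandle nh;
  tf::TransformListener listener;
  ros::Rate rate(10.0);
  while (nh.ok())
  {
    tf::StampedTransform transform;
    try
    {
      // ros::Time(0) asks for the latest available transform.
      listener.lookupTransform("/base_link", "/camera_link",
                               ros::Time(0), transform);
      ROS_INFO("camera at (%f, %f, %f) in base_link",
               transform.getOrigin().x(),
               transform.getOrigin().y(),
               transform.getOrigin().z());
    }
    catch (tf::TransformException& ex)
    {
      ROS_WARN("%s", ex.what());
    }
    rate.sleep();
  }
  return 0;
}
```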


answered 2015-06-10 13:44:43 -0600

updated 2015-06-10 16:26:38 -0600

Can you be a little more specific? If you are already using the Kinect, then you already have a 3D point cloud.

Edit:

Performing calibration in ROS is straightforward with the camera_calibration package. The idea is to match each pixel of the VGA (RGB) camera to the corresponding depth point of the Kinect's IR (3D) camera. You can read more here.

You can also perform extrinsic calibration by following this tutorial. In general, look at the OpenNI tutorials to understand how to proceed; a sketch of the pixel-to-point lookup on a registered cloud follows below.
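
If depth registration is enabled (e.g., depth_registration:=true with openni_launch), the organized point cloud is aligned with the RGB image, so the 3D point for an RGB pixel is a direct array lookup. A minimal sketch (untested), assuming the standard /camera/depth_registered/points topic:

```
// Minimal sketch (untested): with depth registration enabled, the organized
// point cloud is aligned with the RGB image, so the 3D point for RGB pixel
// (u, v) is simply cloud->at(u, v). Topic name is an assumption.
#include <ros/ros.h>
#include <pcl_ros/point_cloud.h>
#include <pcl/point_types.h>

typedef pcl::PointCloud<pcl::PointXYZRGB> Cloud;

void cloudCallback(const Cloud::ConstPtr& cloud)
{
  int u = 320, v = 240;                         // example RGB pixel
  const pcl::PointXYZRGB& p = cloud->at(u, v);  // at(column, row), organized cloud
  ROS_INFO("pixel (%d, %d) -> (%f, %f, %f)", u, v, p.x, p.y, p.z);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "lookup_3d");
  ros::NodeHandle nh;
  ros::Subscriber sub =
      nh.subscribe("/camera/depth_registered/points", 1, cloudCallback);
  ros::spin();
  return 0;
}
```

Note that the cloud must be organized (width x height matching the image) for the (u, v) lookup to work; otherwise fall back to the manual back-projection shown in the question's own answer above.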


Comments

I am doing object tracking. I work in 2D because running the energy function in 2D is faster than running it in 3D, but I want to use 3D for the initialization. Thank you.

fei (2015-06-10 15:18:43 -0600)

Well, in that case, all you need to do is perform registration. That way you have the mapping between the 2D VGA camera of the Kinect and the 3D point cloud of the IR camera.

Progtologist (2015-06-10 15:39:30 -0600)

How could I map a 2D point to 3D? There is a lot of information I would need, for example the principal point, the focal length, and the radial and tangential distortion. Are they the same for all Kinects? Thank you.

fei (2015-06-10 15:59:56 -0600)
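
On the intrinsics question: the values are close across Kinects but not identical, so rather than hard-coding them, read them from the sensor_msgs/CameraInfo messages the driver publishes (the K matrix holds fx, fy, cx, cy; D holds the distortion coefficients). A minimal sketch (untested), assuming the standard /camera/rgb/camera_info topic:

```
// Minimal sketch (untested): read the camera intrinsics from CameraInfo
// instead of hard-coding them. K = [fx 0 cx; 0 fy cy; 0 0 1]; D holds the
// distortion coefficients. Topic name is an assumption.
#include <ros/ros.h>
#include <sensor_msgs/CameraInfo.h>

void infoCallback(const sensor_msgs::CameraInfoConstPtr& info)
{
  double fx = info->K[0], fy = info->K[4];  // focal lengths in pixels
  double cx = info->K[2], cy = info->K[5];  // principal point
  ROS_INFO("fx=%f fy=%f cx=%f cy=%f, %zu distortion coeffs",
           fx, fy, cx, cy, info->D.size());
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "read_intrinsics");
  ros::NodeHandle nh;
  ros::Subscriber sub =
      nh.subscribe("/camera/rgb/camera_info", 1, infoCallback);
  ros::spin();
  return 0;
}
```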
