
Calibrating Kinect with surroundings

asked 2011-05-19 10:58:07 -0600 by Aslund

updated 2016-10-24 09:00:18 -0600 by ngrennan

Hey everyone

I am having a serious problem with my Kinect project, but since many people are working with the Kinect camera, I hope someone has solved this or at least has an idea of how to solve it.

My Kinect camera is able to find a suitable grasping point, but the point is expressed in the optical frame of the Kinect camera. In order to use it with a robot arm, I need to know where the base of the robot arm is located relative to the camera.

I have tried to use markers on the robot's end-effector and combine them with knowledge of the forward kinematics, but apparently there are some serious errors with the markers (ArToolKitPlus), which means the distance on the z-axis is several centimeters off. I am not entirely sure why, but in any case I am looking for an alternative method to calibrate the Kinect camera with respect to the environment, and I hope someone has some ideas.

Regards


7 Answers

answered 2011-06-27 07:49:57 -0600 by mmwise

You might want to check out a submission to the ROS 3D contest: Automatic Calibration of the 3D pose of the Kinect with respect to the robot base. It uses fast ICP to track the 3D pose of the Kinect, then uses a global optimization to retrieve the static transformation. (http://www.ros.org/wiki/openni/Contests/ROS%203D/Automatic%20Calibration%20of%20Extrinsic%20Parameters)

This should help you get an initial guess for the transform between your base link and the Kinect, so that you can start making a URDF.

Then you might want to look at this tutorial: How to learn kinematic models of articulated objects only by using a webcam and a laptop (http://www.ros.org/wiki/articulation_tutorials/Tutorials/ArticulationWebcamDemo). It might help you finish making your URDF.

Another approach might be to use the ar_pose package to determine the rough locations of each joint in the camera frame and then use those measurements to construct a URDF.

Unfortunately, people typically just get out a ruler and measure these things, because it is faster and in general more accurate.
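If you do go the ruler route, the measured numbers can be broadcast as a fixed transform with tf. Below is a minimal Python sketch of that idea; the frame names (kinect_rgb, arm_base_link) and all numbers are placeholder assumptions you would replace with your own measurements. (Equivalently, the same numbers can be passed to a static_transform_publisher in a launch file, as shown in another answer on this page.)

#!/usr/bin/env python
# Sketch: broadcast a hand-measured camera pose as a fixed transform.
# All frame names and numbers below are placeholders, not measured values.
import rospy
import tf

rospy.init_node('measured_kinect_tf')
br = tf.TransformBroadcaster()
rate = rospy.Rate(10)  # republish at 10 Hz, like static_transform_publisher

# Orientation measured/estimated as roll, pitch, yaw in radians.
quat = tf.transformations.quaternion_from_euler(-1.57, 0.0, -1.57)

while not rospy.is_shutdown():
    br.sendTransform((0.40, 0.0, 1.10),  # x, y, z in metres, from the ruler
                     quat,
                     rospy.Time.now(),
                     'kinect_rgb',       # child frame
                     'arm_base_link')    # parent frame
    rate.sleep()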


Comments

You might look at ar_kinect instead of ar_pose, as it is far more accurate on the Kinect, and requires no calibration of camera parameters.
fergs  ( 2011-06-27 07:53:15 -0600 )
answered 2011-05-20 00:57:56 -0600

updated 2011-05-20 01:00:23 -0600

To expand on rbtying's answer:

In ROS, transformations between different coordinate frames, such as the optical frame of the Kinect and the base of the robot arm, are handled by the tf library. You just have to specify the transform between those two frames, and then you can use tf to compute the transformations between any pair of frames.

Long answer:

First, create a URDF file of your arm, upload it to the parameter server (param name "robot_description"), have some node publish the joint_states of the current arm configuration, and run the robot_state_publisher to combine the URDF and joint states into tf messages. Now you should be able to visualize your arm state in RViz.
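For the joint_states part, a minimal publisher could look like the sketch below. The joint names and the zero positions are placeholders: the names must match the joints in your URDF, and the positions would come from your arm's encoders or controller.

#!/usr/bin/env python
# Sketch: publish the arm's current configuration as sensor_msgs/JointState.
import rospy
from sensor_msgs.msg import JointState

rospy.init_node('arm_joint_state_publisher')
pub = rospy.Publisher('joint_states', JointState, queue_size=10)
rate = rospy.Rate(50)

while not rospy.is_shutdown():
    msg = JointState()
    msg.header.stamp = rospy.Time.now()
    # Placeholder joint names; use the exact names from your URDF.
    msg.name = ['shoulder_pan', 'shoulder_lift', 'elbow', 'wrist']
    # Placeholder angles; read these from the arm's encoders instead.
    msg.position = [0.0, 0.0, 0.0, 0.0]
    pub.publish(msg)
    rate.sleep()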

Next, you have to specify the pose of your Kinect relative to the arm. Assuming that this is a fixed transform, you can use the static_transform_publisher. For example, let's take a look at kinect_camera/launch/kinect_with_tf.launch:

<launch>
  <node pkg="tf" type="static_transform_publisher" name="world_to_kinect_rgb" 
    args="0 0 1.10 -1.57 0 -1.57 /world /kinect_rgb 10" />
  <node pkg="tf" type="static_transform_publisher" name="kinect_rgb_to_kinect_depth" 
    args="-0.03 0.0 0.0 0.0 0.0 0.0 /kinect_rgb /kinect_depth 10" />

  <include file="$(find kinect_camera)/launch/kinect_node.launch" />
</launch>

The first node specification gives the transform between the /world frame and the /kinect_rgb frame. Here, you have to replace /world with the base link of your robot arm, say /arm_base_link, if that is the name given in your URDF model. To find the correct parameters for that transform, you can use the tf_publisher_gui instead of the static_transform_publisher. Now you should be able to visualize both your robot model and the Kinect point cloud together in RViz. Fiddle around with the parameters until your robot model lines up with the actual arm visible in the point cloud.

Now, whenever you have a point in the optical frame of the Kinect, you can use tf to just transform that point into the base frame of the robot arm.
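As an illustration, here is a small Python sketch that does exactly that with a tf listener; the frame names (kinect_rgb, arm_base_link) and the example point are assumptions that must match your setup:

#!/usr/bin/env python
# Sketch: transform a grasp point from the Kinect frame into the arm base frame.
import rospy
import tf
from geometry_msgs.msg import PointStamped

rospy.init_node('grasp_point_transformer')
listener = tf.TransformListener()

grasp = PointStamped()
grasp.header.frame_id = 'kinect_rgb'  # frame your grasp detector reports in
grasp.header.stamp = rospy.Time(0)    # time 0 = use the latest available transform
grasp.point.x, grasp.point.y, grasp.point.z = 0.1, 0.0, 0.8  # example point

listener.waitForTransform('arm_base_link', 'kinect_rgb',
                          rospy.Time(0), rospy.Duration(4.0))
grasp_in_base = listener.transformPoint('arm_base_link', grasp)
print(grasp_in_base.point)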

answered 2011-05-20 04:30:33 -0600

I'm not personally familiar with this package and I don't know how applicable/reusable it might be for your problem, but it might be a good starting point: pr2_calibration.

answered 2011-05-19 13:34:48 -0600 by rbtying

Try making a URDF model and a joint publisher for your robot arm; the tf library will then take care of whatever transformations you need.

URDF tutorials

answered 2011-08-03 17:29:15 -0600 by Wim

You are describing a very good approach in your question: put markers on the end-effector of the robot, and observe those markers with the Kinect camera. It sounds like you have some difficulties with the code that recognizes the markers, but that does not make the approach any less valuable. We apply the exact same approach when calibrating the PR2 cameras relative to the PR2 arm. We use checkerboards as markers; OpenCV has good methods to compute the pose of a checkerboard relative to a camera frame.
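As a sketch of the OpenCV side, the usual recipe is findChessboardCorners followed by solvePnP. The board dimensions, square size, camera intrinsics, and input image below are placeholder assumptions; in ROS the intrinsics would come from the camera's camera_info topic.

# Sketch: estimate the checkerboard pose in the camera frame with OpenCV.
import cv2
import numpy as np

pattern_size = (7, 6)   # inner corners per row/column (placeholder)
square_size = 0.025     # square edge length in metres (placeholder)

# Camera intrinsics; placeholder values, use your calibrated ones.
K = np.array([[525.0, 0.0, 319.5],
              [0.0, 525.0, 239.5],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume an already rectified image

# 3D corner positions in the board's own frame (z = 0 plane).
obj_points = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
obj_points[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
obj_points *= square_size

img = cv2.imread('frame.png')  # hypothetical image grabbed from the Kinect
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
found, corners = cv2.findChessboardCorners(gray, pattern_size)
if found:
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
    ok, rvec, tvec = cv2.solvePnP(obj_points, corners, K, dist)
    # rvec/tvec give the board pose in the camera frame; combine this with
    # the arm's forward kinematics to solve for the camera-to-base transform.
    print(rvec.ravel(), tvec.ravel())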


Comments

I also found out after some time that checkerboards and calibration tools are much better than marker tools like ArToolKitPlus, so I ended up doing something similar to what you are describing.
Aslund  ( 2011-08-20 11:39:48 -0600 )
answered 2011-05-20 01:22:48 -0600 by Aslund

Thanks for the very informative post. I do not know if I am bad at explaining my problem, but both your answer and rbtying's assume I know where the robot arm and the Kinect are with respect to some reference frame. But the fact is that I do not; I am at the very basics here. The camera is mounted in some arbitrary place, and the same goes for the robot arm: I have no knowledge of their position and orientation in any frame, but it is exactly this knowledge I seek. I have tried to use markers to figure out the relative position and orientation between the camera and the base of the robot arm, but it has proven unsuccessful. I am therefore asking whether there are other ways of obtaining the relative pose between two frames when there is no prior knowledge of any transformation.


Comments

Ok, now I get it. Just a side remark: it's better to use the "comment" function when replying to a particular answer, because over time the answers will get reordered by relevance, and it will become unclear to whom exactly you are responding.
Martin Günther  ( 2011-05-20 20:52:02 -0600 )
answered 2011-05-19 23:21:31 -0600 by Aslund

Thanks for your reply. I just can't see how it solves my problem: the robot arm itself is not the problem, but rather how to determine the position of the robot arm relative to the camera.

