
Limiting depth kinect sees in rgbdslam?

asked 2011-04-21 06:38:05 -0600 by Shark

updated 2014-01-28 17:09:34 -0600 by ngrennan

Is there any way I can limit the depth to which I record data in rgbdslam? What I mean by this is, I don't want to accumulate points that are far away from the camera.

My main goal is to use rgbdslam to scan objects, but that currently involves a lot of manual post-processing to isolate the object, and the resulting files are very large. It would be nice if I could reduce both the manual work and the size of these point clouds.

Is there any way this can be done while recording data online? Or is the only way to process the pcd file afterwards and remove points that are far away (i.e., that have a large (x^2 + y^2 + z^2)^0.5 value)?


6 Answers

1

answered 2011-04-22 16:24:56 -0600 by fergs

updated 2011-04-22 16:26:42 -0600

How about adding a pcl_ros/PassThrough nodelet? I haven't used rgbdslam since it first came out, but you should be able to remap rgbdslam's input topic to the output of the PassThrough filter, which can be set up to limit the Z-range (depth) of the cloud.

There is a very brief tutorial on PassThrough filter on the wiki: http://www.ros.org/wiki/pcl_ros/Tutorials/PassThrough%20filtering
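For reference, loading a pcl_ros PassThrough nodelet from a launch file might look roughly like this. The topic names are illustrative and depend on your Kinect driver and rgbdslam configuration; the filter parameters are the standard pcl_ros PassThrough ones:

```xml
<launch>
  <!-- Nodelet manager to host the filter -->
  <node pkg="nodelet" type="nodelet" name="pcl_manager" args="manager" output="screen"/>

  <!-- PassThrough filter: keep only points with 0.0 < z < 1.5 m -->
  <node pkg="nodelet" type="nodelet" name="passthrough"
        args="load pcl/PassThrough pcl_manager" output="screen">
    <remap from="~input"  to="/camera/depth_registered/points"/>
    <remap from="~output" to="/camera/depth_registered/points_filtered"/>
    <param name="filter_field_name" value="z"/>
    <param name="filter_limit_min"  value="0.0"/>
    <param name="filter_limit_max"  value="1.5"/>
  </node>
</launch>
```

rgbdslam's cloud input would then be remapped to /camera/depth_registered/points_filtered instead of the raw cloud.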


answered 2011-04-23 15:44:29 -0600 by Shark

Thanks Raph and Fergs, I'll look at those and see what happens.

I just updated to the most recent version of rgbdslam and I noticed there is already a "depth limiting" parameter in the new GUI. Is this exactly what I'm looking for?

-Shark


answered 2011-04-22 11:59:00 -0600 by Shark

That's kind of unfamiliar territory for me. Are there any specific tutorials that would be best for learning something like this?


answered 2011-04-23 00:09:13 -0600 by raphael favier

updated 2011-04-23 00:14:58 -0600

Hello Shark,

You can use a pass-through filter for that.

Check the nodelets in pcl; I remember using one for exactly that. You can find it in the filters folder of pcl_ros.

You can use it to filter out points that are not between distances A and B on a particular axis. If you wish to filter on more than one axis, simply pipe several nodelets together, each parameterized on a different axis.
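Piping two such nodelets in a launch file might look like the sketch below, where the first filter's output feeds the second filter's input (topic names and limits are illustrative; the parameters are the standard pcl_ros PassThrough ones):

```xml
<launch>
  <node pkg="nodelet" type="nodelet" name="pcl_manager" args="manager"/>

  <!-- First stage: limit depth along z -->
  <node pkg="nodelet" type="nodelet" name="pass_z"
        args="load pcl/PassThrough pcl_manager">
    <remap from="~input"  to="/camera/depth_registered/points"/>
    <remap from="~output" to="/cloud_z"/>
    <param name="filter_field_name" value="z"/>
    <param name="filter_limit_min"  value="0.0"/>
    <param name="filter_limit_max"  value="1.5"/>
  </node>

  <!-- Second stage: limit width along x, fed by the first stage -->
  <node pkg="nodelet" type="nodelet" name="pass_x"
        args="load pcl/PassThrough pcl_manager">
    <remap from="~input"  to="/cloud_z"/>
    <remap from="~output" to="/cloud_xz"/>
    <param name="filter_field_name" value="x"/>
    <param name="filter_limit_min"  value="-0.5"/>
    <param name="filter_limit_max"  value="0.5"/>
  </node>
</launch>
```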

If you feel more like doing it in a normal node, check this complete tutorial here.

Hope it helps

Raph


answered 2011-04-25 23:50:00 -0600

updated 2011-04-26 00:42:36 -0600

> I just updated to the most recent version of rgbdslam and I noticed there is already a "depth limiting" parameter in the new GUI. Is this exactly what I'm looking for?

Depends. It does prevent the storage of far points in the combined point cloud file. It doesn't prevent capturing those points and using them for feature matching, though.


answered 2011-04-21 07:46:10 -0600 by Mac

There may be a better (read: faster) way to do this, but:

You could write a republishing node that sits between your Kinect driver and the rgbdslam node: it listens for point clouds, scrubs out the points you don't want, and republishes the rest. Some careful use of <remap> tags in your launch file would make this pretty easy to plumb together.
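The scrubbing step itself is just a per-point range check. A minimal sketch of that logic (NumPy only; the rospy subscriber/publisher plumbing and the PointCloud2 packing/unpacking are left out, and all names here are illustrative):

```python
import numpy as np

def filter_by_range(points, max_range):
    """Keep only points within max_range metres of the sensor origin.

    `points` is an (N, 3) array of x, y, z coordinates. In a real
    republishing node, these would be unpacked from the incoming
    sensor_msgs/PointCloud2 message in the subscriber callback,
    filtered, repacked, and published on the remapped output topic.
    """
    points = np.asarray(points)
    dist = np.linalg.norm(points, axis=1)  # sqrt(x^2 + y^2 + z^2)
    return points[dist <= max_range]

# Example: drop everything farther than 1.5 m from the camera.
cloud = np.array([[0.1, 0.0, 0.5],   # ~0.51 m away -> kept
                  [1.0, 1.0, 1.0],   # ~1.73 m away -> dropped
                  [0.0, 0.0, 1.4]])  # 1.40 m away -> kept
near = filter_by_range(cloud, 1.5)
print(len(near))  # -> 2
```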


Stats

Asked: 2011-04-21 06:38:05 -0600

Seen: 560 times

Last updated: Apr 26 '11