
Add mobile object in collision_map to arm_navigation

asked 2012-07-05 21:48:22 -0600 by jep31

updated 2016-10-24 09:02:19 -0600 by ngrennan

Hi everybody,

Situation: I use the arm_navigation package with ros-electric to control a robot arm with seven degrees of freedom. I can send a joint-space or Cartesian goal; the robot moves well and avoids the objects from the URDF file, such as the table the robot stands on. I use the "robot_name"_arm_navigation.launch file and a robot simulation that receives joint states and publishes the TF frames of the URDF with the correct positions.

Problem: I would like to use the Kinect to add a human body to the scene, and more particularly to the collision map, so that the robot can avoid the body. I want to update it continuously, adding the body as several cylinders that move in "real time". I have tried three solutions.

Solution 1: I followed the tutorial Adding known objects to the collision environment to add cylinders from the Kinect data. In this code I simply publish 10 cylinders on the collision_object topic to model the body.
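For reference, a minimal sketch of how such a publisher could look under ROS electric, following that tutorial. The object id, frame (base_link) and cylinder dimensions are placeholders, not values from the question:

#include <ros/ros.h>
#include <arm_navigation_msgs/CollisionObject.h>
#include <arm_navigation_msgs/Shape.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "add_body_cylinder");
  ros::NodeHandle nh;

  // The environment_server listens on the collision_object topic.
  ros::Publisher pub =
      nh.advertise<arm_navigation_msgs::CollisionObject>("collision_object", 10);
  ros::Duration(2.0).sleep();  // give the publisher time to connect

  arm_navigation_msgs::CollisionObject cylinder;
  cylinder.id = "right_arm_cylinder";                 // placeholder id
  cylinder.operation.operation =
      arm_navigation_msgs::CollisionObjectOperation::ADD;
  cylinder.header.frame_id = "base_link";             // a frame from the URDF
  cylinder.header.stamp = ros::Time::now();

  arm_navigation_msgs::Shape shape;
  shape.type = arm_navigation_msgs::Shape::CYLINDER;
  shape.dimensions.resize(2);
  shape.dimensions[0] = 0.05;   // radius [m]
  shape.dimensions[1] = 0.30;   // length [m]

  geometry_msgs::Pose pose;     // pose of the cylinder in frame_id
  pose.position.x = 0.6;
  pose.position.z = 0.5;
  pose.orientation.w = 1.0;

  cylinder.shapes.push_back(shape);
  cylinder.poses.push_back(pose);

  // Re-publishing with the same id and an updated pose is how the tutorial
  // moves an already-added object.
  pub.publish(cylinder);
  ros::Duration(1.0).sleep();
  return 0;
}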

Result 1: I managed to add all the cylinders correctly, one per body part; for example, an arm is modelled by one cylinder. The robot avoids the body well, but only at the beginning of its movement. During the trajectory the cylinders do not move, so I cannot check collisions.

Question 1: How can I update the collision map so that the cylinders move?

Solution 2: This is similar to the tutorial Adding Virtual Objects to the Planning Scene. The only difference is that I use a service instead of a topic to add the objects; I don't see any other difference from solution 1.
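For comparison, a minimal sketch of the service-based variant from that tutorial, assuming the electric service name /environment_server/set_planning_scene_diff; makeCylinder() is just the cylinder-building code from the previous sketch wrapped in a helper:

#include <ros/ros.h>
#include <arm_navigation_msgs/SetPlanningSceneDiff.h>
#include <arm_navigation_msgs/CollisionObject.h>
#include <arm_navigation_msgs/Shape.h>

static const std::string SET_PLANNING_SCENE_DIFF_NAME =
    "/environment_server/set_planning_scene_diff";

// Helper: build one cylinder CollisionObject (same placeholders as before).
arm_navigation_msgs::CollisionObject makeCylinder()
{
  arm_navigation_msgs::CollisionObject obj;
  obj.id = "right_arm_cylinder";
  obj.operation.operation = arm_navigation_msgs::CollisionObjectOperation::ADD;
  obj.header.frame_id = "base_link";
  obj.header.stamp = ros::Time::now();
  arm_navigation_msgs::Shape shape;
  shape.type = arm_navigation_msgs::Shape::CYLINDER;
  shape.dimensions.resize(2);
  shape.dimensions[0] = 0.05;
  shape.dimensions[1] = 0.30;
  geometry_msgs::Pose pose;
  pose.position.x = 0.6;
  pose.position.z = 0.5;
  pose.orientation.w = 1.0;
  obj.shapes.push_back(shape);
  obj.poses.push_back(pose);
  return obj;
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "add_body_via_service");
  ros::NodeHandle nh;

  ros::service::waitForService(SET_PLANNING_SCENE_DIFF_NAME);
  ros::ServiceClient client =
      nh.serviceClient<arm_navigation_msgs::SetPlanningSceneDiff>(
          SET_PLANNING_SCENE_DIFF_NAME);

  arm_navigation_msgs::SetPlanningSceneDiff::Request req;
  arm_navigation_msgs::SetPlanningSceneDiff::Response res;

  // The object goes into the planning scene diff instead of a topic.
  req.planning_scene_diff.collision_objects.push_back(makeCylinder());

  if (!client.call(req, res)) {
    ROS_ERROR("Could not set planning scene diff");
    return 1;
  }
  return 0;
}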

Result 2: With this solution the cylinders can move at any time, so the problem of solution 1 is gone. But there is a more serious problem: it only works when I choose a coordinate frame that is already present in the URDF, and it fails with a frame generated by the Kinect such as right_arm_1. To explain: when I add a cylinder, I add it relative to a coordinate frame (TF), and it is much easier to add the cylinder directly in a frame of the body. With right_arm_1 or openni_depth_frame as the frame, the cylinders are not attached to that frame, whereas it works perfectly with frames coming from the URDF file. I added a link between a URDF frame and openni_depth_frame, but it behaves as if that link did not exist.

Question 2: Why doesn't the environment_server understand the new TF frames from the Kinect? How can I fix this?

Solution 3: We thought about adding the human body directly to the URDF and modifying its TF directly by publishing on /tf.

Result 3: It doesn't work, because the simulation publishes all the URDF TF frames all the time through robot_state_publisher; we can only modify the joint states, and computing the joint states of the body is much more complicated.

Question 3: Is it possible to use the URDF to add and move a mobile object?

Thank you for your help. If I should add details, ask me.

EDIT1: Hi, Thanks for ...


Comments

Do you just want to stop your arm motion in case a collision between arm and human is detected or do you want to stop the motion and plan a new trajectory around the human?

bit-pirate ( 2012-07-10 22:28:40 -0600 )

PS: Please add comments to your question (and other people's answers) instead of answers.

bit-pirate ( 2012-07-10 22:29:50 -0600 )

At least I would like to stop the arm motion when there is a collision between the arm and the human. Then I could plan a new trajectory with the same process as at the beginning. With a known object the problem is that the collision environment doesn't change during the motion, so the validity check is useless.

jep31 ( 2012-07-11 20:15:10 -0600 )

And with the virtual objects, I can't attach the objects to a coordinate frame of the human. It is essential for me to use the Kinect coordinate frames of the human body, otherwise it is very difficult to compute the cylinder positions. An example of my code is at the bottom.

jep31 ( 2012-07-11 20:17:56 -0600 )

I can't comment on my own answer or on other answers, and I don't understand why I should edit my post instead of posting an answer.

jep31 ( 2012-07-15 21:36:54 -0600 )

4 Answers


answered 2012-07-10 06:23:31 -0600 by egiljones

Ah - now I understand. That's actually pretty straightforward - what I would do is combine the tutorials for adding a virtual object and checking state validity. Adding the virtual object in the planning_scene_diff portion of the planning scene message means that a planning scene will be returned to you that contains the new object. You then create a CollisionModels class and call setPlanningScene with the returned planning scene. The instance of CollisionModels will now contain the object at whatever position you've specified in the planning_scene_diff, and you can make collision checks for individual states or the entire trajectory as you like.

Note that you should use the service /environment_server/get_planning_scene as opposed to /environment_server/set_planning_scene_diff if you care about efficiency: the former will just return a merged planning scene to you, whereas the latter will also forward the planning scene to all the other arm navigation components, which takes additional time.

You first call the get_pl
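A minimal sketch of that combination under electric, assuming the service name /environment_server/get_planning_scene and the planning_environment::CollisionModels API from the state-validity tutorial; the object id, frame and cylinder dimensions are placeholders as in the earlier sketches:

#include <ros/ros.h>
#include <arm_navigation_msgs/GetPlanningScene.h>
#include <arm_navigation_msgs/CollisionObject.h>
#include <arm_navigation_msgs/Shape.h>
#include <planning_environment/models/collision_models.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "check_body_collision");
  ros::NodeHandle nh;

  static const std::string GET_PLANNING_SCENE_NAME =
      "/environment_server/get_planning_scene";
  ros::service::waitForService(GET_PLANNING_SCENE_NAME);
  ros::ServiceClient client =
      nh.serviceClient<arm_navigation_msgs::GetPlanningScene>(GET_PLANNING_SCENE_NAME);

  arm_navigation_msgs::GetPlanningScene::Request req;
  arm_navigation_msgs::GetPlanningScene::Response res;

  // Put the body cylinder(s) into the diff; the merged scene comes back in res.
  arm_navigation_msgs::CollisionObject cyl;
  cyl.id = "right_arm_cylinder";
  cyl.operation.operation = arm_navigation_msgs::CollisionObjectOperation::ADD;
  cyl.header.frame_id = "base_link";
  cyl.header.stamp = ros::Time::now();
  arm_navigation_msgs::Shape shape;
  shape.type = arm_navigation_msgs::Shape::CYLINDER;
  shape.dimensions.resize(2);
  shape.dimensions[0] = 0.05;
  shape.dimensions[1] = 0.30;
  geometry_msgs::Pose pose;
  pose.position.x = 0.6;
  pose.position.z = 0.5;
  pose.orientation.w = 1.0;
  cyl.shapes.push_back(shape);
  cyl.poses.push_back(pose);
  req.planning_scene_diff.collision_objects.push_back(cyl);

  if (!client.call(req, res)) {
    ROS_ERROR("get_planning_scene call failed");
    return 1;
  }

  // Load the robot model and apply the merged planning scene.
  planning_environment::CollisionModels collision_models("robot_description");
  planning_models::KinematicState* state =
      collision_models.setPlanningScene(res.planning_scene);

  // Check the current state; joint values could also be set on 'state'
  // to check individual trajectory points.
  if (collision_models.isKinematicStateInCollision(*state)) {
    ROS_INFO("Robot state is in collision with the body cylinders");
  }

  collision_models.revertPlanningScene(state);
  return 0;
}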


answered 2012-07-09 06:37:09 -0600 by egiljones

I don't really understand what you are trying to do, but arm_navigation in no way supports objects that are moving at the same time the robot is. You'd need to do a great deal of coding to add that support. All the planning algorithms assume that the world is static for the robot. You can add a static object, and move that object around, but all the arm navigation components are going to plan based on the fact that the object is static.


answered 2012-07-11 23:06:43 -0600 by jep31

Hi Marcus,

For the static transform, there is already:

<node pkg="tf" type="static_transform_publisher" name="kinect_lwr_link_joint" args="0 0 0 0 0 0 kinect_link openni_depth_frame 100" />

So I don't understand why this link isn't understood by the environment server... Moreover, this link works well when adding a known object. My karma is less than 20 so I can't attach the frames.pdf, but with:

$ rosrun tf view_frames

All frames are connected...

I am now trying to build a collision_map. The tracker has benefits such as higher speed, but I should try both. My problem with the point cloud is that the validity check returns a collision too often where there is none. I use octomap_server to generate the collision_map, and the same code as at the bottom to detect collisions. It seems that once there has been a collision, the state always returns a collision.


Comments

Hmmm, I think I need to try this myself, before I can be of better help. PS: Please don't comment with answers, but instead use comments or edit your question. That's probably why people "downvoted" your answers below.

bit-pirate ( 2012-07-11 23:27:12 -0600 )

I can't post a comment on your answer. Thanks for your answer

jep31 ( 2012-07-12 01:30:57 -0600 )

answered 2012-07-11 20:33:09 -0600 by bit-pirate

Regarding your comments:

I haven't tested it, but did you try using a collision map? As far as I understand, this is the way to feed sensor information into the environment server, which in turn checks for collisions. So, try building a collision map from the point cloud of your Kinect. The environment server should then be able to recognise collisions between points in the collision map and the robot.

Secondly, about adding virtual objects:

I haven't done something like that in combination with arm navigation, but in a similar context.

If your virtual objects are given in the coordinate frame of the Kinect and you want to add them in the coordinate frame of the robot (or your world frame), try using a static transform publisher, which establishes a relation between the reference frame of the robot or world (e.g. base_link or world/odom) and the Kinect's reference frame (camera_depth_link?).
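Alternatively, if attaching the object to the Kinect frame keeps failing, one workaround is to look up the transform yourself and publish the cylinder pose already expressed in a frame the environment server knows. A minimal sketch, assuming the tracker frame right_arm_1 from the question and a robot frame called base_link:

#include <ros/ros.h>
#include <tf/transform_listener.h>
#include <geometry_msgs/PoseStamped.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "body_pose_to_robot_frame");
  ros::NodeHandle nh;
  tf::TransformListener listener;

  // Cylinder centre expressed in the skeleton-tracker frame (identity pose here).
  geometry_msgs::PoseStamped pose_in_kinect, pose_in_robot;
  pose_in_kinect.header.frame_id = "right_arm_1";   // frame from the tracker
  pose_in_kinect.header.stamp = ros::Time(0);       // latest available transform
  pose_in_kinect.pose.orientation.w = 1.0;

  try {
    listener.waitForTransform("base_link", pose_in_kinect.header.frame_id,
                              ros::Time(0), ros::Duration(1.0));
    listener.transformPose("base_link", pose_in_kinect, pose_in_robot);
    // pose_in_robot now carries header.frame_id = "base_link" and can be used
    // as the cylinder pose in the CollisionObject or planning scene diff.
  } catch (tf::TransformException& ex) {
    ROS_WARN("TF lookup failed: %s", ex.what());
  }
  return 0;
}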

I hope this helps.

