
Obstacles in camera blind spot are cleared from local costmap

asked 2021-03-30 06:09:20 -0600

jannkar

I'm trying to implement obstacle avoidance using the move_base package on ROS 1 Melodic. The current issue is that when the robot approaches a short obstacle, the obstacle is first correctly drawn into the local costmap, but the closer the robot gets, the more the obstacle is cleared from the costmap again. I believe this happens because the camera has a blind spot (~30 cm) in front of the robot: once the camera no longer sees the obstacle, the move_base obstacle layer raytraces over it to the next nearest obstacle and clears the closest one from the costmap. The desired behavior would be that the obstacle is not removed from the local costmap even when it is in the blind spot.

So the question is: is there a way to prevent this? I believe one solution would be a minimum distance for raytracing (= 30 cm, the length of the blind spot) below which no clearing is done, but move_base currently doesn't seem to have such a parameter.

I'm using an Intel D435i depth camera, which publishes a point cloud for short distances and a laser scan for longer distances. This is how the obstacle layer is configured:

obstacles_laser:
    observation_sources: camera_1_obstacles camera_1_laser
    camera_1_laser: {data_type: LaserScan, clearing: true, marking: true, topic: /camera_1/laser_scan, obstacle_range: 5.5, raytrace_range: 6.0 }
    camera_1_obstacles: {data_type: PointCloud2, clearing: true, marking: true, topic: /camera_1/obstacles_cloud, obstacle_range: 5.5, raytrace_range: 6.0}

Robot approaching a short obstacle:



Robot has moved close enough for the obstacle to disappear from the local costmap



Visualization of the blind spot of the Intel camera. Visible area marked with orange points.



2 Answers


answered 2021-03-30 13:32:37 -0600

I know this doesn't help you for ROS 1, but we added minimum obstacle and raytrace ranges to ROS 2 Nav2's costmap_2d package (https://navigation.ros.org/configurat...) so that you can stop clearing in the sensor's deadzone.
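For reference, a rough sketch of how those parameters can be set on an observation source (parameter names as in the Nav2 costmap_2d docs; the topic name and the 0.3 m values are placeholders for a ~30 cm deadzone):

    obstacle_layer:
      plugin: "nav2_costmap_2d::ObstacleLayer"
      observation_sources: camera_1_obstacles
      camera_1_obstacles:
        topic: /camera_1/obstacles_cloud   # placeholder topic
        data_type: PointCloud2
        marking: true
        clearing: true
        obstacle_max_range: 5.5
        obstacle_min_range: 0.3            # don't mark inside the deadzone
        raytrace_max_range: 6.0
        raytrace_min_range: 0.3            # don't clear inside the deadzone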


Comments

Nice! We will switch to ROS 2 in the near future, so it's great to hear that this feature is already supported. Maybe I'll try to come up with some quick workaround for ROS 1 in the meantime. Thanks!

jannkar  ( 2021-03-31 00:24:50 -0600 )

answered 2022-08-15 05:53:56 -0600

ahopsu

Hi! I have this exact same problem with the exact same camera model. I'm already using ROS 2 Galactic's version of Nav2, which has the minimum raytrace distance feature in Costmap 2D. Using that parameter kind of "works", but the problem is that if a dynamic obstacle (e.g. a person) walks in front of the camera within the minimum raytrace distance, it leaves a permanent obstacle in the costmap, because the minimum raytrace distance is (to my understanding) a straight-line/radial distance in all directions from the sensor's origin.

For example, in jannkar's case the blind spot is 30 cm at ground level. However, if somebody waves a hand at the height of the sensor (on the sensor's x-axis), the camera can see much closer (only a few centimeters ahead of the camera). This means that a hand waved e.g. 10 cm in front of the camera will leave a permanent obstacle in the costmap. The robot would basically have to reverse at least 30 cm to clear the view, which in my case is not a good option, as we don't have a sensor at the back of the robot. Simply clearing the costmap is not an option either, because we would lose the information about obstacles the camera can no longer see.

I have also tried the Voxel Layer and the Spatio-Temporal Voxel Layer to solve the issue. The problem with the Spatio-Temporal Voxel Layer is that it does not seem to have a raycasting feature at all, meaning that obstacles are only removed from the costmap based on elapsed time, which again does not fit our application.

The Voxel Layer has a raycasting feature, but based on my testing it barely removes any obstacles at all, even when the camera should clearly see that there is "nothing" between the sensor and the floor/wall. I don't know whether I just have poor parameter settings (or a too sparse point cloud: I filter the point cloud heavily to reduce the data rate [roughly 1 Gbit/s -> 10 Mbit/s], but the resolution of the final cloud should still be better than the voxel layer's/costmap's resolution).
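For reference, the voxel-grid parameters that mainly control whether raycasting can clear a cell look roughly like this (a sketch with example values only, not an actual working configuration):

    voxel_layer:
      plugin: "nav2_costmap_2d::VoxelLayer"
      origin_z: 0.0
      z_resolution: 0.05      # height of one voxel
      z_voxels: 16            # number of voxels per column (16 is the maximum)
      unknown_threshold: 15   # unknown voxels tolerated before a column is reported unknown
      mark_threshold: 0       # marked voxels needed before the 2D cell becomes an obstacle
      publish_voxel_map: true # publish the voxel grid, useful for debugging what gets cleared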

So I'm a bit stuck on how to solve the problem. My guess is that the Voxel Layer is theoretically the best plugin for this, and I should keep testing it. But I have understood (based on stevemacenski's writings on different sites) that the Voxel Layer is not the best-performing layer, so let's see how much effort is reasonable to put into making it work properly.


Comments

Hi! We have been struggling with similar problems with these Intel cameras. According to the specs, the minimum distance is ~28 cm, which should also apply directly in front of the camera. We have noticed that it sometimes registers values below that, but those readings seem to be quite unreliable. On top of that, if you cover the camera completely with a hand or an object, it publishes an empty point cloud, indicating that there are no obstacles in front of it at all. This makes handling the costmap logic even more difficult.

Our current workaround is to use STVL with a decay time. As you mentioned, this is not the optimal solution, but it has worked for us so far. I would also be excited to hear about a proper solution to this.
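For reference, the relevant STVL settings look roughly like this (a sketch; the plugin name, decay time, and topic are examples and may differ between distros and setups):

    stvl_layer:
      plugin: "spatio_temporal_voxel_layer/SpatioTemporalVoxelLayer"
      voxel_decay: 15.0       # seconds until voxels decay (example value)
      decay_model: 0          # 0 = linear, 1 = exponential, -1 = persistent
      voxel_size: 0.05
      observation_sources: pointcloud
      pointcloud:
        data_type: PointCloud2
        topic: /camera_1/obstacles_cloud   # placeholder topic
        marking: true
        clearing: true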

jannkar  ( 2022-08-16 10:38:26 -0600 )

Hello again! Today we actually managed to reach a pretty well working solution with the Voxel Layer. The problem initially was that the Voxel Layer barely removed any obstacles, meaning that something was wrong with the raycasting.

We initially used more or less the same parameter setup as in this example: Configuring Costmap

In that example, the point cloud observation source under the voxel_layer is configured with both clearing and marking enabled in the same observation source. Raycasting didn't really work that way for us.

However, we got raycasting to work pretty well by putting marking and clearing into separate observation sources. That is, there were two "duplicates" of the same point cloud observation source: in one, clearing was true and marking false; in the other, clearing was false and marking true.

ahopsu  ( 2022-08-16 17:30:27 -0600 )

Also, for the clearing observation source we set min_obstacle_height much lower than for the marking observation source, and we set origin_z to the same height as the marking source's min_obstacle_height. For example:

voxel_layer:
  origin_z: 0.1
  observation_sources: pointcloud1 pointcloud2   # the same cloud listed twice (topic, data_type etc. omitted here)
  pointcloud1:              # marking-only source
    marking: true
    clearing: false
    min_obstacle_height: 0.1
  pointcloud2:              # clearing-only source
    marking: false
    clearing: true
    min_obstacle_height: -1.0

This way, obstacles were detected and removed pretty reliably only in the area the sensor actually sees, and no obstacles were removed in the blind spot between the robot and the point where the camera sees the floor.

ahopsu  ( 2022-08-16 17:36:42 -0600 )

Glad to hear that you found a solution that works for you! And thanks for posting it, I'm sure it will be useful for others as well.

jannkar  ( 2022-08-17 03:59:25 -0600 )
