What is the optimal way to filter laser scans that are taken when the robot is tilted (roll/pitch)?
As the title says, I am wondering which is the best way to filter those scans. The reason for this is that my robot is using a SLAM package designed for 2D environments and my laser is an RPLIDAR S1. When the robot is moving through the environment, there are moments where bumps or ditches in the ground make the robot tilt in various directions. This sometimes causes the SLAM package to map the ground (and mark it as an obstacle).
I was thinking of transforming the laser scan into a point cloud (using laser_filter), filtering out the points below (and above) a certain threshold, and then transforming those points back into a laser scan.
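Something like this is what I have in mind (untested sketch; the topic names, the gravity-aligned base_stabilized frame, and the height thresholds are placeholders I made up), with a package such as pointcloud_to_laserscan possibly handling the conversion back to a scan afterwards:

```python
#!/usr/bin/env python
# Untested sketch: scan -> point cloud -> z filter -> publish filtered cloud.
# Assumes an IMU-driven TF tree already provides a gravity-aligned frame
# ("base_stabilized" here); topic/frame names and thresholds are placeholders.
import rospy
import tf2_ros
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import LaserScan, PointCloud2
from laser_geometry import LaserProjection
from tf2_sensor_msgs.tf2_sensor_msgs import do_transform_cloud

Z_MIN, Z_MAX = 0.05, 1.5   # keep points between 5 cm and 1.5 m above ground

class ScanZFilter(object):
    def __init__(self):
        self.projector = LaserProjection()
        self.tf_buffer = tf2_ros.Buffer()
        tf2_ros.TransformListener(self.tf_buffer)
        self.cloud_pub = rospy.Publisher('cloud_filtered', PointCloud2, queue_size=1)
        rospy.Subscriber('scan', LaserScan, self.scan_cb, queue_size=1)

    def scan_cb(self, scan):
        cloud = self.projector.projectLaser(scan)      # scan -> PointCloud2 (laser frame)
        try:
            tf = self.tf_buffer.lookup_transform('base_stabilized',
                                                 scan.header.frame_id,
                                                 scan.header.stamp,
                                                 rospy.Duration(0.1))
        except (tf2_ros.LookupException, tf2_ros.ExtrapolationException):
            return
        cloud = do_transform_cloud(cloud, tf)          # into the gravity-aligned frame
        kept = [p for p in pc2.read_points(cloud, field_names=('x', 'y', 'z'), skip_nans=True)
                if Z_MIN < p[2] < Z_MAX]               # drop ground hits and too-high points
        self.cloud_pub.publish(pc2.create_cloud_xyz32(cloud.header, kept))

if __name__ == '__main__':
    rospy.init_node('scan_z_filter')
    ScanZFilter()
    rospy.spin()
```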
I feel this approach might be convoluted and maybe even computationally expensive.
People who have had similar issues, how did you solve it?
Many Thanks!
It seems an RGB-D or stereo camera might be better suited for this task, in combination with an IMU. Since I assume you have an IMU (otherwise the point cloud approach wouldn't work), you could simply write a node that funnels the laser scans: subscribe to the IMU, and when its roll/pitch exceeds a threshold, stop publishing the output scan and publish an empty/fake scan for the time being. Input scan = output scan while the IMU is within the roll/pitch limits, that kind of idea. You might even want to filter the scan at this point, so that very near points that really are obstacles inside a ditch still get published. The big question is how the mapping handles "missed"/empty scans, no matter how you produce them. It might require 3D mapping after all.
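Roughly like this (untested sketch; the topic names and the 5° tilt limit are placeholders I picked):

```python
#!/usr/bin/env python
# Untested sketch of the "funnel" node: pass scans through while the IMU
# reports a small tilt, otherwise publish an empty/fake scan.
# Topic names and the tilt limit are placeholders.
import math
import rospy
from sensor_msgs.msg import LaserScan, Imu
from tf.transformations import euler_from_quaternion

TILT_LIMIT = math.radians(5.0)   # maximum allowed roll/pitch

class ScanFunnel(object):
    def __init__(self):
        self.tilt_ok = True
        self.pub = rospy.Publisher('scan_gated', LaserScan, queue_size=1)
        rospy.Subscriber('imu/data', Imu, self.imu_cb, queue_size=10)
        rospy.Subscriber('scan', LaserScan, self.scan_cb, queue_size=1)

    def imu_cb(self, msg):
        q = msg.orientation
        roll, pitch, _ = euler_from_quaternion([q.x, q.y, q.z, q.w])
        self.tilt_ok = abs(roll) < TILT_LIMIT and abs(pitch) < TILT_LIMIT

    def scan_cb(self, scan):
        if not self.tilt_ok:
            # Replace all ranges with out-of-range values so no obstacles are
            # inserted; how the mapper treats these beams is the open question.
            scan.ranges = [float('inf')] * len(scan.ranges)
            scan.intensities = []
        self.pub.publish(scan)

if __name__ == '__main__':
    rospy.init_node('scan_funnel')
    ScanFunnel()
    rospy.spin()
```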
If you intend to operate in a 3D environment, why use a 2D SLAM package? Remember, you can start with a 3D package and always derive something like a 2D costmap from it, but you cannot reliably scale up from 2D to 3D. An IMU will give you roll and pitch measurements which you can use to transform your data before adding it to the SLAM package.
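If you do stay with 2D, one cheap way to use those roll/pitch values without a full point cloud round trip is to estimate each beam endpoint's height directly and invalidate the beams that hit the ground, so the output stays a LaserScan. An untested sketch; the mounting height, thresholds, topic names and the R_y(pitch)·R_x(roll) rotation convention are all assumptions you would need to adapt:

```python
#!/usr/bin/env python
# Untested sketch: use roll/pitch from the IMU to estimate each beam
# endpoint's height and invalidate beams that hit the ground, while
# keeping the message a LaserScan for the 2D SLAM package.
# Mounting height, thresholds, topic names and the R_y(pitch)*R_x(roll)
# rotation convention are assumptions.
import math
import numpy as np
import rospy
from sensor_msgs.msg import LaserScan, Imu
from tf.transformations import euler_from_quaternion

LIDAR_HEIGHT = 0.30    # lidar height above ground when level [m]
GROUND_Z = 0.05        # endpoints below this height are treated as ground hits [m]

class TiltCompensator(object):
    def __init__(self):
        self.roll = 0.0
        self.pitch = 0.0
        self.pub = rospy.Publisher('scan_no_ground', LaserScan, queue_size=1)
        rospy.Subscriber('imu/data', Imu, self.imu_cb, queue_size=10)
        rospy.Subscriber('scan', LaserScan, self.scan_cb, queue_size=1)

    def imu_cb(self, msg):
        q = msg.orientation
        self.roll, self.pitch, _ = euler_from_quaternion([q.x, q.y, q.z, q.w])

    def scan_cb(self, scan):
        n = len(scan.ranges)
        angles = scan.angle_min + np.arange(n) * scan.angle_increment
        r = np.asarray(scan.ranges, dtype=float)
        x = r * np.cos(angles)                  # beam endpoints in the scan plane
        y = r * np.sin(angles)
        # Height of each endpoint above ground after tilting the scan plane
        # by R_y(pitch) * R_x(roll):
        z = LIDAR_HEIGHT - x * math.sin(self.pitch) \
            + y * math.sin(self.roll) * math.cos(self.pitch)
        r[z < GROUND_Z] = float('inf')          # mark ground hits as out of range
        scan.ranges = r.tolist()
        self.pub.publish(scan)

if __name__ == '__main__':
    rospy.init_node('tilt_compensator')
    TiltCompensator()
    rospy.spin()
```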