This depends on the type of LIDAR scanner you're using. Does it produce structured point clouds, i.e. a 2D grid of points with the same shape and size on every scan? If so, there should be a one-to-one mapping between the points of any two point clouds, and you can simply compare each new point's distance from the sensor to the distance of the corresponding background point, and only keep it if it's closer (allowing for a noise threshold).
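Here's a minimal sketch of that structured/depth-map comparison, assuming you can get the background and live scans as equally shaped NumPy range images (the file names and the threshold value are placeholders):

```python
import numpy as np

# Placeholder inputs: range images from the scanner, same shape for the
# background capture and the live scan, values in metres from the sensor.
background_depth = np.load("background_depth.npy")  # assumed file name
live_depth = np.load("live_depth.npy")              # assumed file name

NOISE_THRESHOLD = 0.05  # metres; tune to your sensor's noise level

# A measurement is "foreground" if it's noticeably closer to the sensor than
# the background was at the same grid position.
foreground_mask = live_depth < (background_depth - NOISE_THRESHOLD)
```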
If there isn't a one-to-one mapping between scans then your job is slightly harder, because you'll have to work out this mapping point by point and decide what to do if the current scan has points which don't correspond to the background scan. You may even need to interpolate the background scan to generate a suitable matching point.
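For the unstructured case, one common way to build that point-by-point correspondence is a nearest-neighbour search against the background cloud, e.g. with a KD-tree. This is only a sketch, assuming both scans are available as (N, 3) NumPy arrays in the same coordinate frame; the file names and threshold are placeholders:

```python
import numpy as np
from scipy.spatial import cKDTree

# Placeholder inputs: XYZ points, shape (N, 3), already in the same frame.
background_points = np.load("background_points.npy")
live_points = np.load("live_points.npy")

DIST_THRESHOLD = 0.05  # metres; live points farther than this from every
                       # background point are treated as new objects

# Build the KD-tree on the background once, then query each live point for
# its nearest background neighbour.
tree = cKDTree(background_points)
distances, _ = tree.query(live_points, k=1)

foreground = live_points[distances > DIST_THRESHOLD]
```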
Processing it as a depth map is simpler and far more efficient than processing it as an unstructured point cloud.
Hope this makes sense.
You could simply filter out points that are no closer to the sensor than the corresponding background points. The main challenge would be aligning the scans if they aren't fixed, structured clouds.
Hello again! Basically the scanner is fixed, mounted in the top corner of a room, so I could capture a PCD file of the "empty" room (only fixed objects).
I would like to compare the live point cloud with the fixed cloud and filter out every point in the live cloud which is also in the fixed cloud.
By "filter out" I mean, I want to end up with a pointcloud of only the objects which aren't part of the fixed cloud.
Note: the room has multiple "fixed" obstacles, so a simple ground-plane removal wouldn't be enough in this case, hence the background-removal approach.
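If you're working in Python, one way to sketch this comparison (assuming the Open3D library is available; file names and the threshold below are placeholders that need tuning to the scanner's noise) is Open3D's nearest-neighbour distance between clouds:

```python
import numpy as np
import open3d as o3d  # assumes Open3D is installed

# Placeholder file names for the captured scans.
background = o3d.io.read_point_cloud("empty_room.pcd")  # the "empty" room
live = o3d.io.read_point_cloud("live_scan.pcd")         # the current scan

DIST_THRESHOLD = 0.05  # metres; tune to the scanner's noise level

# Distance from each live point to its nearest neighbour in the background.
distances = np.asarray(live.compute_point_cloud_distance(background))

# Keep only the points with no nearby counterpart in the empty-room scan,
# i.e. the objects that aren't part of the fixed cloud.
foreground = live.select_by_index(np.where(distances > DIST_THRESHOLD)[0])
```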