Large deviations in time when working with Kinect point cloud [closed]
Hey everyone
I have some results that I find a bit strange and I hope someone can figure out why they are happening.
I run a series of segmentation steps on the point cloud from the Kinect camera, but the standard deviation of their running times seems abnormally high. There are three steps: first, an initial segmentation where I remove points outside a specific range along the X, Y, and Z axes; next, plane segmentation; and finally, Euclidean clustering. The results are shown in the table below.
| Step (times in ms)   | Mean   | Std    |
|----------------------|--------|--------|
| Initial segmentation | 188.69 | 46.39  |
| Plane segmentation   | 2.87   | 3.13   |
| Clustering           | 894.4  | 144.94 |
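For reference, the first step (range clipping along X, Y, and Z) can be sketched without any PCL dependency using a plain point struct; the actual code presumably uses PCL types such as `pcl::PassThrough`, so this is only an illustration of the logic, not the poster's implementation:

```cpp
#include <vector>

struct Point { float x, y, z; };

// Keep only points inside an axis-aligned box: the initial
// range-based segmentation step described above.
std::vector<Point> clipXYZ(const std::vector<Point>& cloud,
                           Point minB, Point maxB) {
    std::vector<Point> out;
    for (const Point& p : cloud) {
        if (p.x >= minB.x && p.x <= maxB.x &&
            p.y >= minB.y && p.y <= maxB.y &&
            p.z >= minB.z && p.z <= maxB.z)
            out.push_back(p);
    }
    return out;
}
```

Since this step is linear in the number of input points, its running time should track the (reportedly stable) point count from the Kinect fairly closely.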
I tried to observe the number of points sent from the Kinect, but it seems pretty stable, so it confuses me why the timings are deviating so much.
P.S.
The environment is almost static. The only dynamic element is me in front of the computer, but I do not move that much.
How is the actual scene changing over the period you are testing?
Ah, I probably should have added that. There are no significant changes in the environment; it is almost static apart from me in front of the computer.
What are the units here, milliseconds? I would expect xyz-clipping to take a lot less time than "plane segmentation". Is that RANSAC-based plane detection from PCL?
Hey. The values are in milliseconds. The plane segmentation is fast because it runs on the subset of points left after the xyz-clipping, so it operates on significantly fewer points. I suspect the cause is flickering of some sort.