Data fusion using two (or more) Kinect
Hi community, I am writing about something I could not manage to find anywhere on the internet.
I am trying to fuse the results obtained from two (or more) Kinect devices in real time. For example, consider the skeleton information for the same person obtained by two individual Kinect devices tracking that person. I want to combine both skeletons to remove occlusions and other artifacts, which would allow 360-degree tracking.
This means the depth information obtained from each Kinect device has to be converted to a global coordinate system that is independent of the frame of reference of either Kinect device.
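One common approach (not specific to the Kinect SDK, just standard 3D registration) is: take a set of corresponding 3D points seen by both sensors (e.g. the same skeleton joints, or a calibration target), estimate the rigid transform between the two camera frames with the Kabsch/Procrustes algorithm, then map one sensor's data into the other's frame and fuse per-joint with confidence weights. The sketch below assumes NumPy and that joint positions arrive as (N, 3) arrays with per-joint tracking confidences; the function names are my own, not part of any Kinect API:

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Kabsch algorithm: find rotation R and translation t so that
    dst ~= src @ R.T + t, given corresponding (N, 3) point sets."""
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

def to_global(points, R, t):
    """Map (N, 3) points from one Kinect's frame into the global frame."""
    return points @ R.T + t

def fuse_joints(joints_a, joints_b, conf_a, conf_b):
    """Confidence-weighted average of two skeletons already expressed
    in the same (global) frame; conf_* are (N,) non-negative weights."""
    w = (conf_a + conf_b)[:, None]
    return (conf_a[:, None] * joints_a + conf_b[:, None] * joints_b) / w
```

In practice you would calibrate once (estimate R, t while the person is visible to both sensors, or from a checkerboard), then at runtime transform the second Kinect's joints with `to_global` and call `fuse_joints`, letting the occluded sensor's low confidence weights suppress its bad joints.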
Has anyone thought about or worked on this, or does it already exist?
Regards Pankaj