Seminar: Automatic and Adaptable Registration of Live RGBD Video Streams Sharing Partially Overlapping Views
Supervisor: Dr. Oscar Meruvia-Pastor
Department of Computer Science
Thursday, Sept. 24, 2015, 1:00pm, Room EN 2022
In this thesis, we introduce DeReEs-4V, an algorithm that receives two separate RGBD video streams and automatically produces a unified scene through RGBD registration within a few seconds. The motivation behind the proposed solution is to enable a multi-camera depth-sensing system that requires no calibration. For instance, game players can place depth-sensing cameras at arbitrary locations to capture any scene, provided there is some partial overlap between the parts of the scene captured by the sensors.
A typical way to combine partially overlapping views from multiple cameras is through visual calibration using external markers placed within the field of view of both cameras. Calibration can be time-consuming and requires specialized knowledge, making it unsuitable for ordinary users. Moreover, if the cameras are bumped or even slightly moved, the calibration process typically must be repeated from scratch.
In this research, we demonstrate how RGBD registration can be used to automatically find a 3D viewing transformation that aligns the view of one camera with that of the other, without calibration, while the system is running.
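The core operation described above, estimating a rigid 3D transformation that aligns one camera's point cloud with the other's, can be illustrated with a generic Kabsch-style sketch over corresponding 3D points. This is an illustrative example under assumed inputs, not the DeReEs-4V implementation: the function name `rigid_transform` and the use of NumPy are assumptions, and DeReEs-4V additionally handles correspondence search and live video streams.

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t with R @ src_i + t ~ dst_i,
    given corresponding 3D points (rows of src and dst), via the
    Kabsch/SVD method. Illustrative sketch, not the thesis algorithm."""
    c_src = src.mean(axis=0)            # centroid of source points
    c_dst = dst.mean(axis=0)            # centroid of destination points
    H = (src - c_src).T @ (dst - c_dst)  # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection: force det(R) = +1
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

With known point correspondences (e.g., from matched visual features in the overlapping region), the recovered (R, t) maps one camera's coordinate frame into the other's; in a live system this estimate can be refreshed whenever a camera moves.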
To validate this approach, we compare our method against standard checkerboard-target calibration and thoroughly examine system performance under different scenarios. The system supports any application that might benefit from video capture with a wider operational field of view. Our results show that the system is robust to camera movement while simultaneously capturing and registering live point clouds from two depth-sensing cameras.