Citation Link: https://nbn-resolving.org/urn:nbn:de:hbz:467-13972
Real-time processing of range data focusing on environment reconstruction
Source Type
Doctoral Thesis
Author
Issue Date
2017
Abstract
With the availability of affordable range imaging sensors that provide real-time three-dimensional information of the captured scene, new types of computer vision applications arise. These range from new human-computer interfaces (known as natural user interfaces) and highly detailed reconstructions of complex scenes (for example, to document cultural heritage or crime scenes) to autonomous driving and augmented reality.
These depth sensors are mostly based on two efficient technologies: the structured-light principle (such as the Xbox 360 version of the Kinect camera) and the time-of-flight (ToF) principle (as in cameras built by pmdtechnologies). While ToF cameras measure the time until the light emitted by their illumination unit is backscattered to their smart detectors, structured-light cameras project a known light pattern onto the scene and measure the amount of distortion between the emitted light pattern and its image. Both technologies have their own advantages and weak points.
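The ToF measurement principle described above reduces to a simple relation: the light travels to the scene and back, so the measured round-trip time corresponds to twice the depth. A minimal sketch of this conversion (an illustration of the general principle, not code from the thesis):

```python
# Illustrative sketch: the basic time-of-flight relation, depth = c * t / 2,
# where t is the measured round-trip time of the emitted light.
# Function and variable names are chosen here for illustration only.

C = 299_792_458.0  # speed of light in vacuum, m/s


def tof_depth(round_trip_time_s: float) -> float:
    """Depth (in metres) from the round-trip time of the emitted light."""
    # The light covers the camera-to-scene distance twice, hence the factor 1/2.
    return C * round_trip_time_s / 2.0
```

For example, a round-trip time of about 6.67 nanoseconds corresponds to roughly one metre of depth, which illustrates the timing precision these sensors must achieve.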
This dissertation comprises four contributions. First, an efficient approach is proposed to compensate for motion artifacts in ToF raw images. Thereafter, online three-dimensional reconstruction is investigated, improving the robustness of the camera tracker by segmenting moving objects. Another major contribution is the robust handling of noise in raw data throughout the full reconstruction pipeline, proposing a new type of information fusion that accounts for the anisotropic nature of the noise present in depth data and leads to faster convergence of high-quality reconstructions. Finally, a new method is designed that uses surface curvature information to robustly reconstruct fine structures of small objects while limiting the total camera drift error.
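The noise-aware fusion contribution can be pictured with a simplified sketch. The following is not the thesis implementation; it is a minimal KinectFusion-style weighted running average over a voxel's truncated signed distance (TSDF), where each new sample is down-weighted with depth to reflect the depth-dependent noise of range sensors. The quadratic noise model and all names are assumptions for illustration:

```python
# Illustrative sketch (assumed model, not the thesis algorithm):
# fuse TSDF samples with a per-sample weight that shrinks for far-away
# measurements, since range-sensor noise typically grows with depth.


def sample_weight(depth_m: float, sigma0: float = 0.0012) -> float:
    """Weight a depth sample by an assumed quadratic noise model."""
    sigma = sigma0 * depth_m * depth_m  # noise std grows roughly with depth^2
    return 1.0 / sigma


def fuse_voxel(tsdf: float, weight: float, new_tsdf: float, depth_m: float,
               max_weight: float = 100.0) -> tuple[float, float]:
    """One voxel update: weighted running average of signed distances."""
    w_new = sample_weight(depth_m)
    fused = (tsdf * weight + new_tsdf * w_new) / (weight + w_new)
    # Cap the accumulated weight so the model can still adapt over time.
    return fused, min(weight + w_new, max_weight)
```

Under this kind of scheme, a noisy far-range sample moves the stored value less than a near-range sample, which is one way depth-dependent (anisotropic) noise can be folded into the fusion step.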
File(s)
Name
Dissertation_Damien_Lefloch.pdf
Size
37.8 MB
Format
Adobe PDF
Checksum
(MD5):15070bdb9d2d4133ec001751e44dd3b0