Department of Computer Science
University of North Carolina at Chapel Hill
Email: doums at cs.unc.edu
Fusion4D: Real-time Performance Capture of Challenging Scenes
We contribute a new pipeline for live multi-view performance capture,
generating temporally coherent high-quality reconstructions in
real-time. Our algorithm supports both incremental reconstruction,
improving the surface estimation over time, and parameterization
of the nonrigid scene motion. Our approach is highly robust to
both large frame-to-frame motion and topology changes, allowing
us to reconstruct extremely challenging scenes. We demonstrate
advantages over related real-time techniques that either deform an
online generated template or continually fuse depth data nonrigidly
into a single reference model. Finally, we show geometric reconstruction
results on par with offline methods that require orders of
magnitude more processing time and many more RGBD cameras.
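For context, the kind of volumetric fusion that underlies this pipeline blends each incoming depth frame into a truncated signed distance (TSDF) volume by per-voxel weighted averaging; the actual system additionally warps the volume nonrigidly before fusing. Below is a minimal Python/NumPy sketch of that averaging step, with all names and the truncation value chosen for illustration.

    import numpy as np

    def fuse_tsdf(tsdf, weights, new_sdf, new_weight, trunc=0.01):
        # Weighted running-average TSDF fusion of one depth frame.
        # tsdf, weights : (X, Y, Z) arrays holding the current volume.
        # new_sdf       : signed distances observed this frame (NaN = unobserved).
        # new_weight    : (X, Y, Z) per-voxel confidence of the new observation.
        sdf = np.clip(new_sdf, -trunc, trunc)      # truncate distances far from the surface
        valid = ~np.isnan(sdf)                     # update only observed voxels
        w_old, w_new = weights[valid], new_weight[valid]
        tsdf[valid] = (w_old * tsdf[valid] + w_new * sdf[valid]) / (w_old + w_new)
        weights[valid] = w_old + w_new
        return tsdf, weights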
3D Scanning Deformable Objects with a Single RGBD Sensor
We present a 3D scanning system for deformable objects
using a single RGBD sensor. Our system allows a considerable
amount of nonrigid deformation during scanning while still
achieving high-quality results. Our system does
not use any prior shape knowledge, enabling general object
scanning with freeform deformations. To deal with the
drift problem when nonrigidly aligning the input sequence,
we automatically detect loop closures, distribute the alignment
error over the loop, and finally use a bundle adjustment
algorithm to optimize for the latent 3D shape and nonrigid
deformation parameters simultaneously.
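To make the final step concrete, the bundle adjustment minimizes residuals between the latent shape, warped by the per-frame deformation, and the observed depth data. Below is an illustrative Python/NumPy sketch of such residuals, simplified to one rigid transform per frame in place of a full deformation graph; the function and variable names are assumptions, not the actual implementation.

    import numpy as np

    def alignment_residuals(latent_points, frame_poses, observations):
        # latent_points : (N, 3) canonical surface points being optimized.
        # frame_poses   : dict frame_id -> (R (3,3), t (3,)); stands in for the
        #                 per-node transforms of a real deformation graph.
        # observations  : list of (frame_id, point_index, observed_xyz) tuples.
        res = []
        for frame_id, idx, obs in observations:
            R, t = frame_poses[frame_id]
            warped = R @ latent_points[idx] + t    # warp canonical point into the frame
            res.append(warped - np.asarray(obs))   # driven toward zero by the optimizer
        return np.concatenate(res)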
Temporally Enhanced 3D Capture of Room-sized Dynamic Scene with Commodity Depth Cameras
In this project, we designed a system to capture the temporally enhanced 3D
structure of a room-sized dynamic scene with commodity depth
cameras such as the Microsoft Kinect. Our system incorporates temporal information to
achieve a low-noise and complete 3D capture of the entire room.
More specifically, we pre-scan the static parts of the room offline,
and track their movements online. For the dynamic objects, we perform
non-rigid alignment between frames and accumulate data over
time. Our system also handles topology changes of the dynamic objects
and interactions between them.
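For the static parts, online tracking amounts to rigidly registering the pre-scanned model to each new depth frame. Below is a minimal Python/NumPy sketch of one linearized point-to-plane ICP step, assuming correspondences are already given; a real tracker iterates this with correspondence search and robust weighting, so the sketch is illustrative only.

    import numpy as np

    def point_to_plane_step(src, dst, dst_normals):
        # src, dst    : (N, 3) matched point pairs (model point -> observed point).
        # dst_normals : (N, 3) unit normals at the observed points.
        # Returns a small rigid update (R, t), linearized about the identity.
        A = np.hstack([np.cross(src, dst_normals), dst_normals])   # rows [s x n, n]
        b = np.einsum('ij,ij->i', dst_normals, dst - src)           # n . (d - s)
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        w, t = x[:3], x[3:]
        R = np.eye(3) + np.array([[0, -w[2], w[1]],
                                  [w[2], 0, -w[0]],
                                  [-w[1], w[0], 0]])                 # I + [w]_x
        return R, t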
Scanning and Tracking Dynamic Objects with Commodity Depth Cameras
Exploring High-Level Plane Primitives for Indoor 3D Reconstruction with a Hand-held RGB-D Camera
In this project, we propose to extract high-level primitives (planes) from an RGB-D camera,
in addition to low-level image features (e.g., SIFT), to better constrain the problem and
improve indoor 3D reconstruction. More specifically, for frame-to-frame matching, we
propose a new scheme that takes into account both low-level appearance-feature correspondences
in the RGB image and high-level plane correspondences in the depth image. In addition, in the global bundle adjustment
step, we formulate a novel error measure that accounts not only for the traditional 3D
point reprojection errors but also for planar surface alignment errors.
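To illustrate how the two kinds of correspondences can enter one objective, the Python/NumPy sketch below combines a point-reprojection term with a point-to-plane alignment term for a single camera; the weighting, plane parameterization, and names are assumptions rather than the project's exact formulation.

    import numpy as np

    def combined_cost(points_3d, plane_params, cam_R, cam_t, K,
                      point_obs, plane_obs, w_plane=1.0):
        # points_3d    : (N, 3) reconstructed 3D feature points (e.g. from SIFT matches).
        # plane_params : (M, 4) global planes as (n_x, n_y, n_z, d) with |n| = 1.
        # cam_R, cam_t : camera rotation (3,3) and translation (3,); K : intrinsics (3,3).
        # point_obs    : (N, 2) observed pixel positions of the feature points.
        # plane_obs    : list of (plane_index, (P, 3) depth points on that plane,
        #                 expressed in the camera frame).

        # Point term: squared reprojection error of each 3D feature.
        cam_pts = (cam_R @ points_3d.T).T + cam_t
        proj = (K @ cam_pts.T).T
        proj = proj[:, :2] / proj[:, 2:3]
        cost = np.sum((proj - point_obs) ** 2)

        # Plane term: squared distance of measured depth points to their global plane.
        for idx, pts_cam in plane_obs:
            n, d = plane_params[idx, :3], plane_params[idx, 3]
            pts_world = (cam_R.T @ (pts_cam - cam_t).T).T   # camera -> world coordinates
            cost += w_plane * np.sum((pts_world @ n + d) ** 2)
        return cost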
Room-sized Informal Telepresence System
We designed a room-sized telepresence system for informal gatherings rather than conventional meetings.
Unlike existing systems, which constrain participants to sit in fixed positions, our system aims to
facilitate casual conversations between people at two sites. The system consists of a wall of large flat
displays at each of the two sites, showing a panorama of the remote scene, constructed from a multiplicity
of color and depth cameras.
We provide a solution that ameliorates the eye-contact problem during conversation in typical scenarios
while still maintaining a consistent view of the entire room for all participants. We achieve this by
using two sets of cameras: a cluster of "Panorama Cameras" located at the center of the display wall that
are used to capture a panoramic view of the entire room, and a set of "Personal Cameras" distributed
along the display wall to capture front views of nearby participants. A robust segmentation algorithm
with the assistance of depth cameras and an image synthesis algorithm work together to generate a consistent
view of the entire scene.
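As a rough illustration of how depth assists segmentation, a participant in front of the display wall can be separated from a pre-captured empty-room background by per-pixel depth comparison; the sketch below is only a baseline with illustrative names and threshold, and the actual algorithm is considerably more robust.

    import numpy as np

    def depth_foreground_mask(depth, background_depth, thresh=0.05):
        # depth, background_depth : (H, W) depth maps in metres; 0 = invalid reading.
        # A pixel is foreground if it is closer than the empty-room background
        # by more than `thresh` metres.
        valid = (depth > 0) & (background_depth > 0)
        return valid & (background_depth - depth > thresh)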