UNC Image-Based Rendering
News
Our team, in collaboration with David Luebke and students at the University of Virginia, has created a "Virtual Monticello" for the Jefferson's America & Napoleon's France exhibition at the New Orleans Museum of Art. The museum built a 55-foot-wide facade of Monticello that includes two windows onto which we rear-project a stereo view of Mr. Jefferson's library. Museum visitors wear polarized glasses, and one visitor is tracked to provide the viewpoint.

The 3D model was created with the 3rdTech DeltaSphere laser scanner, a commercial version of a scanner originally designed at UNC as part of this project. The scanner captures very accurate, dense range samples, which are combined with color imagery to create a simplified 3D mesh.

We also collaborated with (art)n to create a stereogram that enables visitors to see Mr. Jefferson's Cabinet in stereo without wearing any glasses.


UNC faculty contacts:  Anselmo Lastra and Lars Nyland.
UNC Students: Chad Hantak, Kok-Lim Low, and Nathaniel Williams.
UNC Staff: Kurtis Keller and John Thomas.
UVa researchers: Professor David Luebke, Rui Wang, and Cliff Woolley.
Artist: Ben Cloward.
This outreach effort was supported by National Science Foundation grant number ACI-0205425. Additional support was provided by Mitsubishi Electric Research Laboratories.



Image-Based Rendering Project Overview
In the pursuit of photo-realism in conventional polygon-based computer graphics, models have become so complex that most of the polygons are smaller than one pixel in the final image. At the same time, graphics hardware systems at the very high end are becoming capable of rendering, at interactive rates, nearly as many triangles per frame as there are pixels on the screen. Formerly, when models were simple and the triangle primitives were large, the ability to specify large, connected regions with only three points was a considerable efficiency in storage and computation. Now that models contain nearly as many primitives as pixels in the final image, we should rethink the use of geometric primitives to describe complex environments.

We are investigating an alternative approach that represents complex 3D environments with sets of images. These images include information describing the depth of each pixel along with the color and other properties. We have developed algorithms for processing these depth-enhanced images to produce new images from viewpoints that were not included in the original image set. Thus, using a finite set of source images, we can produce new images from arbitrary viewpoints.
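
To illustrate the core operation, the sketch below forward-warps a single depth-enhanced image to a new viewpoint: each pixel is back-projected using its depth, transformed, and re-projected with a depth test. This is a minimal sketch assuming a simple pinhole camera model and 4x4 camera-to-world pose matrices; the function and array names are illustrative, not the project's actual pipeline.

    import numpy as np

    def warp_depth_image(color, depth, K, pose_src, pose_dst):
        """Forward-warp a color+depth image from one viewpoint to another.

        color: (H, W, 3), depth: (H, W) positive depths,
        K: 3x3 pinhole intrinsics, poses: 4x4 camera-to-world matrices.
        """
        H, W = depth.shape
        u, v = np.meshgrid(np.arange(W), np.arange(H))
        # Back-project every pixel to a 3D point in the source camera frame.
        rays = np.linalg.inv(K) @ np.stack([u, v, np.ones_like(u)]).reshape(3, -1)
        pts = rays * depth.reshape(1, -1)
        # Transform the points into the destination camera frame.
        pts = np.vstack([pts, np.ones((1, pts.shape[1]))])
        pts = (np.linalg.inv(pose_dst) @ pose_src @ pts)[:3]
        # Project into the new image plane and splat with a depth test so
        # that nearer surfaces correctly occlude farther ones.
        proj = K @ pts
        out = np.zeros_like(color)
        zbuf = np.full((H, W), np.inf)
        rgb = color.reshape(-1, 3)
        for i in range(proj.shape[1]):
            z = proj[2, i]
            if z <= 0:
                continue
            x, y = int(round(proj[0, i] / z)), int(round(proj[1, i] / z))
            if 0 <= x < W and 0 <= y < H and z < zbuf[y, x]:
                zbuf[y, x] = z
                out[y, x] = rgb[i]
        return out, zbuf  # zbuf stays inf where no source pixel landed (a hole)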

Impact
The potential impact of using images to represent complex 3D environments includes:

  • Naturally "photo-realistic" rendering, because the source data are photos. This will allow immersive 3D environments to be constructed for real places, enabling a new class of applications in entertainment, virtual tourism, telemedicine, telecollaboration, and teleoperation.
  • Computation proportional to the number of output pixels rather than to the number of geometric primitives, as in conventional graphics. This should allow implementation of systems that produce high-quality 3D imagery with much less hardware than is used in current high-performance graphics systems.
  • A hybrid with a conventional graphics system. A process we call "post-rendering warping" allows the rendering rate and latency to be decoupled from the user's changing viewpoint. Just as the frame buffer decoupled screen refresh from image update, post-rendering warping decouples image update from viewpoint update. We expect that this approach will enable immersive 3D systems to be implemented over long-distance networks and broadcast media, using inexpensive image warpers to interface to the network and to increase interactivity (see the loop sketch after this list).
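
To make the decoupling concrete, below is a minimal sketch of a post-rendering warping loop: a slow renderer refreshes a reference frame at a fixed rate, while a cheap per-pixel warp re-targets the latest reference frame to the user's current viewpoint on every display refresh. The rates and the functions render_reference, warp_to_viewpoint, and tracked_viewpoint are hypothetical stand-ins, not interfaces from this project.

    import time

    RENDER_HZ, DISPLAY_HZ = 5, 60            # illustrative rates

    def render_reference(viewpoint):
        # Stand-in for a full, expensive render pass; returns a color+depth
        # reference frame tagged with the viewpoint it was rendered from.
        return {"viewpoint": viewpoint, "color": None, "depth": None}

    def warp_to_viewpoint(reference, viewpoint):
        # Stand-in for the cheap per-pixel 3D warp (see the earlier sketch).
        return reference["color"]

    def tracked_viewpoint(t):
        return t                             # stand-in for the head tracker

    start = time.monotonic()
    reference = render_reference(tracked_viewpoint(0.0))
    next_render = 1.0 / RENDER_HZ
    while time.monotonic() - start < 1.0:    # one-second demo run
        now = time.monotonic() - start
        view = tracked_viewpoint(now)
        if now >= next_render:               # slow path: fresh reference frame
            reference = render_reference(view)
            next_render += 1.0 / RENDER_HZ
        frame = warp_to_viewpoint(reference, view)   # fast path: warp only
        time.sleep(1.0 / DISPLAY_HZ)         # display(frame) would go here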

Research Challenges
There are many challenges to overcome before the potential advantages of this new approach to computer graphics are fully realized.

  • Real-world data acquisition—We are developing algorithms and building sensors for acquiring the image and depth data required as input to the method. One of these, the DeltaSphere 3000, is available commercially.
  • Compositing multiple source images to produce a single output image—As the desired viewpoint moves away from the point at which a source image was taken, artifacts appear in the output image where areas that were not visible in the source image are exposed. By combining multiple source images, we can fill in these previously invisible regions (see the compositing sketch after this list).
  • Hardware architecture for efficient image-based rendering—Design of current graphics hardware has been driven entirely by the processing demands of conventional triangle-based graphics. We believe that very simple hardware may allow for real-time rendering using this new paradigm.
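
As a sketch of the compositing step, the following combines several source images that have already been warped into the target view (for example, by warp_depth_image above), keeping the nearest sample at each pixel so that regions occluded in one source are filled from another. The convention of marking holes with infinite depth is an illustrative assumption.

    import numpy as np

    def composite(warped):
        """warped: list of (color, zbuf) pairs already warped into the
        target view; zbuf is inf wherever a source image left a hole."""
        colors = np.stack([c for c, _ in warped])   # (K, H, W, 3)
        depths = np.stack([z for _, z in warped])   # (K, H, W)
        nearest = np.argmin(depths, axis=0)         # winning source per pixel
        out = np.take_along_axis(colors, nearest[None, ..., None], axis=0)[0]
        # Pixels visible in no source image remain unresolved holes.
        out[np.isinf(depths.min(axis=0))] = 0
        return out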

Related projects: Range Image Registration by Empty-Space Consistency; Real-Time Image-Based Rendering of Real World Environments.

Research Highlights
This image-based rendering research is the latest step in a twenty-year history of developing custom computer graphics hardware systems at the leading edge of rendering performance. The Pixel-Planes series of machines started in the early 1980s and culminated in Pixel-Planes 5 in 1991. That system was for several years the fastest graphics engine anywhere. The PixelFlow system, built in collaboration with Hewlett-Packard, set a record in rendering performance and image quality.


Research Sponsors


This work is supported by the National Science Foundation, grant number ACI-0205425.

Previous support was provided by the Defense Advanced Research Projects Agency, order number E278, and the National Science Foundation, grant number MIP-9612643. Significant additional support has been provided by the Intel and Hewlett-Packard Corporations.


Maintained by: pxplprob@cs.unc.edu
Last updated: 4/5/03

Department of Computer Science
Sitterson Hall, Chapel Hill, NC 27599-3175
University of North Carolina at Chapel Hill 919-962-1758