News

Our team, in collaboration with David Luebke and students at the University of Virginia, has created a "Virtual Monticello" for the Jefferson's America & Napoleon's France exhibition at the New Orleans Museum of Art. The museum built a 55-foot-wide facade of Monticello that includes two windows onto which we rear-project a stereo view of Mr. Jefferson's library (image at right). Museum visitors wear polarized glasses, and one visitor is tracked to provide the viewpoint. The 3D model was created with the 3rdTech DeltaSphere laser scanner, a commercial version of a scanner originally designed at UNC as part of this project. The scanner captures very accurate, dense range samples, which, combined with color imagery, are used to create a simplified 3D mesh. The image to the right is an example of a rendering from the model. We also collaborated with (art)n to create a stereogram that lets visitors see Mr. Jefferson's Cabinet in stereo without wearing glasses. UNC faculty contacts: Anselmo Lastra and Lars Nyland.
Image-Based Rendering Project Overview

In the pursuit of photo-realism in conventional polygon-based computer graphics, models have become so complex that most of the polygons are smaller than one pixel in the final image. At the same time, graphics hardware systems at the very high end are becoming capable of rendering, at interactive rates, nearly as many triangles per frame as there are pixels on the screen. Formerly, when models were simple and the triangle primitives were large, the ability to specify a large, connected region with only three points was a considerable efficiency in storage and computation. Now that models contain nearly as many primitives as there are pixels in the final image, we should rethink the use of geometric primitives to describe complex environments. We are investigating an alternative approach that represents complex 3D environments with sets of images. These images include information describing the depth of each pixel along with the color and other properties. We have developed algorithms for processing these depth-enhanced images to produce new images from viewpoints that were not included in the original image set. Thus, using a finite set of source images, we can produce new images from arbitrary viewpoints.
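The core operation behind producing new views from depth-enhanced images can be sketched as a per-pixel reprojection: unproject each pixel using its depth, apply the rigid transform to the new viewpoint, and project back onto the image plane. The sketch below is an illustrative NumPy implementation under standard pinhole-camera assumptions; the function name `warp_depth_image` and the shared intrinsics matrix `K` are our own choices, not the project's actual code.

```python
import numpy as np

def warp_depth_image(depth, K, R, t):
    """Forward-warp the pixels of a depth image into a new viewpoint.

    depth : (H, W) per-pixel depth in the source camera frame
    K     : (3, 3) pinhole intrinsics, assumed shared by both cameras
    R, t  : rotation (3, 3) and translation (3,) taking source-frame
            points into the target camera frame

    Returns an (H, W, 2) array giving, for every source pixel, its
    (u, v) coordinates in the target image.
    """
    H, W = depth.shape
    # Build the source pixel grid in homogeneous coordinates (3 x N).
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T

    # Unproject: per-pixel ray directions scaled by depth give 3D points.
    pts = np.linalg.inv(K) @ pix.astype(float) * depth.reshape(1, -1)

    # Rigid transform into the target frame, then project back.
    pts_target = R @ pts + t.reshape(3, 1)
    proj = K @ pts_target
    return (proj[:2] / proj[2]).T.reshape(H, W, 2)
```

A real renderer must also resolve visibility (pixels from different source images can map to the same target pixel) and fill holes where no source pixel lands, which is where much of the research effort described above lies.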
Impact

The potential impact of using images to represent complex 3D environments includes:
Research Challenges

There are many challenges to overcome before the potential advantages of this new approach to computer graphics are fully realized.
Research Sponsors
This work is supported by the National Science Foundation, grant number ACI-0205425.
Previous support was provided by the Defense Advanced Research Projects Agency,
order number E278, and the National Science Foundation, grant
number MIP-9612643.
Significant additional support has been provided by the Intel and
Hewlett-Packard Corporations.
Maintained by: pxplprob@cs.unc.edu
Last updated: 4/5/03
Department of Computer Science
Sitterson Hall, Chapel Hill, NC 27599-3175
University of North Carolina at Chapel Hill 919-962-1758