
Image-Based Rendering by 3D Warping.

The primary motivation for producing the data described here is to support our image-based rendering project [1]. Many different aspects of IBR are being studied, such as representation, visibility, reconstruction, multiple views, hardware acceleration, and hybrid CG systems, and all require source images to render.

The registered color and range images lead naturally to an image-warping walk-through application that renders them with as few artifacts as possible. If the images were rendered as triangle meshes, errors would occur at silhouettes such as table edges, doorways, and other spatial discontinuities, where the mesh would be stretched across the depth gaps.

One of the first steps to perform is silhouette edge detection. While many sophisticated methods exist, simple heuristics perform nearly as well and are extremely cheap to compute. One such heuristic computes the dot product of the viewing ray with the normal vector of each triangle in the mesh. Silhouettes (and badly sampled surfaces) yield values close to 0, so the mesh can be broken at these locations.
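The heuristic can be sketched as follows; this is an illustrative reimplementation, not our actual code, and the function name and threshold value are assumptions:

```python
import numpy as np

def silhouette_triangles(vertices, triangles, eye, threshold=0.1):
    """Flag triangles whose normal is nearly perpendicular to the viewing
    ray -- a simple heuristic for silhouettes and badly sampled surfaces."""
    flags = []
    for tri in triangles:
        a, b, c = (np.asarray(vertices[i], dtype=float) for i in tri)
        normal = np.cross(b - a, c - a)
        normal /= np.linalg.norm(normal)
        centroid = (a + b + c) / 3.0
        view = centroid - eye
        view /= np.linalg.norm(view)
        # |dot| near 0 means the triangle is seen edge-on: break the mesh here.
        flags.append(abs(np.dot(view, normal)) < threshold)
    return flags
```

A triangle facing the viewer head-on gives a dot product near 1 and survives; a triangle seen edge-on, as at a table edge, gives a value near 0 and is removed from the mesh.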

We have developed a simple application [9] that uses OpenGL and runs on our Onyx2 hardware as well as the custom PixelFlow hardware [5]. The user interface allows the user to move arbitrarily around the environment, using multiple panoramic source inputs. The effect is very convincing--during demonstrations, many people believe that we have either taken photographs from a significant number of positions or are somehow showing a video. Images from a walk-through sequence with two panoramas are shown in figure 2. The performance is near real-time.

Figure 2: Some sample images of a walk-through of our reading room. The input data is composed of panoramas taken from 2 locations and consists of 10 million samples.

We have made several optimizations to improve rendering performance on the PixelFlow graphics hardware. For instance, the warp can be computed incrementally: the warping arithmetic that applies to groups of pixels is performed once, and only the pixel-specific arithmetic is performed for each pixel. We have also developed a custom rendering primitive called an image tile. Culling at the image-tile level provides a dramatic improvement in rendering speed, and the rendering of an image tile is also where the incremental arithmetic is performed.
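The incremental idea can be sketched for one scanline. Assuming the warp of a pixel (u, v) with generalized disparity d has the common form of a 3x3 matrix term plus a disparity-scaled epipole term (A, e, and the function name here are illustrative, not our PixelFlow code), only one vector add and the disparity term remain per pixel:

```python
import numpy as np

def warp_scanline(A, e, v, disparities):
    """Incrementally warp one scanline of pixels (u = 0, 1, 2, ...).
    The shared term A @ [u, v, 1] is updated by a single vector add per
    pixel; only the disparity term d * e is truly per-pixel work."""
    du = A[:, 0]                      # change in warped coords per unit u
    acc = A[:, 1] * v + A[:, 2]      # shared term at the start of the line
    out = []
    for d in disparities:
        h = acc + d * e               # homogeneous warped coordinates
        out.append(h[:2] / h[2])      # perspective divide
        acc += du                     # incremental step to the next pixel
    return out
```

The matrix-vector product that would otherwise be recomputed for every pixel is amortized over the whole scanline, which is the kind of savings the image-tile primitive exploits.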

As a further extension, we have also developed a new point primitive for rendering, which we call the Voronoi-region primitive. Rather than a flat disc, it is a cone-shaped point aimed at the viewer, falling off in z as its radius increases. When several of these primitives are displayed for samples of a planar surface, the depth test implicitly computes the Voronoi regions of the samples. See [9] for full details.
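Why the cones produce Voronoi regions can be seen in a small software sketch (a simplification for a head-on planar surface; the names and the linear slope are illustrative): each cone's depth grows with distance from its sample, so the depth buffer keeps, at every pixel, the sample whose apex is nearest.

```python
import numpy as np

def render_cone_splats(points, width, height, slope=1.0):
    """Rasterize each 2D sample as a cone whose depth increases linearly
    with distance from the sample.  The z-buffer keeps the nearest sample
    per pixel, so the label image is the samples' Voronoi diagram."""
    depth = np.full((height, width), np.inf)
    label = np.full((height, width), -1, dtype=int)
    ys, xs = np.mgrid[0:height, 0:width]
    for i, (px, py) in enumerate(points):
        z = slope * np.hypot(xs - px, ys - py)   # cone depth at every pixel
        closer = z < depth                        # the depth test
        depth[closer] = z[closer]
        label[closer] = i
    return label
```

Pixels equidistant from two samples lie exactly where the two cones intersect, which is the Voronoi boundary between them.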

Lars S. Nyland