next up previous contents
Next: View-dependent geometry approximation Up: Lightfield modeling and rendering Previous: Rendering from recorded images   Contents

Fixed plane approximation

In a first approach, we approximate the scene geometry by a single plane ${\tt L}$ obtained by minimizing the least-squares error. We map all recorded camera images onto plane ${\tt L}$ and view it through a virtual camera. This can be achieved by directly mapping the coordinates $x_i, y_i$ of image $i$ onto the virtual camera coordinates via the plane-induced homography $[x_V \, y_V\, 1]^\top = {\bf H}_{iV} [x_i \, y_i\, 1]^\top$. We can therefore perform a direct look-up into the originally recorded images and determine the radiance by interpolating the neighboring recorded pixel values. This technique is similar to the lightfield approach [78], which implicitly assumes the focal plane as the plane of geometry. To construct a specific view we thus have to interpolate between neighboring views. The views whose projection centers lie close to the viewing ray of a particular pixel give the most support to that pixel's color value; equivalently, a pixel receives the most support from those views whose projected camera centers are close to its image coordinate. We restrict the support to the nearest three cameras (see Figure 8.9).
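The per-pixel look-up described above can be sketched as follows: a virtual-camera pixel is mapped through the plane-induced homography into a recorded image, and the radiance is obtained by bilinear interpolation of the neighboring pixel values. This is a minimal illustration using NumPy; the function name and the convention that `H_Vi` maps virtual coordinates into image $i$ (the inverse of ${\bf H}_{iV}$ above) are assumptions for the example, not part of the original text.

```python
import numpy as np

def warp_lookup(image, H_Vi, x_V, y_V):
    """Map virtual-camera pixel (x_V, y_V) into source image i via the
    plane-induced homography H_Vi (here assumed to be the inverse of
    H_iV), then bilinearly interpolate the recorded pixel values."""
    p = H_Vi @ np.array([x_V, y_V, 1.0])
    x, y = p[0] / p[2], p[1] / p[2]          # dehomogenize
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    h, w = image.shape[:2]
    if not (0 <= x0 < w - 1 and 0 <= y0 < h - 1):
        return None                           # ray falls outside image i
    fx, fy = x - x0, y - y0
    # bilinear interpolation of the four neighboring recorded pixels
    return ((1 - fx) * (1 - fy) * image[y0, x0] +
            fx * (1 - fy) * image[y0, x0 + 1] +
            (1 - fx) * fy * image[y0 + 1, x0] +
            fx * fy * image[y0 + 1, x0 + 1])
```

With the identity homography the look-up simply reads the image, e.g. `warp_lookup(image, np.eye(3), 1.0, 2.0)` returns `image[2, 1]`.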
Figure 8.9: Drawing triangles of neighboring projected camera centers and approximating the geometry by one plane for the whole scene, by one plane per camera triple, or by several planes per camera triple.
We project all camera centers into the virtual image and perform a 2D triangulation. The neighboring cameras of a pixel are then given by the corners of the triangle that the pixel belongs to. Each triangle is rendered as the sum of three passes, one per corner camera. For each camera we look up the color values in the original image as described above and draw them with weight 1 at the corresponding vertex and weight 0 at the two other vertices; in between, the weights are interpolated linearly, as in Gouraud shading. Within the triangle the weights sum to 1 at every point. The full image is built up as a mosaic of these triangles. Although this technique uses only a very sparse approximation of the geometry, the rendering results show only small ghosting artifacts (see experiments).
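The linearly interpolated weights above are barycentric coordinates: weight 1 at the corresponding vertex, 0 at the other two, and summing to 1 everywhere inside the triangle. A minimal sketch of the blending, with illustrative function names not taken from the text:

```python
import numpy as np

def barycentric_weights(p, a, b, c):
    """Barycentric coordinates of 2D point p in triangle (a, b, c):
    weight 1 at the matching vertex, 0 at the other two, linear in
    between, summing to 1 at every point of the triangle."""
    a, b, c, p = (np.asarray(v, dtype=float) for v in (a, b, c, p))
    # solve p = a + wb*(b - a) + wc*(c - a); then wa = 1 - wb - wc
    T = np.column_stack((b - a, c - a))
    wb, wc = np.linalg.solve(T, p - a)
    return np.array([1.0 - wb - wc, wb, wc])

def blend_cameras(colors, weights):
    """Weighted sum of the three per-camera color look-ups,
    Gouraud-style, as in the three-pass rendering described above."""
    return sum(w * np.asarray(col, dtype=float)
               for w, col in zip(weights, colors))
```

At a vertex the weights are `[1, 0, 0]` (up to permutation), and at the centroid all three cameras contribute equally with weight 1/3.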


Marc Pollefeys 2002-11-22