Assignment 3 - Lightfield Viewer

Generator

I wrote a program to create data sets for the lightfield viewer. To keep the debug cycle short I wanted to be able to generate the data quickly, so I decided to use OpenGL. The generator program allows the user to load a model from an OBJ file and interactively place it in the scene. The user can also manipulate the size and position of the st and uv planes. I have adopted the convention used in the Lumigraph paper [1] of placing the st plane closest to the viewer.

I found it useful to be able to place the camera at specific sample locations on the st plane. This allowed me to size the model so that it fit entirely in view from all of the sample locations.

When the scene parameters have been specified, the program generates the dataset. The program can prefilter the lightfield samples as described in the Lightfield paper [2] by integrating over the entire sample domain instead of just taking point samples. I do this by supersampling with jittered samples. Sampling of the uv plane is similar to jittered anti-aliasing: the uv sample is offset by a small random amount for each subsample. Sampling of the st plane is accomplished by shifting the camera position within the sample domain, as is done when simulating depth of field. I use the accumulation buffer to add all of the samples together. This is a relatively quick process: I can generate a 16x16 set of 256x256 images with 16 subsamples each in about 2 minutes.
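To make the prefiltering concrete, here is a rough sketch of the kind of accumulation-buffer loop the generator uses; it is not the actual code. The jitter() helper (returning a uniform value in [-0.5, 0.5)), renderScene(), and the frustum and image-size variables (left, right, bottom, top, zNear, zFar, imageWidth, imageHeight) are assumed to be defined elsewhere. The frustum is sheared so the window on the uv plane stays fixed while the eye moves within the st sample cell, the standard accumulation-buffer depth-of-field trick.

    // Render one prefiltered st sample by averaging jittered subsamples
    // in the accumulation buffer.  (Illustrative sketch; names are assumptions.)
    void renderPrefilteredSample(float s, float t,        // center of the st sample cell
                                 float cellW, float cellH,
                                 float focusDist,         // distance from the st plane to the uv plane
                                 int numSubsamples)
    {
        glClear(GL_ACCUM_BUFFER_BIT);
        for (int i = 0; i < numSubsamples; ++i) {
            // Jitter the eye within the st sample cell (integrates over st).
            float dx = jitter() * cellW;
            float dy = jitter() * cellH;
            // Sub-pixel jitter of the window (integrates over uv).
            float ju = jitter() * (right - left) / imageWidth;
            float jv = jitter() * (top - bottom) / imageHeight;

            // Shear the frustum so the image window on the uv plane does not move.
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glFrustum(left   - dx * zNear / focusDist + ju,
                      right  - dx * zNear / focusDist + ju,
                      bottom - dy * zNear / focusDist + jv,
                      top    - dy * zNear / focusDist + jv,
                      zNear, zFar);

            // Place the eye at the jittered st position.
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            glTranslatef(-(s + dx), -(t + dy), 0.0f);

            renderScene();
            glAccum(GL_ACCUM, 1.0f / numSubsamples);      // accumulate this subsample
        }
        glAccum(GL_RETURN, 1.0f);                          // averaged image is now in the color buffer
    }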

For this assignment I generated two datasets to experiment with depth correction. The first dataset has the model embedded in the uv plane. The second places the model partway between the two planes. For the second dataset I got the best results when I did not prefilter the st plane; the depth of field was too great and produced very blurry samples.
 
 

Object embedded in uv plane

Object between st and uv planes

Viewer

The lightfield viewer also uses OpenGL. The rendering is done with texture mapping. The user can interactively change the plane orientations. The graphics hardware makes it easy to achieve 60 fps.

There are four possible basis functions that can be used to reconstruct a view from the lightfield. The constant basis functions just use the nearest samples in both the st and uv planes. This is accomplished by texturing non-overlapping quads on the st plane with GL_NEAREST as the texture filter. Images rendered with this basis are pixelated and have sharp discontinuities between the quads. The constant-bilinear basis functions perform bilinear interpolation in the uv plane. This works the same as the constant basis functions except that GL_LINEAR is used as the texture filter. Bilinear interpolation on the uv plane gets rid of the pixelation, but the discontinuities remain. The linear-bilinear basis uses overlapping hexagonal tent functions; the opacity of the texture falls off from the center toward the edges. This basis yields the best results.
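As a concrete illustration (not the viewer's actual code), the fixed-function bases can be drawn roughly as below. The StSample struct and the texture-coordinate parameters are assumptions; the texture coordinates are taken to be the uv-plane intersections of the eye rays through the st vertices, computed elsewhere and shown here without depth correction. The tent drawing loosely follows the texture-blending approach described in the Lumigraph paper: each triangle of the triangulated st plane is drawn once per vertex, with per-vertex alpha acting as the tent weight.

    struct StSample {
        GLuint texture;   // uv image captured at this st sample location
        float  x, y;      // sample position on the st plane
    };

    // Constant / constant-bilinear: one opaque, non-overlapping quad per sample.
    // GL_NEAREST point-samples the uv image (pixelated); GL_LINEAR interpolates
    // bilinearly in uv and removes the pixelation.
    void drawQuadBasis(const StSample& s, float halfW, float halfH,
                       const float uv[4][2], bool bilinearUV)
    {
        GLint filter = bilinearUV ? GL_LINEAR : GL_NEAREST;
        glBindTexture(GL_TEXTURE_2D, s.texture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, filter);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, filter);

        glBegin(GL_QUADS);
        glTexCoord2fv(uv[0]); glVertex2f(s.x - halfW, s.y - halfH);
        glTexCoord2fv(uv[1]); glVertex2f(s.x + halfW, s.y - halfH);
        glTexCoord2fv(uv[2]); glVertex2f(s.x + halfW, s.y + halfH);
        glTexCoord2fv(uv[3]); glVertex2f(s.x - halfW, s.y + halfH);
        glEnd();
    }

    // Linear-bilinear: draw each st triangle three times, once per vertex,
    // textured with that vertex's uv image and with per-vertex alpha equal to
    // that vertex's weight (1 at the vertex, 0 at the other two).  The three
    // weights sum to one inside the triangle, so additive blending gives the
    // weighted average; the union of a vertex's adjacent triangles forms its
    // hexagonal tent.
    void drawTentBasis(const StSample* v[3], const float uv[3][2])
    {
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE);
        for (int i = 0; i < 3; ++i) {
            glBindTexture(GL_TEXTURE_2D, v[i]->texture);
            glBegin(GL_TRIANGLES);
            for (int j = 0; j < 3; ++j) {
                glColor4f(1.0f, 1.0f, 1.0f, (i == j) ? 1.0f : 0.0f);  // tent weight
                glTexCoord2fv(uv[j]);
                glVertex2f(v[j]->x, v[j]->y);
            }
            glEnd();
        }
        glDisable(GL_BLEND);
    }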

The fourth basis function is quadrilinear. Graphics hardware was not capable of performing the operations necessary for the quadrilinear basis function when the Lumigraph paper was written, but it is trivial to implement on today's programmable hardware. I attempted to do this first using Cg and then with the GeForce3 register combiners. I got both implementations to work in isolated test programs but had problems integrating them with the viewer: I kept running into heap allocation/deallocation errors. This is a problem that sometimes crops up when using DLLs on Win32 systems. Each DLL has its own heap, and if you try to deallocate something in one DLL that was allocated in another, chaos ensues. I checked that all of the DLLs I am using were compiled against the same C runtime libraries, but to no avail. I can't figure out what is wrong. This was a source of many lost hours and much frustration.
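For reference, this is the computation the quadrilinear basis performs for a single ray, written here as a plain CPU sketch rather than as the Cg or register-combiner version. The lightfield(i, j, k, l) accessor is an assumption standing in for a lookup of the stored sample at st index (i, j) and uv index (k, l), and the coordinates are assumed to already be expressed in grid units.

    #include <math.h>

    // Quadrilinear reconstruction of one ray (s, t, u, v): a weighted sum of
    // the 16 nearest stored samples, with weights that are the product of the
    // linear weights along each of the four axes.
    float quadrilinear(float s, float t, float u, float v)
    {
        int   i0 = (int)floorf(s), j0 = (int)floorf(t);
        int   k0 = (int)floorf(u), l0 = (int)floorf(v);
        float fs = s - i0, ft = t - j0, fu = u - k0, fv = v - l0;

        float result = 0.0f;
        for (int di = 0; di <= 1; ++di)
            for (int dj = 0; dj <= 1; ++dj)
                for (int dk = 0; dk <= 1; ++dk)
                    for (int dl = 0; dl <= 1; ++dl) {
                        float w = (di ? fs : 1 - fs) * (dj ? ft : 1 - ft)
                                * (dk ? fu : 1 - fu) * (dl ? fv : 1 - fv);
                        result += w * lightfield(i0 + di, j0 + dj, k0 + dk, l0 + dl);
                    }
        return result;
    }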

Here are some images taken from the viewer using the dataset with the model in between the st and uv planes. These first three show the lightfield without depth correction.
 
 

Lightfield without depth correction. Note the discontinuities and ghosting effects. a) Constant basis, b) Constant basis showing quads, c) Linear-bilinear basis.

The discontinuities at quad boundaries are quite pronounced with the constant basis functions. With the linear-bilinear basis the discontinuities are replaced by ghosting artifacts. Bringing the depth correction plane to about the center of the model improves the image quality considerably. These images show the lightfield brought into focus. The focus isn't perfect: parts of the model remain out of focus because there is too much depth in the model to bring all of it into focus at once.
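For context, here is a sketch of how the depth-corrected texture coordinate can be computed for one st sample. It assumes the st, uv, and depth-correction planes are all parallel to z = constant and that Vec3 is a small vector type with the usual operators; neither is the viewer's actual interface. The eye ray through an st vertex is intersected with the depth-correction plane, and that point is then reprojected from the sample's camera position back onto the uv plane.

    // Depth-corrected uv-plane intersection for one st sample camera.
    Vec3 depthCorrectedUV(const Vec3& eye,     // current viewpoint
                          const Vec3& P,       // vertex on the st plane
                          const Vec3& C,       // st sample (camera) position
                          float zFocal,        // depth of the depth-correction plane
                          float zUV)           // depth of the uv plane
    {
        // Eye ray hits the depth-correction plane at F.
        Vec3 dir = P - eye;
        Vec3 F   = eye + dir * ((zFocal - eye.z) / dir.z);

        // Reproject F from the sample camera onto the uv plane.
        Vec3 d2 = F - C;
        return C + d2 * ((zUV - C.z) / d2.z);
    }

The x and y components of the returned point are then mapped into the sample image's texture coordinates before drawing the quad or tent for that sample.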
 
 
 

Lightfield with depth correction. Some discontinuities still remain. a) Constant basis, b) Constant-bilinear basis, c) Linear-bilinear basis.


The lightfield for the first dataset, with the model embedded in the uv plane, does not need much depth correction; it was created in focus.
 
 

Lightfield with object in uv plane

This is what happens when the depth correction plane gets too close!
 
 



Code

The Generator and Viewer programs were written using Qt and QGLVU. I have included the skeleton model, which I downloaded from 3DCafe. The zip file includes the executables and source code for both the Viewer and the Generator. The Generator will only run on a GeForce3 video card because it makes use of pbuffers. I have also included a pregenerated dataset.

lightfield.zip (990 KB)
skeleton.zip (4.4 MB)
 

Generator Controls

<Shift + Left Button>   Rotate
<Ctrl  + Left Button>   Translate
<Alt   + Left Button>   Scale

Similarly, using the right mouse button transforms the planes.
 
 
 

References

[1] S. Gortler, R. Grzeszczuk, R. Szeliski, and M. Cohen. The Lumigraph. Proceedings of SIGGRAPH '96, pp. 43-54, 1996.

[2] M. Levoy and P. Hanrahan. Light Field Rendering. Proceedings of SIGGRAPH '96, pp. 31-42, 1996.