Assignment 2

Kyle Moore

COMP 870 - Advanced Image Generation
Professor Anselmo Lastra
September 28, 2006


The ray tracer used in this assignment is based on one I wrote for a previous class at The Ohio State University. It uses the Coin3D (Open Inventor) file format for describing scenes. It was originally written for Unix, but I ported it to Windows because I prefer windowed GUIs to the command line. The core is written in C++, and the GUI uses Microsoft .NET (managed C++). It was built in Visual Studio .NET 2003, which should also be used to test the code; it will most likely not build in VS 2005 because of the changes to managed C++ in .NET 2.0, though I have not tested it there.


To achieve soft shadows I assumed every light source was an area light in the shape of a quad. I also assumed that the scene is near the origin (0, 0, 0) and that all lights directly face the origin; in other words, the normal of each light quad always points at the origin. This was done to ensure that lights on the sides of the scene generate soft shadows just as well as lights above the scene. Each light is 2x2 units in size. Because that size is constant, a light casts softer shadows as it moves closer to the scene, since it then subtends a larger solid angle from the shaded point.
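The quad construction described above can be sketched as follows. This is not the program's actual code; `Vec3` and `makeLightQuad` are minimal illustrative stand-ins that build a 2x2 quad centered at the light position with its normal aimed at the origin:

```cpp
#include <cmath>

struct Vec3 {
    double x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
};

double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
Vec3 normalize(const Vec3& v) { double l = std::sqrt(dot(v, v)); return v * (1.0 / l); }

// Build a 2x2 quad centered at `pos` whose normal points at the origin.
// `corners` receives the four corner points; `u` and `v` span the quad
// (each is a unit half-extent, so the full side length is 2).
void makeLightQuad(const Vec3& pos, Vec3 corners[4], Vec3& u, Vec3& v) {
    Vec3 n = normalize(pos * -1.0);                 // normal toward the origin
    // Pick any vector not parallel to n to seed the tangent basis.
    Vec3 up = (std::fabs(n.y) < 0.99) ? Vec3{0, 1, 0} : Vec3{1, 0, 0};
    u = normalize(cross(up, n));
    v = cross(n, u);
    corners[0] = pos - u - v;
    corners[1] = pos + u - v;
    corners[2] = pos + u + v;
    corners[3] = pos - u + v;
}
```

Because the basis is rebuilt from the light's position, a light placed to the side of the scene gets a correctly oriented quad just like one placed overhead.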

Creating the soft shadows simply requires super-sampling the light source: instead of casting one ray to check for shadows, we cast many. Since casting many rays can be costly, I used an adaptive algorithm to reduce the number of rays needed. First we cast five rays, one to each corner of the quad and one to its center. If none of these rays hits an object, we assume the point is not in shadow and continue. If all five rays hit opaque objects, we assume the point is in complete shadow (the umbra) and likewise need no more rays. If neither case occurs, the point is in the penumbra. This is where we need the most detail and therefore the most rays, so we cast 16 more rays, in either a uniform 4x4 grid or a jittered grid. We then take all the resulting "shade" values and reconstruct them with either a box or a tent filter. The table below compares the different sampling patterns and reconstruction filters for soft shadows. Jittering seems to add noise but reduce artifacts. The tent filter seems to give a tighter, more focused shadow than the box filter. My personal favorite is the tent filter with jittering.
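The adaptive scheme above can be sketched as a small function. `occluded(s, t)` is a hypothetical callback that traces a shadow ray toward the point on the light quad at parameters (s, t) in [0,1]^2 and reports whether it is blocked; this version uses jittered stage-two samples and a plain box filter (the average):

```cpp
#include <functional>
#include <random>

// Adaptive shadow sampling over a quad light. Returns the fraction of the
// light that is visible: 1.0 = fully lit, 0.0 = umbra, in between = penumbra.
double shadowFactor(const std::function<bool(double, double)>& occluded,
                    std::mt19937& rng) {
    // Stage 1: the four corners plus the center of the light quad.
    const double probe[5][2] = {{0,0},{1,0},{0,1},{1,1},{0.5,0.5}};
    int blocked = 0;
    for (auto& p : probe) blocked += occluded(p[0], p[1]) ? 1 : 0;
    if (blocked == 0) return 1.0;   // fully lit: skip the extra rays
    if (blocked == 5) return 0.0;   // umbra: skip the extra rays
    // Stage 2: penumbra -- 16 jittered samples on a 4x4 grid.
    std::uniform_real_distribution<double> jitter(0.0, 1.0);
    int lit = 0;
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            double s = (i + jitter(rng)) / 4.0;
            double t = (j + jitter(rng)) / 4.0;
            if (!occluded(s, t)) ++lit;
        }
    return lit / 16.0;              // box-filtered visibility estimate
}
```

The payoff of the adaptive test is that the 16 extra rays are only traced for pixels near shadow boundaries; fully lit and fully shadowed regions cost five rays each.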

Soft Shadow Results

The table pairs each sampling pattern with each reconstruction filter (result images omitted):

                  Uniform 4x4 Grid    Jittered 4x4 Grid
    Box Filter    (image)             (image)
    Tent Filter   (image)             (image)

Depth of Field

To implement depth of field, all that is needed is to cast several more rays per pixel. In a pinhole camera everything is in sharp focus because only one ray of light can pass through the pinhole at a time, so the circle of confusion is the size of a single ray. With a lens, multiple rays of light pass through at once, and the circle of confusion grows as we move away from the focal point. My implementation places the image plane at the focal distance of the lens. We then trace rays from several points on the lens to each point on the image plane. I approximated the circular shape of the lens by distributing the samples in a roughly circular pattern: 12 points total, taken from a modified 4x4 jittered grid. If an object is near the focal plane, it appears in focus because all the rays hit it at approximately the same position. If an object is far from the focal plane, the rays that hit it are spread far apart, making the object appear blurry and out of focus. Below are some sample images that demonstrate this effect.
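One way to read "a modified 4x4 jittered grid with 12 points" is a 4x4 jittered grid with its four corner cells dropped, which roughly fills a disk. The sketch below, under that assumption (the names `lensSamples` and `aperture` are illustrative, not the program's actual API), generates the lens offsets; the ray tracer would trace one ray from each offset point toward the same spot on the focal plane and average the results:

```cpp
#include <cmath>
#include <random>
#include <utility>
#include <vector>

// Generate 12 jittered sample offsets on the lens: a 4x4 grid with the four
// corner cells removed, scaled to a square of half-width `aperture`. Dropping
// the corners makes the pattern rounder, approximating a circular lens.
std::vector<std::pair<double, double>> lensSamples(double aperture,
                                                   std::mt19937& rng) {
    std::uniform_real_distribution<double> jitter(0.0, 1.0);
    std::vector<std::pair<double, double>> pts;
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            bool corner = (i == 0 || i == 3) && (j == 0 || j == 3);
            if (corner) continue;                        // 16 - 4 = 12 points
            double x = ((i + jitter(rng)) / 4.0) * 2.0 - 1.0;  // map to [-1,1)
            double y = ((j + jitter(rng)) / 4.0) * 2.0 - 1.0;
            pts.push_back({x * aperture, y * aperture});
        }
    return pts;   // offsets from the lens center
}
```

Shrinking `aperture` toward zero collapses all 12 offsets onto the lens center, which recovers the pinhole camera and its everything-in-focus behavior.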

Depth of Field Results

My program allows the user to adjust the focal distance as well as the aperture size. By adjusting the focal distance, the user can bring different objects in or out of focus. Adjusting the aperture size changes how quickly objects away from the focal plane blur: the smaller the aperture, the more of the scene is in focus. This makes sense because a pinhole camera is equivalent to an infinitely small aperture.


During this assignment, I ran into few problems. The main one was that my adaptive anti-aliasing algorithm interfered with the depth-of-field calculation: originally, depth of field was only computed when the algorithm decided anti-aliasing was needed. To fix this, I made the anti-aliasing test account for depth of field. This slowed rendering in regions where anti-aliasing wasn't otherwise needed, but corrected the problem.


Here is a link to the code. This should be opened with Visual Studio 2003.

Grading & Comments

Grading information and comments can be sent to .