Update: October 15
Perspective Shadow Maps
I decided to implement perspective shadow maps in a simple environment
to see if they would be a viable solution to the shadowing problem. The
idea of the approach is to compute the shadow map in the camera's post-perspective
space. This distorts all the geometry, enlarging objects close to the
camera and shrinking those farther away. The benefit of a perspective shadow
map is that the samples are allocated better: more samples cover the areas
close to the camera because of the perspective distortion.
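The distortion can be made concrete with a little arithmetic. Below is a minimal Python sketch (the function name and the field-of-view value are my own illustrative choices, not anything from the paper) showing how much of the post-perspective unit cube an object occupies at different depths:

```python
import math

def ndc_halfwidth(world_halfwidth, depth, fov_y_deg=60.0, aspect=1.0):
    """Half-width an object occupies in normalized device coordinates
    after the camera's perspective transform.  Nearby objects cover
    more of the post-perspective unit cube, so a shadow map rendered
    in this space spends more of its samples on them."""
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)  # focal length
    return world_halfwidth * f / (aspect * depth)

near_size = ndc_halfwidth(1.0, 2.0)    # object 2 units from the eye
far_size = ndc_halfwidth(1.0, 10.0)    # same object 10 units away
# the near object covers 5x the NDC extent of the far one, so it
# receives roughly 5x the shadow-map samples per axis
```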
Results
In order to better visualize the difference in sampling quality, I used
a very low-resolution shadow map (128x128). The camera-light configuration
that yields optimal results is a directional light perpendicular to the
viewing direction. In Figure 1, I approximated this configuration with a
point light source placed high above the scene. You can see that for this
configuration the perspective shadow map does indeed yield a higher sampling
rate for the shadow at the base of the teapot.





Figure 1: A comparison of a normal shadow map (above)
and a perspective shadow map (below) for the optimal configuration of an
overhead directional light. The images on the right are from the point
of view of the light source and show the warping by the perspective transform. 


For other views, the additional benefit of the perspective shadow map
is minimal. As the light moves away from an overhead position, the quality
of the perspective shadow map degrades to that of a normal shadow map, as
can be seen in Figure 2.



Figure 2: There is not much difference in the sampling
rate between normal shadow maps (left) and perspective shadow maps (right) for
some configurations. Note that the perspective shadow map is slightly rotated,
which breaks up the straight lines seen with the normal shadow map. 


Objects lying behind the view point may cast shadows on the visible
objects. When an object lies behind the viewer, it gets inverted and
projected beyond the infinity plane by the perspective transformation. To
handle this situation, a "virtual" camera is used, which has been
shifted back until all possible occluding geometry lies in front of the
viewer. This is facilitated by constructing a convex polyhedron that bounds
the possible occluders, as shown in Figure 3. The authors of the paper say to
shift the camera back until the polyhedron lies within the view frustum.
I don't think that this is necessary to avoid the inverted projection problem,
but it does keep the geometry bounded to the unit cube in post-perspective
space, which is important for maintaining a small field of view when constructing
the shadow map.
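The shift itself is easy to sketch. The following Python fragment (my own simplified formulation; the function name and margin parameter are hypothetical) computes how far back the virtual camera must move so that every occluder vertex has positive depth:

```python
def shift_back_distance(eye, view_dir, occluder_points, margin=0.1):
    """Distance to move the "virtual" camera backward along its view
    direction so that every potential occluder ends up in front of it.
    view_dir is assumed normalized.  This only addresses the
    inverted-projection problem; the paper additionally shifts until
    the whole bounding polyhedron fits inside the view frustum."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    # signed depth of each occluder vertex along the view direction
    min_depth = min(dot((p[0] - eye[0], p[1] - eye[1], p[2] - eye[2]),
                        view_dir)
                    for p in occluder_points)
    return max(0.0, margin - min_depth)
```

For example, with the camera at the origin looking down -z, a vertex at z = +1 sits one unit behind the eye, so the camera must retreat at least one unit (plus the margin) before rendering the shadow map.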
Figure 3: The polyhedron containing all possible occluding geometry.
The camera frusta are shown in blue. The "virtual" camera is shifted back
until this polyhedron lies completely inside the view frustum. The light
frustum is yellow. 
Problems
The virtual camera shift keeps the geometry from being inverted, but the
light source itself may still lie behind the viewer. In this case, the
light will be inverted (see Figure 4). Theoretically this should not produce
any difficulties: we just invert the logic of the shadow map, storing the
furthest point from the inverted light and inverting the depth comparison
to greater-than instead of less-than. I have had some problems with it,
though. I find that the accuracy drops drastically: I have to use a much
higher bias to avoid surface self-shadowing artifacts. The bias has to
be so high, in fact, that the shadow drops out in places, as shown in
Figure 5. I have found it difficult, in general, to select a good bias
that works for all situations. This is probably because of the warping
introduced by the perspective transform. The loss in accuracy may be caused
by the geometry ending up closer to the light frustum's far clipping plane,
where there is lower accuracy in the z-buffer. A possible solution is to use
a 1D texture to transform the hyperbolic post-perspective Z values into
linear Z values before storing them in the shadow map.
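The mapping such a 1D texture would encode is just the inverse of the standard projection's depth transform. Assuming an OpenGL-style projection with NDC depth in [-1, 1] (my assumption; the actual convention depends on the API), it can be sketched as:

```python
def hyperbolic_to_linear(z_ndc, near, far):
    """Map a hyperbolic NDC depth in [-1, 1] (as produced by a
    standard OpenGL-style perspective projection) back to a linear
    eye-space distance in [near, far].  A 1D lookup texture baked
    from this function could linearize depths before they are written
    to the shadow map, restoring precision near the far plane."""
    return 2.0 * far * near / (far + near - z_ndc * (far - near))
```

With near = 1 and far = 100, the midpoint of the NDC range (z_ndc = 0) maps to a distance of only about 1.98: half the depth-buffer range is spent on the first two units of the frustum, which is exactly the precision imbalance the linearization would correct.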
Figure 4: Inversion of a light source in post-perspective space. As
the camera moves forward and the light source falls behind the eye, it
is projected onto the infinity plane and inverted. The objects are dark
because the perspective transform flips the normals. 
Figure 5: Bias problems when the light is inverted. The images from
left to right have bias values of 0.0, 0.10, and 0.22. In the middle image,
the shadow of the lid handle is resolved well but the sides of the teapot
have problems. In the right image, the surface acne is gone from the sides,
but now the bias is so high that shadows are not cast on the plane or lid. 
Another problem is the drop in frame rate when using perspective shadow
maps. You essentially have to render the scene twice every frame, once from
the light source to compute the shadow map and once from the camera; because
the map depends on the camera, it cannot be reused from the previous frame.
I have thought about precomputing perspective shadow maps. Since they are
dependent on the camera orientation and position, I would have to discretize
both space and directions. Directions would have to be discretized fairly
finely due to the sensitivity of the perspective map to camera orientation.
That could lead to enormous storage costs that would outweigh the benefits
of precomputation.
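A rough back-of-envelope calculation suggests why. Every number below is a hypothetical choice of mine, not a measurement, but even a modest discretization adds up quickly:

```python
# Hypothetical discretization for precomputed perspective shadow maps;
# all numbers are illustrative assumptions, not measured requirements.
positions = 32 ** 3         # 32 x 32 x 32 grid of camera positions
directions = 16 * 8         # 128 quantized view directions
texels = 512 * 512          # resolution of each precomputed map
bytes_per_texel = 2         # 16-bit depth values

total_bytes = positions * directions * texels * bytes_per_texel
# 2 TiB of shadow maps -- far more than precomputation would save
```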
Future Direction
I am not sure that perspective shadow maps are the way to go. I
am going to drop the perspective maps into a more complex rendering system
capable of handling the power plant model. On a real scene I will be able
to determine whether or not they are really viable. If not, then I am going
to try an adaptive approach that subdivides the shadow map where more resolution
is needed.