I plan to investigate the use of video streams to represent far-field geometry in massive models. The Walkthrough project uses two such models: one is the Brooks House, with some 1.7 million polygons scattered through several connected rooms, and the other is the Power Plant, a 15-million-triangle CAD model of a coal-fired power plant.
The basic idea is simple: do not even attempt to render anything that is not ultimately visible. This can mean culling geometry that lies out of view, whether outside the view frustum or occluded by other objects. It can also mean rendering objects more simply, as with geometric levels-of-detail, or replacing them entirely with some inexpensive impostor, as with textured depth meshes.
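To make the culling half of this concrete, here is a minimal sketch of view-frustum culling against bounding spheres. Everything in it (the `Plane` and `Sphere` classes, the inward-facing-plane convention) is illustrative, not taken from any particular Walkthrough codebase:

```python
# View-frustum culling sketch: an object is rejected if its bounding
# sphere lies entirely outside any one frustum plane. Frustum planes
# are stored with normals pointing into the visible half-space.
from dataclasses import dataclass

@dataclass
class Plane:
    # Plane a*x + b*y + c*z + d = 0
    a: float
    b: float
    c: float
    d: float

    def signed_distance(self, p):
        x, y, z = p
        return self.a * x + self.b * y + self.c * z + self.d

@dataclass
class Sphere:
    center: tuple
    radius: float

def in_frustum(sphere, planes):
    # Conservative test: keep the object unless it is wholly
    # behind some plane (farther than its radius on the wrong side).
    return all(pl.signed_distance(sphere.center) >= -sphere.radius
               for pl in planes)

# Example: a "frustum" reduced to a single near plane z >= 1.
near = Plane(0.0, 0.0, 1.0, -1.0)
visible = in_frustum(Sphere((0.0, 0.0, 5.0), 1.0), [near])
culled = not in_frustum(Sphere((0.0, 0.0, -5.0), 1.0), [near])
```

Occlusion culling and impostor selection are more involved, but they share this shape: a cheap conservative test that decides whether the full geometry ever reaches the renderer.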
I'm going to assume that the system in question uses viewpoint cells to segregate the model into near- and far-field geometry. (This is how the MMR works. For more information, see the tech report (PDF).) One issue that has come up in the Walkthrough team's work is how best to represent the far field. At the moment, we use textured depth meshes (TDMs), which are, roughly speaking, snapshots of the far-field geometry displayed at approximately correct depth. There is a great deal of coherence between the corresponding TDMs for adjacent cells, but none of that coherence is currently exploited. That's where this project comes in.
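As a sketch of what the cell-based split might look like, assume a uniform grid of cubical cells (the actual MMR partition is described in the tech report; the cell size and the one-cell neighbourhood here are my assumptions for illustration):

```python
# Viewpoint-cell sketch: geometry in or adjacent to the viewer's
# cell is near-field and rendered as geometry; everything else is
# far-field and would be covered by that cell's textured depth mesh.
CELL_SIZE = 10.0  # assumed cell edge length, in model units

def cell_of(point):
    """Map a 3D point to the integer index of its grid cell."""
    return tuple(int(c // CELL_SIZE) for c in point)

def split_geometry(objects, viewpoint):
    home = cell_of(viewpoint)
    near, far = [], []
    for obj in objects:
        cell = cell_of(obj["center"])
        # Near-field if within one cell of the viewer in every axis.
        if all(abs(a - b) <= 1 for a, b in zip(cell, home)):
            near.append(obj)
        else:
            far.append(obj)
    return near, far

objs = [{"name": "desk", "center": (12.0, 3.0, 2.0)},
        {"name": "boiler", "center": (80.0, 3.0, 2.0)}]
near, far = split_geometry(objs, viewpoint=(11.0, 2.0, 1.0))
```

Each cell, then, owns one TDM of everything outside its neighbourhood, and adjacent cells see nearly the same far field, which is the coherence this project aims to exploit.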
For a moment, consider the textured depth meshes as a snapshot of the far field displayed on a flat surface a certain distance in front of the eye. The TDMs along a straight path behave like frames in a movie shot while walking down a hallway: in particular, successive images are quite similar. I intend to represent these images as a video stream instead of compressing each one individually. This should make it less costly to initialize the TDMs for cells adjacent to the current viewpoint, as their textures will often be represented as a compact set of changes from the current image instead of an entirely separate image to be loaded and unpacked.
This makes the choice of compression method critical. Many popular video codecs are designed for forward-only playback; for this scheme to work, it must be possible to step either forward or backward through the stream in roughly the same amount of time.
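One way to get that symmetry (an illustrative choice on my part, not a claim about any existing codec) is to make the deltas self-inverse, for instance by storing the XOR of corresponding pixel values: the identical delta record then advances or rewinds the stream at the same cost.

```python
# Bidirectional-traversal sketch using XOR deltas. Because
# apply_xor is its own inverse, the same stored delta moves the
# decoder either forward or backward between adjacent frames.

def xor_delta(a, b):
    return [x ^ y for x, y in zip(a, b)]

def apply_xor(frame, delta):
    # Self-inverse: applying the same delta twice restores frame.
    return [x ^ d for x, d in zip(frame, delta)]

frames = [[1, 2, 3], [1, 6, 3], [9, 6, 3]]
deltas = [xor_delta(a, b) for a, b in zip(frames, frames[1:])]

cur = apply_xor(frames[0], deltas[0])   # forward to frame 1
cur = apply_xor(cur, deltas[1])         # forward to frame 2
forward_end = cur
cur = apply_xor(cur, deltas[1])         # backward to frame 1
backward_step = cur
```

A practical codec would also need periodic keyframes so the walkthrough can jump to a distant cell without replaying every delta in between, but the forward/backward symmetry is the property that rules out many off-the-shelf formats.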