Large Mesh Simplification using Processing Sequences

Martin Isenburg      Peter Lindstrom      Stefan Gumhold      Jack Snoeyink

abstract:
In this paper we show, using mesh simplification as an example, how out-of-core mesh processing techniques can be adapted to the new processing sequence paradigm, which was inspired by the mesh access that our out-of-core compressor provides during decompression. We believe that this processing concept will also prove useful for other tasks, such as parameterization, remeshing, or smoothing, for which currently only in-core solutions exist.

A processing sequence represents a mesh as a particular interleaved ordering of indexed triangles and vertices. This representation allows streaming very large meshes through main memory while maintaining information about the visitation status of edges and vertices. At any time, only a small portion of the mesh is kept in-core, with the bulk of the mesh data residing on disk. Mesh access is restricted to a fixed traversal order, but full connectivity and geometry information is available for the active elements of the traversal. This provides seamless and highly efficient out-of-core access to very large meshes for algorithms that can adapt their computations to this fixed ordering.
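
To make this access pattern concrete, the toy below mimics a processing sequence reader in-core. All names (ProcessingSequence, next_triangle, is_finalized) are hypothetical stand-ins rather than our actual prototype API, and a real reader would stream triangles and vertices from disk in the fixed traversal order instead of holding them in vectors.

    #include <cstdio>
    #include <vector>

    struct Vertex   { float x, y, z; };
    struct Triangle { int v[3]; };

    class ProcessingSequence {
      std::vector<Vertex>   verts_;
      std::vector<Triangle> tris_;
      std::vector<int>      last_use_;  // last triangle referencing each vertex
      size_t cur_ = 0;
    public:
      ProcessingSequence(std::vector<Vertex> v, std::vector<Triangle> t)
        : verts_(std::move(v)), tris_(std::move(t)), last_use_(verts_.size(), -1) {
        for (size_t i = 0; i < tris_.size(); i++)
          for (int c = 0; c < 3; c++) last_use_[tris_[i].v[c]] = (int)i;
      }
      bool next_triangle(Triangle& t) {       // advance the fixed traversal
        if (cur_ == tris_.size()) return false;
        t = tris_[cur_++];
        return true;
      }
      const Vertex& vertex(int i) const { return verts_[i]; }
      // a vertex is finalized once no not-yet-read triangle references it
      bool is_finalized(int i) const { return last_use_[i] < (int)cur_; }
    };

    int main() {
      ProcessingSequence ps({{0,0,0},{1,0,0},{0,1,0},{1,1,0}},
                            {{{0,1,2}},{{1,3,2}}});
      Triangle t;
      while (ps.next_triangle(t))
        for (int c = 0; c < 3; c++)
          if (ps.is_finalized(t.v[c]))  // safe to evict this vertex from memory
            printf("vertex %d finalized at (%g,%g,%g)\n", t.v[c],
                   ps.vertex(t.v[c]).x, ps.vertex(t.v[c]).y, ps.vertex(t.v[c]).z);
      return 0;
    }

The finalization query is what lets an algorithm release a vertex, and any per-vertex data it has accumulated, as soon as the traversal can no longer touch it.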

The two abstractions that are naturally supported by this representation are boundary-based and buffer-based processing. We illustrate both abstractions by adapting two different simplification methods to perform their computation using a prototype of our mesh processing sequence API. Both algorithms benefit from using processing sequences in terms of improved quality, more efficient execution, and smaller memory footprints.
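
As a rough illustration of the buffer-based abstraction, the following toy streams elements with associated simplification costs through a fixed-capacity buffer and decimates the cheapest candidate whenever the buffer overflows. The Candidate type, the capacity, and the cost values are all made up; an actual implementation performs edge collapses on buffered triangles and writes out decimated geometry.

    #include <cstdio>
    #include <queue>
    #include <vector>

    struct Candidate { double cost; int id; };
    struct ByCost {
      bool operator()(const Candidate& a, const Candidate& b) const {
        return a.cost > b.cost;  // makes the priority queue a min-heap on cost
      }
    };

    int main() {
      const size_t kBufferCapacity = 4;  // real buffers hold many thousands
      std::priority_queue<Candidate, std::vector<Candidate>, ByCost> buffer;
      double costs[] = {3.0, 1.0, 4.0, 1.5, 9.0, 2.6, 5.0};  // arrival order
      int id = 0;
      for (double cost : costs) {
        buffer.push({cost, id++});       // READ: next element enters the buffer
        if (buffer.size() > kBufferCapacity) {
          Candidate c = buffer.top();    // cheapest collapse candidate in buffer
          buffer.pop();
          printf("decimate element %d (cost %.1f)\n", c.id, c.cost);
        }
      }
      while (!buffer.empty()) {          // flush: WRITE out what remains
        printf("write element %d\n", buffer.top().id);
        buffer.pop();
      }
      return 0;
    }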

main contributions:

  • proof-of-concept application for streaming processing of meshes
  • exploits a streaming input format for memory and I/O-efficient computation
  • processes meshes directly from their out-of-core compressed representation
  • makes it possible to simplify meshes whose input and output are much larger than main memory
  • will eventually inspire a more general streaming mesh format

publications:

  • [ilgs-lmsps-03.pdf] Martin Isenburg, Peter Lindstrom, Stefan Gumhold, Jack Snoeyink, Large Mesh Simplification using Processing Sequences, Proceedings of Visualization'03, pages 465-472, October 2003.
related publications:

  • Processing Sequences (unpublished brainstorming draft, April 2003)
  • [ig-ooccgpm-03.pdf slides] Martin Isenburg, Stefan Gumhold, Out-of-Core Compression for Gigantic Polygon Meshes, Proceedings of SIGGRAPH'03, pages 935-942, July 2003.
  • [il-sm-05.pdf slides] Martin Isenburg, Peter Lindstrom, Streaming Meshes, Proceedings of Visualization'05, pages 231-238, October 2005.
download:

  • slides: lmsps.ppt (11 MB)
hindsights:
The idea of sequenced processing grew out of the streaming, small-memory-footprint mesh access provided by our compressed format. However, it turns out that the depth-first fashion in which our original scheme decompressed triangles and vertices resulted in orderings that were far from optimal for some processing tasks. In our case, such poorly ordered input sequences can have a negative impact on the quality of buffer-based simplification. When the processing boundary advances in a highly non-uniform manner, some mesh elements spend considerably more time in the triangle buffer than others. The randomized way in which edge-collapse operations are applied to the triangle buffer is then likely to simplify these long-resident elements more heavily than elements in areas that the processing boundary passes through quickly.
But even if the input sequence is ordered more coherently, the buffer-based simplification method tends to produce highly incoherent output sequences. The reason is that the WRITE operation, which decides in which order mesh elements are output, takes into account only the maximal quadric error, not the time spent in the triangle buffer. As a result, newer mesh elements of slightly higher quadric error are repeatedly favored for output, which can leave behind many small islands of older elements that remain in the buffer for a long time. When we chose to follow a mainly error-driven output strategy, as proposed by Wu and Kobbelt, we did not yet have a full understanding of what is important for streaming.
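
To illustrate, the sketch below contrasts a purely error-driven output order with an age-aware variant that also weighs buffer residence time; the Element fields and the trade-off weight are hypothetical and not what the paper (or Wu and Kobbelt) implemented.

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    // each buffered element carries its maximal quadric error and the
    // time step at which it entered the buffer (both fields made up)
    struct Element { double max_error; long entered; int id; };

    int main() {
      std::vector<Element> buffer = {
        {0.10, 0, 0},   // old element, low error: starved by error-only WRITE
        {0.12, 90, 1}, {0.13, 91, 2}, {0.15, 92, 3},  // newer, slightly worse
      };
      long now = 100;

      // error-driven WRITE: always outputs the highest-error element first,
      // so the old low-error element keeps waiting in the buffer
      auto by_error = [](const Element& a, const Element& b) {
        return a.max_error > b.max_error;
      };

      // hypothetical age-aware variant: residence time also pushes elements out
      auto by_error_and_age = [now](const Element& a, const Element& b) {
        const double w = 0.001;  // made-up trade-off weight
        return a.max_error + w * (now - a.entered) >
               b.max_error + w * (now - b.entered);
      };

      std::sort(buffer.begin(), buffer.end(), by_error);
      printf("error-only WRITE order: ");
      for (auto& e : buffer) printf("%d ", e.id);

      std::sort(buffer.begin(), buffer.end(), by_error_and_age);
      printf("\nage-aware WRITE order:  ");
      for (auto& e : buffer) printf("%d ", e.id);
      printf("\n");
      return 0;
    }

Under the error-only comparator element 0 is output last despite having waited longest; the age-aware comparator outputs it first.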
Finally, our definitions of which triangle sequences qualify as processing sequences were overly restrictive. Apart from the fact that growing the mesh in an edge-connected manner results in a smaller maximal footprint as it streams through memory, there is really no good reason to categorically disallow triangles that are only vertex-adjacent to the processing boundary. These restrictions grew out of the triangle orderings of our compressed format, which seemed ideal for processing but unnecessarily complicate producing and maintaining processing sequences. The main advantages of our compressed format were the coherent order of elements, the information on vertex finalization, the additional information about the topological type of vertices and edges, and the ability to store information along the evolving boundary. But we can provide the same functionality for a less restricted ordering of mesh elements stored in our streaming mesh format. In this sense, processing sequences can really be seen as an abstraction of the functionality that any streaming mesh format provides.
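
The original restriction can be made concrete with a small classifier. In the sketch below the processing boundary is modeled, very crudely, as the set of active vertex indices; counting two shared vertices as edge adjacency is only an approximation (the two vertices need not actually form a boundary edge), and all names are hypothetical.

    #include <array>
    #include <cstdio>
    #include <set>

    // crude classification of how a new triangle attaches to the
    // processing boundary, modeled as the set of active vertex indices
    enum class Attachment { EdgeAdjacent, VertexAdjacent, Detached };

    Attachment classify(const std::array<int,3>& tri, const std::set<int>& active) {
      int shared = 0;
      for (int v : tri) shared += (int)active.count(v);
      if (shared >= 2) return Attachment::EdgeAdjacent;
      if (shared == 1) return Attachment::VertexAdjacent;
      return Attachment::Detached;
    }

    int main() {
      std::set<int> boundary = {4, 5, 6};
      printf("%d\n", (int)classify({4, 5, 9}, boundary));  // 0: edge-adjacent
      printf("%d\n", (int)classify({4, 8, 9}, boundary));  // 1: vertex-adjacent,
      return 0;                        // needlessly disallowed by our definition
    }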