    Irregular Parallel Algorithms: Expression, Compilation, and Performance

    Principal Investigator: Siddhartha Chatterjee
    Funding Agency: National Science Foundation
    Agency Number: CCR-9711438

    Abstract
    This proposal addresses the design and implementation of irregular applications on parallel computers. By adapting ideas from nested data parallelism to familiar imperative and object-oriented programming languages, it aims to provide an architecture-independent programming system for sparse and irregular applications that delivers scalable high performance across a range of systems. We investigate fundamental issues and develop solutions in three areas: linguistic mechanisms to express such parallelism, compilation techniques to enable parallel execution, and the bottom-line performance of the resulting codes.

    Data-parallel programming languages allow programmers to express parallelism through operations over the elements of a collection; nested data parallelism allows the elements of a collection to themselves be collections, and extends the scope of the data-parallel model to irregular domains such as graphs and trees while retaining its simplicity. Irregular mesh solvers, tree-structured n-body methods, particle-in-cell codes, and direct and iterative sparse matrix computations can all be expressed in the nested data-parallel model.
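    As a rough illustration of the model (the proposal does not fix a notation here), a sparse matrix-vector product can be expressed as a data-parallel operation over rows, where each row is itself a collection of nonzeros of irregular length. The Java parallel-stream sketch below is an assumed stand-in for such a nested data-parallel notation; all names are illustrative.

        import java.util.stream.IntStream;

        // Illustrative sketch: sparse matrix stored row-wise, computing y = A*x.
        // The outer data-parallel operation runs over rows; each row is itself a
        // collection (of irregular length) reduced by an inner summation.
        final class SparseMatVec {
            static double[] multiply(int[][] colIndex, double[][] value, double[] x) {
                return IntStream.range(0, colIndex.length).parallel()
                    .mapToDouble(i -> {
                        double sum = 0.0; // nested reduction over the nonzeros of row i
                        for (int k = 0; k < colIndex[i].length; k++) {
                            sum += value[i][k] * x[colIndex[i][k]];
                        }
                        return sum;
                    })
                    .toArray();
            }
        }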

    Previous work in nested data parallelism has been confined largely to functional languages, whose unfamiliar syntax and lack of interoperability with numerical subroutine libraries have made them unattractive to the scientific programming community. We claim that features already available in HPF (pointer variables, doall loops, and the segmented scan and reduction operations in the standard library) suffice to express nested data-parallel algorithms, but that the compiler techniques needed to exploit them remain to be developed.
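    A segmented reduction of the kind mentioned above operates on a flat data array together with a segment descriptor, reducing each segment independently. The sketch below (written in Java rather than HPF, with hypothetical names) shows the effect of a segmented sum, the flat primitive onto which a nested reduction is mapped; each segment's reduction is independent and can be performed in parallel.

        // Illustrative segmented sum: sums[j] is the sum of the j-th segment of the
        // flat array 'data', where segmentLength[j] gives the size of segment j.
        final class SegmentedOps {
            static double[] segmentedSum(double[] data, int[] segmentLength) {
                double[] sums = new double[segmentLength.length];
                int pos = 0;
                for (int j = 0; j < segmentLength.length; j++) {
                    double s = 0.0;
                    for (int k = 0; k < segmentLength[j]; k++) {
                        s += data[pos++];
                    }
                    sums[j] = s;
                }
                return sums;
            }
        }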

    Object-oriented programming languages provide data abstraction, encapsulation of state, inheritance, and inter-object communication; such languages have previously been used to program parallel computers in a task-parallel style. We claim that nested data parallelism can be incorporated into Java through a combination of class libraries and preprocessing based on techniques for flattening nested parallelism.
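    A class library along these lines might store a nested sequence in flattened form, as a single flat data array plus a segment descriptor, so that operations over the nesting structure become flat array passes or segmented operations. The sketch below is only an assumed illustration of that idea, not the proposal's actual library design.

        // Illustrative class-library view of flattening: a nested sequence of
        // doubles is stored as one flat array plus segment start offsets, so
        // nested operations reduce to flat or segmented passes over the data.
        final class NestedDoubleSeq {
            private final double[] flat; // all elements, concatenated segment by segment
            private final int[] start;   // start[j] = offset of segment j; start[segments()] = flat.length

            NestedDoubleSeq(double[] flat, int[] start) {
                this.flat = flat;
                this.start = start;
            }

            int segments() { return start.length - 1; }

            // Apply a scalar function elementwise: a single flat pass, independent of nesting.
            NestedDoubleSeq map(java.util.function.DoubleUnaryOperator f) {
                double[] out = new double[flat.length];
                for (int i = 0; i < flat.length; i++) out[i] = f.applyAsDouble(flat[i]);
                return new NestedDoubleSeq(out, start);
            }

            // Reduce each segment with +: a segmented reduction over the flat array.
            double[] sumPerSegment() {
                double[] sums = new double[segments()];
                for (int j = 0; j < segments(); j++) {
                    double s = 0.0;
                    for (int i = start[j]; i < start[j + 1]; i++) s += flat[i];
                    sums[j] = s;
                }
                return sums;
            }
        }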

    This proposal will further investigate issues of expressiveness, compilation, and performance.
