| Author | Affiliation |
|---|---|
| Allen Goldberg | Kestrel Institute |
| Jan Prins | University of North Carolina |
| Lars Nyland | |
| John Reif | Duke University |
| Peter Mills | |
The development of parallel software presents a unique set of problems that do not arise in the development of conventional sequential software. Principal among these problems is the fundamental and pervasive influence of the target parallel architecture on the software development process. Basic steps in the software development process, such as software design and performance prediction, exhibit great sensitivity to the target parallel architecture. Consequently, parallel software is typically developed in an architecture-dependent manner, with a fixed target architecture.
However, premature commitment to an architecture may cause difficulties. First, the architecture may not be the best one for the problem at hand. This is particularly true of architectures that offer low communication performance relative to computational performance; such architectures demand extensive attention to the locality of the computation and to the form of communication employed. These constraints may be impossible to satisfy, or may be satisfiable only with effort disproportionate to the overall software development cost. Second, parallel computer architectures are still in a stage of rapid evolution. Architectural fashions and implementations change often, leading to a hardware climate characterized by rapid obsolescence. In this setting, the cost of redeveloping and reimplementing parallel software to track hardware changes can easily be prohibitive.
A key objective of Rome Laboratory contract F30602-94-C-0037 is to formulate a methodology for the architecture-independent development of parallel software that minimizes the risks of premature architectural commitment and can respond in a cost-effective manner to changes in target architecture and problem specification. The ultimate goal of the contract is to define the requirements for a design tool that supports the activities that comprise the methodology.
A Design Methodology for Data-Parallel Applications (Task II Report)
by Lars Nyland, Jan Prins, Allen Goldberg, Peter Mills, John Reif and Robert Wagner.
Abstract. Data parallelism is a relatively well-understood form of parallel computation, yet developing even simple applications can require substantial effort to express the problem in low-level data-parallel notations. We describe a software development process for data-parallel applications that starts from high-level specifications and generates repeated refinements of designs to match different architectural models and performance constraints, supporting the development activity with cost-benefit analysis. The primary issues are algorithm choice, correctness, and efficiency, followed by data decomposition, load balancing, and message-passing coordination. The development of a data-parallel multitarget tracking application is used as a case study, showing the progression from high-level to low-level refinements. We conclude by describing tool support for the process.
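To make the notion of refinement concrete, the sketch below contrasts a high-level data-parallel specification of a gating step (a kernel typical of multitarget tracking) with a refinement that makes the data decomposition explicit. This is an illustrative sketch only, not code from the report; the function names, the gating computation, and the `GATE` threshold are hypothetical.

```python
# Illustrative sketch only -- not code from the report or the JPDA application.
# It contrasts a high-level data-parallel specification with a refinement that
# exposes the data decomposition, the kind of step the methodology describes.

import numpy as np

GATE = 2.0  # hypothetical gating threshold


def gate_highlevel(tracks, obs):
    """High-level specification: one data-parallel expression over all
    (track, observation) pairs, with no commitment to how work is distributed."""
    # dists[i, j] = Euclidean distance from track i to observation j
    dists = np.linalg.norm(tracks[:, None, :] - obs[None, :, :], axis=2)
    return dists <= GATE  # boolean gate matrix


def gate_blocked(tracks, obs, nprocs):
    """Refined version: tracks are block-decomposed across `nprocs` notional
    processors; each block is computed independently and the partial results
    are concatenated (standing in for communication on a real machine)."""
    blocks = np.array_split(tracks, nprocs)             # data decomposition
    partial = [gate_highlevel(b, obs) for b in blocks]  # per-processor work
    return np.vstack(partial)                           # gather the results


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tracks = rng.uniform(0, 10, size=(8, 2))  # predicted track positions
    obs = rng.uniform(0, 10, size=(5, 2))     # sensor observations
    # The refinement must preserve the meaning of the high-level specification.
    assert np.array_equal(gate_highlevel(tracks, obs),
                          gate_blocked(tracks, obs, nprocs=3))
```

The point of the refinement is that it preserves the meaning of the high-level specification while exposing the decisions a target architecture forces on the developer: how the data are decomposed, how much work each processor receives, and how partial results are combined.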
A Refinement Methodology for Developing Data-Parallel Applications (Extended Abstract of Task II Report)
by Lars Nyland, Jan Prins, Allen Goldberg, Peter Mills, John Reif and Robert Wagner.
In Proceedings of EuroPar'96.
Full paper, in PostScript (91K)
Task Parallel Implementation of the JPDA Algorithm
by Robert Wagner.
A modeling study of multiple message-passing implementations of the JPDA (joint probabilistic data association) algorithm.