Flexible Fair Scheduling on Multiprocessors
Principal Investigator: James Anderson
Funding Agency: National Science Foundation
Agency Number: CCR-0204312
Abstract
There has been much recent work on scheduling techniques that ensure fairness, temporal isolation, and timeliness among applications multiplexed on the same resource. Much of this work is rooted in an idealized scheduling abstraction called generalized processor sharing (GPS). Under GPS scheduling, each task (i.e., application thread) is assigned a weight, which determines its share of the resource. By adjusting weights as the overall workload changes, each task's designated share can be guaranteed (fairness) and any "misbehaving" task can be prevented from consuming more than its share (temporal isolation). In addition, real-time deadlines can be guaranteed where feasible (timeliness). These properties are desirable in systems where independently authored applications concurrently execute under a common framework. Many systems that could benefit from the use of fair scheduling principles have workloads that necessitate the use of multiple processors; consider, for example, the proliferation of Internet service providers that host third-party websites on multiprocessor servers. Unfortunately, most prior work on multiprocessor fair scheduling has been rooted in task models that are too rigid to apply in many settings. The little research on more flexible models that has been done has been almost entirely experimental in nature, with little or no formal analysis of the scheduling algorithms being tested. There is a simple reason for this: formally reasoning about multiprocessor fair scheduling algorithms is extremely hard. As part of a currently-funded NSF project (which is ending soon), we have taken some initial steps towards the development of a flexible fair scheduling framework based on algorithms that have been analytically verified. Our most significant result to date is a rate-based fair scheduling algorithm for multiprocessors that seamlessly copes with jittered processing steps.
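To make the GPS abstraction concrete, the following is a minimal sketch of how per-task weights determine guaranteed shares of a resource; the task names, weights, and unit capacity are illustrative assumptions, not details from the proposal.

```python
def gps_shares(weights, capacity=1.0):
    """Under idealized GPS, each task's guaranteed share of the resource
    is its weight divided by the sum of all tasks' weights."""
    total = sum(weights.values())
    return {task: capacity * w / total for task, w in weights.items()}

# Hypothetical workload: three tasks with weights 2, 1, and 1 on a
# unit-capacity resource.
shares = gps_shares({"A": 2, "B": 1, "C": 1})
# Task A is guaranteed half the resource; B and C a quarter each.
# If a new task joins (or a task "misbehaves"), recomputing shares from
# the weights preserves each remaining task's proportional guarantee.
```

Note that this only captures the idealized fluid model; the algorithmic difficulty the proposal addresses lies in approximating these shares with discrete scheduling decisions on multiple processors.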
While this algorithm represents an important advance, there are many unresolved issues; these will be the focus of the proposed project. Specific objectives include the development of (i) fair scheduling algorithms that minimize task migrations; (ii) extensions to these algorithms that allow tasks to synchronize and share resources; (iii) extensions that allow real-time and non-real-time applications to execute under a common framework; and (iv) techniques for managing dynamically-changing workloads in which tasks may join and leave the system. As a proof-of-concept test of our work, we will experiment with a variety of multiplexed applications on a multiprocessor server testbed. In joint work with researchers at NC State, we also intend to experimentally evaluate our algorithms for scheduling packet flows in wave-division multiplexing networks (here, the "multiprocessor" to be scheduled is a single network link, on which several packet flows are to be multiplexed in parallel). The end result of this project will be a suite of multiprocessor fair scheduling algorithms with formally verified properties that can be flexibly and efficiently applied in a variety of settings.

