START Conference Manager    

Scheduling as a Learned Art

Christopher Gill, William Smart, Terry Tidwell and Robert Glaubius

Workshop on Operating Systems Platforms for Embedded Real-Time applications (OSPERT 2008)
Prague, Czech Republic, July 1, 2008


Summary

Scheduling the execution of multiple concurrent tasks on shared resources such as CPUs and network links is essential to ensuring the reliable and correct operation of real-time systems. For closed hard real-time systems, in which task sets and the dependences among them are known a priori, established real-time scheduling techniques can offer rigorous timing and preemption guarantees. However, for open soft real-time systems, in which task sets and dependences may vary or may not be known a priori but reasonably good real-time behavior is still desired, new scheduling techniques are needed.

Our recent work has shown that modeling non-preemptive resource sharing between threads as a Markov Decision Process (MDP) produces (1) an analyzable utilization state space, and (2) a scheduling decision policy based on the MDP, even when task execution times are relaxed from exact values to known distributions within which they may vary. However, if the dependences among tasks or the distributions of their execution times are not known, how to obtain an appropriate MDP remains an open problem.
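To make the MDP formulation concrete, the following is a minimal sketch, not the authors' actual model: states are cumulative utilization vectors, an action dispatches one task non-preemptively, and each task's run length is drawn from an assumed known duration distribution. The two-task duration distributions here are illustrative placeholders.

```python
import random

# Hypothetical duration distributions: task 0 runs 1 or 2 quanta with
# equal probability; task 1 always runs 1 quantum.
DURATIONS = {
    0: [(1, 0.5), (2, 0.5)],  # (duration, probability) pairs
    1: [(1, 1.0)],
}

def successors(state, action):
    """Enumerate (next_state, probability) pairs for dispatching `action`.

    A state is a tuple of cumulative utilizations, one entry per task;
    dispatching a task advances its utilization by the sampled duration.
    """
    result = []
    for duration, prob in DURATIONS[action]:
        nxt = list(state)
        nxt[action] += duration
        result.append((tuple(nxt), prob))
    return result

def sample_transition(state, action):
    """Sample one transition, acting as a simulator of the shared resource."""
    durations, probs = zip(*DURATIONS[action])
    d = random.choices(durations, weights=probs)[0]
    nxt = list(state)
    nxt[action] += d
    return tuple(nxt)
```

Because `successors` enumerates every reachable next state with its probability, standard MDP solution methods (e.g. value iteration over a bounded region of the utilization space) can be applied when the distributions are known.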

In this paper, we posit that this problem formulation is well suited to reinforcement learning (RL) techniques. Our goal is to overcome a lack of knowledge about how the tasks behave internally by recording their utilizations (states) and the scheduling decisions made (actions), and observing the transitions among states under different actions. We give an overview of our progress to date and of our planned research in this direction over the next year or so.
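As an illustration of how such learning might proceed, the sketch below applies tabular Q-learning to observed utilization-state transitions. All details are assumptions for the example, not the paper's method: the learner sees only states and transitions (no duration distributions), and the reward penalizes deviation of task 0's utilization share from a hypothetical 50/50 target.

```python
import random
from collections import defaultdict

def reward(state):
    """Penalty for deviating from an assumed 50/50 utilization target."""
    total = sum(state) or 1
    return -abs(state[0] / total - 0.5)

def q_learn(simulate, n_tasks=2, episodes=200, steps=50,
            alpha=0.1, gamma=0.9, eps=0.1):
    """Learn Q-values from sampled transitions of an opaque simulator.

    `simulate(state, action)` returns the next utilization state; the
    learner never inspects the tasks' internal timing behavior.
    """
    Q = defaultdict(float)  # keyed by (state, action)
    for _ in range(episodes):
        state = (0,) * n_tasks
        for _ in range(steps):
            # Epsilon-greedy choice of which task to dispatch.
            if random.random() < eps:
                action = random.randrange(n_tasks)
            else:
                action = max(range(n_tasks), key=lambda a: Q[(state, a)])
            nxt = simulate(state, action)
            best_next = max(Q[(nxt, a)] for a in range(n_tasks))
            Q[(state, action)] += alpha * (
                reward(nxt) + gamma * best_next - Q[(state, action)])
            state = nxt
    return Q
```

With a simulator in which each dispatched task advances its own utilization by one quantum, the learned Q-values come to prefer the action that moves the utilization vector back toward the target share.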

