Publication
IEEE TPDS
Using Processor-Cache Affinity Information in Shared-Memory Multiprocessor Scheduling

Abstract

In a shared-memory multiprocessor system, it may be more efficient to schedule a task on one processor than on another if relevant data already resides in that processor's cache. In this paper we study the effects of this type of processor affinity. We observe that tasks continuously alternate between executing at a processor and releasing it due to I/O, synchronization, quantum expiration, or preemption. These factors suggest that ignoring processor-cache affinity, as existing multiprocessor operating systems typically do, can degrade performance. On the other hand, fixing tasks to run on specific processors may not be an appropriate alternative, due to the potential load imbalance and the transitory nature of processor-cache affinity. We formulate queueing network models of several abstract scheduling policies, spanning the range from ignoring affinity to fixing tasks on processors. These models are solved via Mean Value Analysis where possible, and by simulation otherwise. An analytic cache model is developed and used in these scheduling models to capture the effects of the initial burst of cache misses that a task experiences when it returns to a processor for execution. A mean-value technique is also developed and used in the scheduling models to capture the effects of the increased bus traffic caused by these bursts of cache misses. In comparing the different policies, our analysis shows that exploiting even the simplest forms of processor-cache affinity can provide significant improvements over ignoring it. On the other hand, scheduling decisions cannot be based solely on processor-cache affinity, or else other scheduling criteria, such as fairness, will be sacrificed. We show that only a small amount of affinity information needs to be maintained for each task, and demonstrate the importance of a policy that adapts its behavior to changes in system load.
Given the insights provided by our analysis, we develop a variety of practical scheduling policies, ranging from simple management of local queues to augmenting a priority discipline with affinity information. © 1993 IEEE
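The central idea — keeping only a small amount of affinity information per task (here, just the processor it last ran on) while still guarding against load imbalance — can be sketched as follows. This toy Python scheduler is illustrative only and is not one of the paper's policies; the names (`AffinityScheduler`, `max_imbalance`) and the specific imbalance rule are assumptions made for the sketch.

```python
from collections import deque

class Task:
    def __init__(self, tid):
        self.tid = tid
        self.last_cpu = None  # the only per-task affinity state retained

class AffinityScheduler:
    """Toy policy: prefer a task's last processor (warm cache), but fall
    back to the least-loaded queue when the preferred queue is too long.
    (Illustrative sketch, not the paper's actual policies.)"""

    def __init__(self, n_cpus, max_imbalance=2):
        self.queues = [deque() for _ in range(n_cpus)]
        self.max_imbalance = max_imbalance  # assumed tunable threshold

    def enqueue(self, task):
        # Index of the shortest run queue (load-balancing fallback).
        shortest = min(range(len(self.queues)),
                       key=lambda i: len(self.queues[i]))
        cpu = task.last_cpu
        # Honor affinity only if it does not create too much imbalance;
        # a task with no history simply goes to the shortest queue.
        if cpu is None or (len(self.queues[cpu]) -
                           len(self.queues[shortest]) > self.max_imbalance):
            cpu = shortest
        self.queues[cpu].append(task)

    def dispatch(self, cpu):
        # Run the next task on this processor and record the affinity.
        if self.queues[cpu]:
            task = self.queues[cpu].popleft()
            task.last_cpu = cpu
            return task
        return None
```

A task that blocks and later becomes runnable is re-enqueued toward its previous processor, so the initial burst of cache misses is paid only when the balance threshold forces a migration.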
