Management of multilevel, multiclient cache hierarchies with application hints
Abstract
Multilevel caching, common in many storage configurations, introduces new challenges to traditional cache management: data must be kept in the appropriate cache and replication avoided across the various cache levels. Additional challenges arise when the lower levels of the hierarchy are shared by multiple clients. Sharing can have both positive and negative effects. While data fetched by one client can be used by another client without incurring additional delays, clients competing for cache buffers can evict each other's blocks and interfere with exclusive caching schemes. We present a global, noncentralized, dynamic, and informed management policy for multiple levels of cache accessed by multiple clients. Our algorithm, MC2, combines local, per-client management with a global, system-wide scheme to emphasize the positive effects of sharing and reduce the negative ones. Our local management scheme, Karma, uses readily available information about the client's future access profile to save the most valuable blocks and to choose the best replacement policy for them. The global scheme uses the same information to divide the shared cache space between clients and to manage this space. Exclusive caching is maintained for nonshared data and is disabled when sharing is identified. Previous studies have partially addressed these challenges through minor changes to the storage interface. We show that all these challenges can in fact be addressed by combining minor interface changes with smart allocation and replacement policies. We show the superiority of our approach through comparison to existing solutions, including LRU, ARC, MultiQ, LRU-SP, and Demote, as well as a lower bound on optimal I/O response times. Our simulation results demonstrate better cache performance than all other solutions and up to 87% better performance than LRU on representative workloads.
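To make the abstract's description concrete, the following is a minimal, hedged sketch of hint-driven management of a shared lower-level cache: space is divided between clients in proportion to hinted value, and exclusivity is dropped for blocks observed to be shared. All names (HintedSharedCache, client_hints, demote) and the simple proportional split are illustrative assumptions; this is not the paper's actual MC2 or Karma algorithm.

```python
from collections import OrderedDict

class HintedSharedCache:
    """Sketch of a shared second-level cache managed with client hints.

    Each client supplies a hint (here, an expected value per cached block);
    the cache partitions its buffers between clients accordingly, keeps
    nonshared blocks exclusive via demotion, and disables exclusivity for
    blocks accessed by more than one client.
    """

    def __init__(self, capacity, client_hints):
        # client_hints: {client_id: hinted value of caching this client's blocks}
        self.capacity = capacity
        self.partitions = self._allocate(client_hints)
        # One queue per client partition (LRU here; a real scheme would pick
        # a replacement policy matching the hinted access pattern).
        self.queues = {c: OrderedDict() for c in client_hints}
        self.shared_blocks = set()   # blocks touched by more than one client
        self.owner = {}              # block -> first client seen accessing it

    def _allocate(self, hints):
        # Divide cache space proportionally to the hinted value of each
        # client's blocks (a stand-in for a marginal-gain based division).
        total = sum(hints.values()) or 1
        return {c: max(1, int(self.capacity * v / total)) for c, v in hints.items()}

    def access(self, client, block):
        """Record an access; return True on a hit in this client's partition."""
        # Detect sharing: once a second client touches a block, exclusive
        # caching is disabled for it (it may now reside at both levels).
        if block in self.owner and self.owner[block] != client:
            self.shared_blocks.add(block)
        self.owner.setdefault(block, client)

        queue = self.queues[client]
        if block in queue:
            queue.move_to_end(block)      # hit: refresh recency
            return True
        if len(queue) >= self.partitions[client]:
            queue.popitem(last=False)     # evict this client's LRU block
        queue[block] = True
        return False

    def demote(self, client, block):
        # A client evicting a nonshared block demotes it to this level,
        # preserving exclusivity; shared blocks are left where they are.
        if block not in self.shared_blocks:
            self.access(client, block)
```

For example, two clients with hints {1: 3.0, 2: 1.0} would receive roughly three quarters and one quarter of the shared cache, and a block demoted by client 1 but later read by client 2 would be marked shared and no longer handled exclusively.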