It's time to revisit LRU vs. FIFO
Abstract
We revisit the question of the effectiveness of the popular LRU cache eviction policy versus the simpler FIFO heuristic, which attempts to approximate LRU-like behavior. Several past works have considered this question and commonly concluded that although FIFO is much easier to implement, the improved hit ratio of LRU outweighs this advantage. We claim that two main trends call for a reevaluation: new caches, such as front-ends to cloud storage, operate at very large scale, which makes managing cache metadata in RAM no longer feasible; and new workloads have emerged that possess different characteristics. We model the overall cost of running LRU and FIFO in a very large scale cache and evaluate this cost using a number of publicly available traces. Our main evaluation workload is a new set of traces that we collected from a large public cloud object storage service, and on these new traces FIFO exhibits a lower overall cost than LRU. We hope that these observations reignite the evaluation of cache eviction policies under new circumstances, and that the new traces, which we intend to make public, serve as a testing ground for such work.
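For readers unfamiliar with the two policies, the following is a minimal illustrative sketch (not taken from the paper; class and method names are chosen for illustration) contrasting them: LRU must update recency metadata on every cache hit, whereas FIFO evicts purely by insertion order and leaves hits metadata-free, which is the implementation gap the abstract alludes to.

```python
from collections import OrderedDict, deque

class LRUCache:
    """LRU: every hit updates recency metadata (move-to-front)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)        # metadata write on every hit
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict least recently used

class FIFOCache:
    """FIFO: hits touch no metadata; eviction follows insertion order."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = {}
        self.order = deque()

    def get(self, key):
        return self.items.get(key)          # no metadata update on a hit

    def put(self, key, value):
        if key not in self.items:
            self.order.append(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            oldest = self.order.popleft()
            del self.items[oldest]          # evict oldest inserted item
```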