Evolution of Memory Architecture
Abstract
Computer memories continue to serve the role that they first served in the electronic discrete variable automatic computer (EDVAC) documented by John von Neumann, namely that of supplying instructions and operands for calculations in a timely manner. As technology has made possible significantly larger and faster machines with multiple processors, the distance of this memory from the processor, measured in processor cycles, has increased considerably. Microarchitectural techniques have evolved to share this memory across ever-larger systems of processors with deep cache hierarchies, and have managed to hide this latency for many applications, but they are proving expensive and energy-inefficient for newer classes of problems that operate on massive amounts of data. New paradigms include scale-out systems distributed across hundreds and even thousands of nodes, in-memory databases that keep data in memory much longer than the duration of a single task, and near-data computation, where some of the computation is offloaded to the location of the data to avoid wasting energy on data movement. This paper provides a historical perspective on the evolution of memory architecture and suggests that the requirements of new problems and new applications are likely to fundamentally change processor and system architecture, moving it away from the currently established von Neumann model.