Publication
DRC 2017
Conference paper

Scaling is over - What now?

Abstract

Over the last half century, the device community was guided by two quintessential laws that set the roadmap for device work: (1) Moore's law, which provided the commercial push to double device count at a cadence of approximately two years, and (2) Dennard's scaling laws, which provided the physics to do just that. These driving forces are slowing down due to power constraints; in fact, the clock frequency has not changed significantly since the 65 nm node. Meanwhile, the basic underlying von Neumann architecture, shown in Fig. 1, which binds the compute unit to the memory, has not changed since its conception in the late 1940s. It has served well as a general computing platform for traditional workloads such as transaction processing and databases. However, as more and more data is collected and the desire to extract value from this data grows, a class of workloads is emerging that is ill suited to run on von Neumann machines. These so-called cognitive workloads handle massive amounts of data (tens of PB) compared with roughly 100 GB in traditional workloads. They do not require the same digital accuracy and are more noise tolerant. These relaxed requirements make it possible to build accelerator chips with improved compute efficiency compared to the general-purpose CPU [1], albeit still within the context of the von Neumann architecture. Neuromorphic computing solutions will eliminate the von Neumann bottleneck by reducing data movement in the system.
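
The bottleneck argument can be made concrete with a rough count of the bytes that must cross the memory interface. The sketch below is purely illustrative and not taken from the paper; the layer size, word width, and function names are assumptions chosen to show why keeping weights resident in memory cuts data movement by orders of magnitude for a matrix-vector multiply, the core kernel of many cognitive workloads.

    # Illustrative back-of-the-envelope sketch (not from the paper): byte counts
    # for a matrix-vector multiply. In a von Neumann machine every weight travels
    # across the memory bus to the compute unit; an in-memory (neuromorphic-style)
    # design keeps the weights stationary and only moves the much smaller input
    # and output vectors.

    def bytes_moved_von_neumann(rows, cols, bytes_per_word=4):
        # Full weight matrix plus input and output vectors cross the bus.
        return bytes_per_word * (rows * cols + cols + rows)

    def bytes_moved_in_memory(rows, cols, bytes_per_word=4):
        # Weights stay resident in the memory array; only vectors move.
        return bytes_per_word * (cols + rows)

    if __name__ == "__main__":
        rows, cols = 4096, 4096  # one hypothetical fully connected layer
        vn = bytes_moved_von_neumann(rows, cols)
        im = bytes_moved_in_memory(rows, cols)
        print(f"von Neumann: {vn / 1e6:.1f} MB moved")
        print(f"in-memory:   {im / 1e6:.2f} MB moved")
        print(f"reduction:   {vn / im:.0f}x")

Under these assumed numbers, the weight traffic dominates by roughly three orders of magnitude, which is the data movement that in-memory neuromorphic approaches aim to eliminate.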
