Publication
VLSI Technology 2023
Invited talk

Phase Change Memory-based Hardware Accelerators for Deep Neural Networks

Abstract

Analog non-volatile memory (NVM)-based accelerators for deep neural networks implement multiply-accumulate (MAC) operations in parallel, on large arrays of resistive devices, by using Ohm's law and Kirchhoff's current law. By completely avoiding weight motion, such fully weight-stationary systems can offer a unique combination of low latency, high throughput, and high energy efficiency (e.g., high TeraOPS/W). Although analog operations are inherently imprecise, most Deep Neural Networks (DNNs) require only modest (e.g., 4-bit) precision in synaptic operations, so such systems can still deliver "software-equivalent" accuracies on a wide range of models. We describe a 14-nm inference chip, comprising multiple 512x512 arrays of Phase Change Memory (PCM) devices, which can deliver software-equivalent inference accuracy for MNIST handwritten-digit recognition and recurrent LSTM benchmarks, and we discuss PCM challenges such as conductance drift and noise.
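
The sketch below is a minimal illustration, not code from the paper, of how a crossbar of conductances performs a MAC in one step: Ohm's law gives per-device currents G[i,j]*V[i], and Kirchhoff's current law sums them along each column. The conductance range, drift exponent, noise level, and timing values are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

rows, cols = 512, 512                      # one PCM array tile, as in the abstract
G = rng.uniform(0.0, 25e-6, (rows, cols))  # conductances in siemens (illustrative range)
V = rng.uniform(0.0, 0.2, rows)            # read voltages encoding the input activations

# Column currents: I[j] = sum_i G[i, j] * V[i]  (the analog MAC outputs)
I_ideal = V @ G

# Rough model of PCM non-idealities mentioned in the abstract:
# conductance drift G(t) = G(t0) * (t / t0) ** (-nu), plus additive read noise.
nu = 0.05                                  # assumed drift exponent (illustrative)
t, t0 = 3600.0, 1.0                        # read one hour after programming
G_drifted = G * (t / t0) ** (-nu)
read_noise = rng.normal(0.0, 0.02 * G.mean(), G.shape)
I_real = V @ (G_drifted + read_noise)

print("relative MAC error:", np.abs(I_real - I_ideal).mean() / np.abs(I_ideal).mean())
```

Because every column computes its dot product simultaneously, a single read of the array yields an entire matrix-vector product, which is the source of the throughput and energy-efficiency figures cited above.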