ISCAS 2023
Invited talk

Architectures and Circuits for Analog-memory-based Hardware Accelerators for Deep Neural Networks


We describe a highly heterogeneous and programmable accelerator architecture that combines analog non-volatile memory (NVM) array "Tiles" for weight-stationary, energy-efficient MAC operations with special-function digital Compute-Cores for auxiliary computation. Massively parallel vectors of neuron-activation data are exchanged over short distances using a dense and efficient circuit-switched 2D mesh, enabling a wide range of DNN workloads, including CNNs, LSTMs, and Transformers. We also describe a 14-nm inference chip, built from multiple 512×512 arrays of Phase-Change Memory (PCM) devices, that implements multiple DNN benchmarks using such a circuit-switched 2D mesh.
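The weight-stationary analog MAC described above can be illustrated with a minimal sketch. It assumes an idealized model in which weights are programmed once into device conductances and each vector-matrix product is obtained via Ohm's law and Kirchhoff current summation; the class name, noise model, and parameter values are illustrative assumptions, not details from the talk or the chip.

```python
import numpy as np

rng = np.random.default_rng(0)

class AnalogTile:
    """Idealized 512x512 analog NVM tile: weights stay resident as
    device conductances (weight-stationary); activations arrive as
    input voltages, and the MAC result is the per-column current sum."""

    def __init__(self, weights, write_noise=0.02):
        # Programming conductances (e.g., into PCM) is imperfect;
        # model this as additive Gaussian write noise (illustrative value).
        self.g = weights + write_noise * rng.standard_normal(weights.shape)

    def mac(self, x):
        # Ohm's law per device (I = V * G) plus Kirchhoff summation
        # along each column yields the whole vector-matrix product in
        # one parallel analog step; here it reduces to a matmul.
        return x @ self.g

# Hypothetical example: one 512x512 tile, one activation vector.
w = rng.standard_normal((512, 512))
tile = AnalogTile(w)
x = rng.standard_normal(512)

y_analog = tile.mac(x)   # noisy in-memory result
y_exact = x @ w          # ideal digital result

# The analog result tracks the exact product up to device noise.
err = np.linalg.norm(y_analog - y_exact) / np.linalg.norm(y_exact)
```

In a real tile the activations would be encoded as voltages or pulse durations and the column currents digitized by ADCs; the sketch collapses all of that into a single noisy matrix multiply to show why keeping weights stationary avoids weight movement entirely.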