Publication
AICAS 2020
Conference paper
Accelerating Deep Neural Networks with Analog Memory Devices
Abstract
Acceleration of training and inference of Deep Neural Networks (DNNs) with non-volatile memory (NVM) arrays, such as Phase-Change Memory (PCM), offers promising advantages in energy efficiency and speed over digital implementations based on CPUs and GPUs. By leveraging a combination of PCM devices and CMOS circuits, high training accuracy can be achieved, leading to software-equivalent results on small and medium-sized datasets. In addition, weights encoded with multiple PCM devices can enable high-speed and low-power inference, as shown here for Long Short-Term Memory (LSTM) networks.
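To make the multi-device weight encoding mentioned in the abstract concrete, the sketch below shows one possible scheme under stated assumptions (it is not taken from the paper): each weight is stored differentially on a most-significant and a least-significant pair of PCM conductances, combined through a significance factor F, with the second pair compensating the programming error left on the first. All names and values (encode_weight, decode_weight, F, G_MAX, the write-noise level) are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a multi-device differential PCM weight encoding
# (illustrative assumptions only, not the paper's exact scheme):
# each weight w is represented by two pairs of conductances, a
# most-significant pair (Gp, Gm) and a least-significant pair (gp, gm),
# combined with a significance factor F:
#     w  ~  F * (Gp - Gm) + (gp - gm)
# The second pair absorbs the programming error left on the first.

F = 3.0          # assumed significance factor between the two device pairs
G_MAX = 25e-6    # assumed maximum programmable conductance (siemens)
rng = np.random.default_rng(0)

def encode_weight(w, w_max=1.0):
    """Map a scalar weight in [-w_max, w_max] onto four conductances (hypothetical helper)."""
    target = np.clip(w, -w_max, w_max) / w_max * (F + 1) * G_MAX
    # Program the most-significant pair imperfectly (simulated write noise) ...
    msb = np.clip(target / F + rng.normal(0.0, 0.02 * G_MAX), -G_MAX, G_MAX)
    # ... and store the residual on the least-significant pair.
    lsb = np.clip(target - F * msb, -G_MAX, G_MAX)
    Gp, Gm = (msb, 0.0) if msb >= 0 else (0.0, -msb)
    gp, gm = (lsb, 0.0) if lsb >= 0 else (0.0, -lsb)
    return Gp, Gm, gp, gm

def decode_weight(Gp, Gm, gp, gm, w_max=1.0):
    """Effective weight read back from the four conductances."""
    return (F * (Gp - Gm) + (gp - gm)) / ((F + 1) * G_MAX) * w_max

if __name__ == "__main__":
    for w in (-0.8, 0.0, 0.37):
        print(w, decode_weight(*encode_weight(w)))  # reconstruction up to clipping
```

In a scheme like this, the coarse pair carries most of the dynamic range while the fine pair corrects device-level write errors, which is one way encoding a weight across multiple PCM devices can help preserve inference accuracy.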