Inference of Deep Neural Networks with Analog Memory Devices
Stefano Ambrogio, Pritish Narayanan, et al.
VLSI-TSA 2020
Recent advances in deep learning have been driven by ever-increasing model sizes, with networks growing to millions or even billions of parameters. Such enormous models call for fast and energy-efficient hardware accelerators. We study the potential of Analog AI accelerators based on Non-Volatile Memory, in particular Phase Change Memory (PCM), for software-equivalent inference accuracy on natural language processing applications. We demonstrate a path to software-equivalent accuracy on the GLUE benchmark with BERT (Bidirectional Encoder Representations from Transformers) by combining noise-aware training, which mitigates inherent PCM drift and noise sources, with reduced-precision digital attention-block computation down to INT6.
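To illustrate the two techniques named in the abstract, below is a minimal PyTorch sketch of noise-aware training via multiplicative Gaussian weight noise, together with a symmetric INT6 fake-quantization helper of the kind one might apply to attention-block activations. The class and function names, the noise model, and the noise scale `sigma` are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class NoisyLinear(nn.Linear):
    """Linear layer that injects multiplicative Gaussian noise into its
    weights during training, loosely mimicking PCM conductance
    variability. The noise scale `sigma` is a hypothetical parameter,
    not a value from the paper."""

    def __init__(self, in_features, out_features, sigma=0.05, **kwargs):
        super().__init__(in_features, out_features, **kwargs)
        self.sigma = sigma

    def forward(self, x):
        if self.training:
            # Perturb weights on every forward pass so the optimizer
            # learns parameters that remain accurate under analog noise.
            noise = torch.randn_like(self.weight) * self.sigma * self.weight.abs()
            return nn.functional.linear(x, self.weight + noise, self.bias)
        return super().forward(x)


def fake_quant_int6(t: torch.Tensor) -> torch.Tensor:
    """Symmetric per-tensor fake quantization to INT6 (range [-32, 31]),
    illustrating reduced-precision digital attention-block arithmetic."""
    scale = t.abs().max().clamp(min=1e-8) / 31.0
    return (t / scale).round().clamp(-32, 31) * scale
```

A transformer's attention projections could swap `nn.Linear` for `NoisyLinear` during fine-tuning, and `fake_quant_int6` could wrap the attention score and value tensors to simulate INT6 arithmetic; both are sketches under the stated assumptions, not the authors' implementation.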