AAIEA 2020
Invited talk

Analog resistive crossbar arrays for deep learning acceleration

Abstract

Chip architectures based on resistive crossbar arrays have the potential to surpass digital accelerators in deep learning performance and energy efficiency. In such circuits, neural network weights can be represented by the conductances of analog resistive devices at each crosspoint, allowing vector-matrix multiplications to be performed in parallel directly within the array. Noting that deep learning algorithms are rather robust to reduced arithmetic precision, we discuss the criteria analog devices must meet to be suitable for deep learning inference or training. We then give an overview of candidate device technologies, some of which were originally developed for non-volatile memory applications. Finally, we discuss algorithmic innovations designed to accommodate analog device non-idealities.
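
To make the crosspoint mechanism concrete, the sketch below simulates a crossbar-style vector-matrix multiplication in NumPy: weights are mapped onto a pair of non-negative conductance arrays (a common differential encoding), Gaussian noise stands in for imperfect programming, and the read-out result is compared with the exact digital product. The encoding, noise model, and all names and parameter values here are illustrative assumptions, not details of any specific device technology discussed in the talk.

```python
import numpy as np

def program_crossbar(W, g_max=1.0, noise_std=0.02, rng=None):
    """Map a weight matrix onto two conductance arrays (differential encoding).

    Positive weights go to G_plus, negative weights to G_minus, scaled so the
    largest |w| maps to g_max. Gaussian noise models imperfect programming.
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = g_max / np.max(np.abs(W))
    G_plus = np.clip(W, 0, None) * scale
    G_minus = np.clip(-W, 0, None) * scale
    # Add device-to-device programming noise to each conductance.
    G_plus += rng.normal(0.0, noise_std * g_max, G_plus.shape)
    G_minus += rng.normal(0.0, noise_std * g_max, G_minus.shape)
    # Conductances are physical quantities and cannot be negative.
    return np.clip(G_plus, 0, g_max), np.clip(G_minus, 0, g_max), scale

def crossbar_matvec(G_plus, G_minus, x, scale):
    """Analog-style product: column currents I = G @ V, read differentially."""
    return (G_plus - G_minus) @ x / scale

# Example: compare the noisy analog product with the exact digital one.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
x = rng.standard_normal(8)
Gp, Gm, s = program_crossbar(W, rng=rng)
print("digital:", W @ x)
print("analog :", crossbar_matvec(Gp, Gm, x, s))
```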

Date

24 Sep 2020

Publication

AAIEA 2020
