Publication
AAIEA 2020
Invited talk
Analog resistive crossbar arrays for deep learning acceleration
Abstract
Chip architectures based on resistive crossbar arrays have the potential to surpass digital accelerators in deep learning performance and energy efficiency. In such circuits, neural network weights can be represented by the conductances of analog resistive devices at each crosspoint, allowing vector-matrix multiplications to be performed in parallel directly in the analog domain. Noting that deep learning algorithms are rather robust to reduced arithmetic precision, we discuss the criteria that analog devices must meet to be suitable for deep learning inference or training. We then give an overview of candidate device technologies, some of which were originally developed for non-volatile memory applications. Finally, we discuss algorithmic innovations designed to accommodate analog device non-idealities.
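To illustrate the idea of weights stored as conductances, the following is a minimal simulation sketch (not from the talk; all parameter values and the differential-pair encoding are illustrative assumptions). It maps signed weights onto pairs of non-negative conductances, performs the read-out as a single matrix product (standing in for Ohm's and Kirchhoff's laws acting in parallel), and models two common non-idealities: conductance programming noise and a finite-resolution ADC.

```python
import numpy as np

rng = np.random.default_rng(0)

def to_conductances(W, g_max=1.0):
    """Map signed weights onto a differential pair of non-negative
    conductances (G_plus, G_minus), a common encoding scheme."""
    scale = g_max / np.max(np.abs(W))
    G_plus = np.clip(W, 0.0, None) * scale
    G_minus = np.clip(-W, 0.0, None) * scale
    return G_plus, G_minus, scale

def analog_mvm(W, x, noise_std=0.01, adc_bits=8):
    """Simulated crossbar vector-matrix multiply y = W @ x with
    illustrative device non-idealities (values are assumptions)."""
    G_plus, G_minus, scale = to_conductances(W)
    # Programming noise: each device deviates from its target conductance.
    G_plus = G_plus + rng.normal(0.0, noise_std, G_plus.shape)
    G_minus = G_minus + rng.normal(0.0, noise_std, G_minus.shape)
    # Currents sum along each wire in parallel; one analog read-out step.
    y = (G_plus - G_minus) @ x / scale
    # Finite ADC resolution quantizes the read-out.
    y_max = max(float(np.max(np.abs(y))), 1e-12)
    levels = 2 ** (adc_bits - 1)
    return np.round(y / y_max * levels) / levels * y_max

W = rng.standard_normal((4, 8))
x = rng.standard_normal(8)
print("ideal: ", W @ x)
print("analog:", analog_mvm(W, x))
```

The output deviates slightly from the exact product, which is exactly the kind of error the abstract notes deep learning tolerates well, and which the algorithmic techniques mentioned at the end aim to compensate for.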