Publication
Date: 2020
Invited talk
Analog resistive crossbar arrays for neural network acceleration
Abstract
We will discuss the development of analog resistive crossbar arrays for neural network acceleration, and more specifically for deep learning (DL). In the near term, the efficiency of digital DL accelerators can be improved by running the neural network arithmetic at lower precision; we will see how that can be done without sacrificing accuracy on the neural network task. Beyond that, in-memory computing based on non-volatile memory is a long-term path to higher deep-learning compute efficiency. We are developing analog accelerators based on crossbar arrays of resistive devices. In practice, the real devices we build deviate from their ideal properties in a number of ways. We will discuss adjustments to the deep-learning algorithms that accommodate such non-idealities, so that compute efficiency is not improved at the expense of accuracy. Reaching the goal of optimizing performance while leaving accuracy unaffected requires co-development and co-optimization across the entire stack.
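To make the core idea concrete, below is a minimal sketch (not from the talk) of how a resistive crossbar performs an analog matrix-vector multiply: weights are mapped to device conductances, and the output currents accumulate the products along each column. The sketch also models two of the non-idealities the abstract alludes to, namely finite conductance resolution (a form of reduced precision) and programming noise. All function names, parameters, and noise values here are illustrative assumptions, not the talk's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

def ideal_mvm(weights, x):
    """Exact matrix-vector product: the digital reference."""
    return weights @ x

def crossbar_mvm(weights, x, g_max=1.0, noise_std=0.02, n_levels=256):
    """Simulate an analog crossbar matrix-vector multiply.

    Weights are mapped to device conductances, quantized to a finite
    number of distinguishable conductance levels, and perturbed by
    Gaussian programming noise before the analog multiply-accumulate.
    The parameters are hypothetical, chosen only for illustration.
    """
    w_max = np.max(np.abs(weights))
    g = weights / w_max * g_max                   # map weights to conductances
    step = 2 * g_max / (n_levels - 1)
    g = np.round(g / step) * step                 # finite conductance resolution
    g = g + rng.normal(0.0, noise_std * g_max, g.shape)  # programming noise
    return (g @ x) * w_max / g_max                # column currents, rescaled to weight units

weights = rng.standard_normal((64, 128)) * 0.1
x = rng.standard_normal(128)

y_ref = ideal_mvm(weights, x)
y_analog = crossbar_mvm(weights, x)
rel_err = np.linalg.norm(y_analog - y_ref) / np.linalg.norm(y_ref)
print(f"relative MVM error from non-idealities: {rel_err:.3%}")
```

In this toy setting, the relative error of the analog product quantifies how device non-idealities perturb the computation; the algorithmic adjustments discussed in the talk aim to keep such perturbations from degrading end-to-end neural network accuracy.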