Publication
ISSCC 2024
Conference paper
11.4 IBM NorthPole: An Architecture for Neural Network Inference with a 12nm Chip
Abstract
The Deep Neural Network (DNN) era was ushered in by the triad of algorithms, big data, and more powerful hardware processors for training large-scale neural networks. Now, the ubiquitous deployment of DNNs for neural inference in edge, embedded, and data center applications demands hardware processors that are more power-efficient while delivering ever-higher computational performance. To address this Inference Challenge, we developed the NorthPole Architecture and implemented a NorthPole Chip instantiation [1, 2].