Publication
HPCA 2023
Tutorial

Deep learning inference using computational phase-change memory

Abstract

The computing systems that run today’s AI algorithms are based on the von Neumann architecture, which is inefficient at shuttling huge amounts of data back and forth at high speed. Thus, to build efficient cognitive computers, we need to transition to novel architectures where memory and processing are better collocated. In-memory computing is one such approach, in which the physical attributes and state dynamics of memory devices are exploited to perform certain computational tasks in place with very high areal and energy efficiency. In this tutorial, I will present our latest efforts in employing such a computational memory architecture for performing inference of deep neural networks. First, the phase-change memory technology we use as computational memory will be described. Next, the application of computational memory to neural network inference will be explained, and experimental results will be presented based on a state-of-the-art, fully integrated 64-core computational phase-change memory chip. Finally, I will present our open-source toolkit (https://analog-ai.mybluemix.net/) for simulating inference and training of neural networks with computational memory.
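As a conceptual illustration only (not the programming model of the chip or toolkit described above), the core idea of in-memory computing for inference — performing a matrix-vector multiply directly in a crossbar of device conductances, subject to programming noise — can be sketched in plain Python. The function names and noise model here are illustrative assumptions:

```python
import random

def program_conductances(weights, noise_std=0.02):
    """Map a weight matrix onto device conductances. The additive Gaussian
    term is a toy model of imperfect programming of phase-change cells."""
    return [[w + random.gauss(0.0, noise_std) for w in row] for row in weights]

def analog_mvm(conductances, voltages):
    """In-memory matrix-vector multiply: each output current is the sum of
    voltage * conductance along a crossbar row (Ohm's law plus Kirchhoff's
    current law), so the multiply-accumulate happens in place."""
    return [sum(g * v for g, v in zip(row, voltages)) for row in conductances]

weights = [[0.5, -0.2], [0.1, 0.8]]
g = program_conductances(weights, noise_std=0.0)  # noiseless for clarity
y = analog_mvm(g, [1.0, 2.0])  # approximately [0.1, 1.7]
```

In a real computational memory chip the accumulation is performed by the physics of the array itself rather than by a loop, which is the source of the areal and energy efficiency gains the abstract describes.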
