Publication
CSTIC 2019
Conference paper

Accelerating deep neural networks with analog memory devices


Abstract

Deep Neural Networks (DNNs) are large artificial neural networks trained on very large datasets, typically using the supervised learning technique known as backpropagation. Currently, CPUs and GPUs are used for these computations. Over the next few years, we can expect special-purpose hardware accelerators based on conventional digital-design techniques to optimize the GPU framework for these DNN computations. Here there are opportunities to increase speed and reduce power for two distinct but related tasks: training and forward-inference. During training, the weights of a DNN are adjusted to improve network performance through repeated exposure to the labelled data-examples of a large dataset; often this involves a distributed network of chips working together in the cloud. During forward-inference, already-trained networks are used to analyze new data-examples, sometimes in a latency-constrained cloud environment and sometimes in a power-constrained environment (sensors, mobile phones, 'edge-of-network' devices, etc.). Even after the improved computational performance and efficiency expected from these special-purpose digital accelerators, there would still be an opportunity for higher performance and better energy-efficiency from neuromorphic computation based on analog memories (Fig. 1).
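
To make the training vs. forward-inference distinction concrete, the following is a minimal, hypothetical NumPy sketch (not from the paper): a tiny network trained by backpropagation on a toy labelled dataset, then used read-only for inference. The repeated multiply-accumulate operations and incremental weight updates shown here are exactly the workloads that analog memory devices aim to accelerate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labelled dataset (hypothetical example): learn XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights: the quantities adjusted during training.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for epoch in range(5000):
    # Forward pass: matrix-vector products dominate the computation.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: propagate the output error back to each weight.
    err_out = (out - y) * out * (1 - out)
    err_h = (err_out @ W2.T) * h * (1 - h)

    # Weight updates: many small adjustments per exposure to the data.
    W2 -= lr * h.T @ err_out
    W1 -= lr * X.T @ err_h

# Forward-inference: the trained weights are only read, never modified.
print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))
```

Training requires both directions of data flow plus weight writes, while inference needs only the forward pass, which is why the two tasks present different acceleration opportunities.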

Date

01 Mar 2019
