Emerging Neural Workloads and Their Impact on Hardware
Abstract
We consider existing and emerging neural workloads and the hardware accelerators best suited to them. We begin with a discussion of analog crossbar arrays, which are well suited to the matrix-vector multiplication operations that dominate existing neural network models such as convolutional neural networks (CNNs). We highlight candidate crosspoint devices, the device and materials challenges that must be overcome before a given device can be employed in a crossbar array for a computationally interesting neural workload, and how circuit and algorithmic optimizations can mitigate undesirable device and materials characteristics. We then discuss two emerging neural workloads. The first is machine learning models for one- and few-shot learning tasks, where a network is trained with just one or a few representative examples of a given class. Notably, crossbar-based architectures can accelerate such models; hardware solutions based on content addressable memory arrays are also discussed. The second is machine learning models for recommendation systems. Recommendation models employ distinct neural network architectures that operate on both continuous and categorical input features, which makes hardware acceleration challenging. We discuss the open research challenges and opportunities in this space.
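To make the crossbar discussion concrete, the following minimal NumPy sketch simulates a signed matrix-vector multiply on an idealized analog crossbar: weights are mapped onto differential conductance pairs, row voltages encode the input vector, and column currents (Ohm's law plus Kirchhoff's current law) realize the dot products. The differential mapping, the g_min/g_max conductance range, and the multiplicative programming-noise model are illustrative assumptions for this sketch, not the specific devices or nonidealities analyzed in the chapter.

import numpy as np

def crossbar_mvm(weights, x, g_min=1e-6, g_max=1e-4, noise_std=0.02, rng=None):
    """Idealized analog crossbar matrix-vector multiply (illustrative sketch).

    Signed weights are represented as differential conductance pairs
    (G+, G-); column currents implement the dot products.
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = (g_max - g_min) / np.abs(weights).max()
    # Map positive and negative weight parts onto [g_min, g_max].
    g_pos = g_min + np.clip(weights, 0, None) * scale
    g_neg = g_min + np.clip(-weights, 0, None) * scale
    # Hypothetical multiplicative programming noise on each device.
    g_pos = g_pos * rng.normal(1.0, noise_std, g_pos.shape)
    g_neg = g_neg * rng.normal(1.0, noise_std, g_neg.shape)
    # Row voltages in, summed column currents out; g_min offsets cancel
    # in the differential pair.
    i_out = (g_pos - g_neg) @ x
    return i_out / scale

# Relative error versus the ideal digital result, dominated by device noise.
rng = np.random.default_rng(0)
W, x = rng.standard_normal((64, 128)), rng.standard_normal(128)
print(np.linalg.norm(crossbar_mvm(W, x, rng=rng) - W @ x) / np.linalg.norm(W @ x))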
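For the few-shot workload, a content addressable memory can be viewed as a parallel nearest-neighbor search over stored support examples: each class's feature vector is written once, and a query returns the best-matching entry. The software stand-in below sketches that behavior under stated assumptions; the TernaryCAM name, the sign binarization, and the Hamming-distance match criterion are hypothetical choices for illustration, not the chapter's particular CAM design.

import numpy as np

def binarize(x):
    """Sign-binarize features so similarity reduces to Hamming distance."""
    return np.where(x >= 0, 1, -1)

class TernaryCAM:
    """Software stand-in for a content addressable memory array.

    Each stored row is one support example's binarized feature vector; a
    search returns the label of the row with minimum Hamming distance,
    mimicking the parallel in-memory match a CAM performs.
    """
    def __init__(self):
        self.rows, self.labels = [], []

    def write(self, features, label):
        self.rows.append(binarize(features))
        self.labels.append(label)

    def search(self, query):
        q = binarize(query)
        # For +/-1 vectors of dimension d: Hamming distance = (d - dot) / 2.
        dists = [(len(q) - int(r @ q)) // 2 for r in self.rows]
        return self.labels[int(np.argmin(dists))]

# One-shot usage: write one support vector per class, then classify a query.
cam = TernaryCAM()
rng = np.random.default_rng(0)
protos = {c: rng.standard_normal(256) for c in ("cat", "dog")}
for c, p in protos.items():
    cam.write(p, c)
print(cam.search(protos["cat"] + 0.3 * rng.standard_normal(256)))  # -> "cat"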
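For recommendation workloads, the acceleration difficulty stems largely from the categorical inputs: they are handled by large embedding tables whose sparse, irregular lookups are memory-bound, in contrast to the compute-bound dense layers that process continuous features. The DLRM-style sketch below illustrates this mixed pipeline; the table sizes, embedding dimension, and pairwise dot-product interaction are illustrative assumptions rather than the specific architectures surveyed in the chapter.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: two categorical features with large vocabularies
# (the memory-bound embedding lookups) and a small dense feature vector.
vocab_sizes, emb_dim, dense_dim = [10_000, 50_000], 16, 8
tables = [rng.standard_normal((v, emb_dim)) * 0.01 for v in vocab_sizes]
W_dense = rng.standard_normal((dense_dim, emb_dim)) * 0.1

def forward(dense_x, cat_ids):
    """DLRM-style forward pass: embedding lookups for categorical inputs,
    a dense projection for continuous inputs, then pairwise dot-product
    feature interactions feeding a (omitted) top MLP."""
    vecs = [dense_x @ W_dense] + [tables[i][cid] for i, cid in enumerate(cat_ids)]
    V = np.stack(vecs)                        # (num_features, emb_dim)
    inter = V @ V.T                           # all pairwise dot products
    iu = inter[np.triu_indices(len(vecs), k=1)]
    return np.concatenate([vecs[0], iu])      # input to the top MLP

print(forward(rng.standard_normal(dense_dim), [123, 4567]).shape)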