Transactions on Embedded Computing Systems

OnSRAM: Efficient Inter-Node On-Chip Scratchpad Management in Deep Learning Accelerators


Abstract

Hardware acceleration of Artificial Intelligence (AI) workloads has gained widespread popularity for its potential to deliver unprecedented performance and efficiency. An important challenge remains in how AI accelerators are programmed to sustain high utilization without impacting end-user productivity. Prior software optimizations start with an input graph and focus on node-level optimizations, viz. dataflows and hierarchical tiling, and graph-level optimizations such as operation fusion. However, little effort has been devoted to inter-node on-chip scratchpad memory (SPM) management in Deep Learning (DL) accelerators, whose importance is bolstered by recent trends toward complex network topologies and the emergence of eager execution in DL frameworks. We characterize DL inference and show that there exists a performance gap of up to 5.2× that can be bridged through SPM management, and we propose OnSRAM, a novel SPM management framework integrated with the compiler runtime of a DL accelerator. We develop two variants: OnSRAM-Static, which works on static graphs to identify data structures that can be profitably held on-chip based on their size, liveness, and significance, and OnSRAM-Eager, which targets an eager execution model (no graph) and uses a history-based speculative scheme to hold or discard data structures. We integrate OnSRAM with TensorFlow and analyze it on multiple accelerator configurations. Across a suite of 12 image, object, and language networks, on a 3 TFLOP system with a 2 MB SPM and 32 GBps external memory bandwidth, OnSRAM-Static and OnSRAM-Eager achieve 1.02-4.8× and 1.02-3.1× reductions in inference latency (batch size of 1) over a baseline with no SPM management. In terms of energy savings, we observe average reductions of 1.51× (up to 4.1×) and 1.23× (up to 2.9×) for the static and eager execution scenarios, respectively.
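
To make the static-graph pinning idea concrete, the short Python sketch below shows a greedy scratchpad-pinning planner that ranks tensors by reuse per byte and packs them into a fixed SPM budget. This is a minimal illustration under stated assumptions, not the paper's algorithm: the names (Tensor, plan_spm_pinning), the reuse-per-byte score, and the omission of liveness-overlap handling are all hypothetical simplifications.

    # Hypothetical sketch of inter-node SPM pinning for a static DL graph.
    # A real planner (such as the one described in the paper) would also
    # account for liveness overlap between candidates and the cost model of
    # the accelerator; this sketch only shows the size/reuse trade-off.
    from dataclasses import dataclass

    @dataclass
    class Tensor:
        name: str
        size_bytes: int    # scratchpad footprint of the tensor
        first_use: int     # graph node index that produces it
        last_use: int      # last graph node index that consumes it (liveness end)
        reuse_count: int   # number of consumers ("significance" proxy)

    def plan_spm_pinning(tensors, spm_capacity_bytes):
        """Return names of tensors to hold on-chip, greedily packing the
        highest reuse-per-byte candidates that fit in the scratchpad."""
        # Prefer tensors whose on-chip residency avoids the most external
        # memory traffic relative to the capacity they occupy.
        ranked = sorted(
            (t for t in tensors if t.size_bytes <= spm_capacity_bytes),
            key=lambda t: t.reuse_count / t.size_bytes,
            reverse=True,
        )
        pinned, used = [], 0
        for t in ranked:
            if used + t.size_bytes <= spm_capacity_bytes:
                pinned.append(t.name)
                used += t.size_bytes
        return pinned

    if __name__ == "__main__":
        graph_tensors = [
            Tensor("conv1_out", 512 * 1024, first_use=0, last_use=3, reuse_count=2),
            Tensor("residual", 256 * 1024, first_use=1, last_use=8, reuse_count=4),
            Tensor("fc_weights", 4 * 1024 * 1024, first_use=9, last_use=9, reuse_count=1),
        ]
        # Example with the 2 MB SPM configuration from the abstract.
        print(plan_spm_pinning(graph_tensors, spm_capacity_bytes=2 * 1024 * 1024))

The eager-execution variant cannot rely on such a whole-graph view; per the abstract, it instead speculates from the execution history which outputs are likely to be reused and holds or discards them accordingly.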