Publication
DAC 2018
Conference paper

DyHard-DNN: Even more DNN acceleration with dynamic hardware reconfiguration

Abstract

Deep Neural Networks (DNNs) have demonstrated their utility across a wide range of input data types and are deployed on diverse computing substrates, from edge devices to datacenters. This broad utility has given rise to myriad hardware accelerator architectures. However, DNNs exhibit significant heterogeneity in their computational characteristics, e.g., feature and kernel dimensions, and dramatic variance in computational intensity, even between adjacent layers of a single DNN. Consequently, accelerators with static hardware parameters run sub-optimally and leave energy-efficiency margins unclaimed. We propose DyHard-DNNs, in which accelerator microarchitectural parameters are dynamically reconfigured during DNN execution to significantly improve metrics of interest. We demonstrate the effectiveness of this approach on a configurable SIMD 2D systolic array, showing a 15-65% performance improvement (at iso-power) and a 25-90% energy improvement (at iso-latency) over the best static configuration across six mainstream DNN workloads.
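
The abstract only summarizes the mechanism; as a rough illustration of why per-layer reconfiguration pays off, the sketch below uses a simplified, hypothetical utilization model to pick a systolic-array shape for each layer. The candidate shapes, layer dimensions, and cost model are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass
from math import ceil

@dataclass
class Layer:
    name: str
    rows: int  # e.g., output feature-map pixels mapped onto array rows
    cols: int  # e.g., output channels mapped onto array columns

# Hypothetical shapes a fixed pool of 1024 PEs could be reconfigured into.
CANDIDATE_SHAPES = [(8, 128), (16, 64), (32, 32), (64, 16), (128, 8)]

def utilization(layer, shape):
    """First-order model: fraction of PE-cycles doing useful work when
    the layer is tiled onto an array of the given shape."""
    array_rows, array_cols = shape
    tiles = ceil(layer.rows / array_rows) * ceil(layer.cols / array_cols)
    return (layer.rows * layer.cols) / (tiles * array_rows * array_cols)

def best_shape(layer):
    """Pick the candidate shape that maximizes utilization for this layer."""
    return max(CANDIDATE_SHAPES, key=lambda s: utilization(layer, s))

if __name__ == "__main__":
    # Adjacent layers with very different dimensions prefer different shapes;
    # a statically shaped array must compromise, a dynamic one need not.
    for layer in (Layer("conv", 112, 64), Layer("fc", 1, 1000)):
        shape = best_shape(layer)
        print(f"{layer.name}: {shape} -> {utilization(layer, shape):.1%} utilization")
```

Even in this toy model, a convolutional layer and a fully connected layer favor very different array shapes, which is the per-layer heterogeneity that dynamic reconfiguration exploits and a single static configuration cannot.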

Date

24 Jun 2018
