Publication
DAC 2018
Conference paper
DyHard-DNN: Even more DNN acceleration with dynamic hardware reconfiguration
Abstract
Deep Neural Networks (DNNs) have demonstrated their utility across a wide range of input data types and are deployed on diverse computing substrates, from edge devices to datacenters. This broad utility has resulted in myriad hardware accelerator architectures. However, DNNs exhibit significant heterogeneity in their computational characteristics, e.g., feature and kernel dimensions, and dramatic variance in computational intensity, even between adjacent layers of a single DNN. Consequently, accelerators with static hardware parameters run suboptimally and leave energy-efficiency margins unclaimed. We propose DyHard-DNNs, where accelerator microarchitectural parameters are dynamically reconfigured during DNN execution to significantly improve metrics of interest. We demonstrate the effectiveness of this approach on a configurable SIMD 2D systolic array and show a 15-65% performance improvement (at iso-power) and a 25-90% energy improvement (at iso-latency) over the best static configuration across six mainstream DNN workloads.
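To make the core idea concrete, the minimal Python sketch below illustrates per-layer reconfiguration of a fixed-budget systolic array. It is not the paper's mechanism or cost model: the class names, the candidate configuration set, and the tile-count latency heuristic are all hypothetical, standing in for whichever microarchitectural parameters and objective an actual implementation would use. The point it demonstrates is the one the abstract makes: adjacent layers with different channel/pixel dimensions prefer different array shapes, so a single static configuration leaves efficiency on the table.

```python
# Illustrative sketch only: all names and the tile-count cost heuristic
# below are hypothetical, not taken from the DyHard-DNN paper.
from dataclasses import dataclass

@dataclass(frozen=True)
class ArrayConfig:
    rows: int  # processing-element (PE) rows in the systolic array
    cols: int  # PE columns

@dataclass(frozen=True)
class ConvLayer:
    out_channels: int  # kernel count, mapped onto array rows
    out_pixels: int    # output feature-map height * width, mapped onto columns
    macs: int          # total multiply-accumulates in the layer

# Candidate shapes sharing one fixed PE budget (64 PEs here), standing in
# for the reconfigurable microarchitectural parameters.
CONFIGS = [ArrayConfig(4, 16), ArrayConfig(8, 8), ArrayConfig(16, 4)]

def estimated_cycles(layer: ConvLayer, cfg: ArrayConfig) -> int:
    """Crude latency model: each output tile occupies the array for
    `depth` cycles, and mismatched dimensions force padded tiles."""
    row_tiles = -(-layer.out_channels // cfg.rows)  # ceiling division
    col_tiles = -(-layer.out_pixels // cfg.cols)
    depth = layer.macs // (layer.out_channels * layer.out_pixels)
    return row_tiles * col_tiles * depth

def pick_config(layer: ConvLayer) -> ArrayConfig:
    """Per-layer reconfiguration: choose the array shape with the
    lowest estimated latency for this layer's dimensions."""
    return min(CONFIGS, key=lambda cfg: estimated_cycles(layer, cfg))

if __name__ == "__main__":
    # An early layer (few channels, many pixels) and a late layer
    # (many channels, few pixels) prefer different array shapes.
    early = ConvLayer(out_channels=12, out_pixels=3136, macs=12 * 3136 * 9)
    late = ConvLayer(out_channels=512, out_pixels=49, macs=512 * 49 * 9)
    print(pick_config(early))  # ArrayConfig(rows=4, cols=16): wide in pixels
    print(pick_config(late))   # ArrayConfig(rows=16, cols=4): tall in channels
```

Under this toy model, running the whole network at either static shape pads away PE slots on the mismatched layers, which is the energy-efficiency margin that switching configurations between layers reclaims.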