Publication
DSD 2022
Conference paper
CaW-NAS: Compression Aware Neural Architecture Search
Abstract
With the ever-growing demand for deep learning (DL) at the edge, building small and efficient DL architectures has become a significant challenge. Optimization techniques such as quantization, pruning, and hardware-aware neural architecture search (HW-NAS) have been proposed. In this paper, we present an efficient HW-NAS, Compression-Aware Neural Architecture Search (CaW-NAS), that combines the search for the architecture with the search for its quantization policy. While prior works search over a fully quantized search space, we define our search space to contain both quantized and non-quantized architectures. Our search strategy finds the best trade-off between accuracy and latency for the target hardware. Experimental results on a mobile platform show that our method obtains networks that are more efficient in terms of accuracy, execution time, and energy consumption than the state of the art.
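To make the idea concrete, below is a minimal, hypothetical sketch of a compression-aware search loop in the spirit the abstract describes: each candidate pairs architecture knobs with a quantization flag, so quantized and non-quantized networks compete in the same search space under a scalarized accuracy/latency objective. The knobs (depth, width), the proxy evaluator, the latency budget, and the random-search strategy are all illustrative assumptions, not the paper's actual method; in CaW-NAS the accuracy and latency would come from real training and on-device measurement.

```python
import random
from dataclasses import dataclass


@dataclass(frozen=True)
class Candidate:
    """One point in the search space: an architecture plus a quantization choice."""
    depth: int       # number of blocks (illustrative architecture knob)
    width: int       # channels per block (illustrative architecture knob)
    quantized: bool  # whether this candidate uses the quantized variant


def evaluate(cand: Candidate) -> tuple[float, float]:
    """Placeholder proxy for (accuracy, latency_ms).

    Purely synthetic numbers so the sketch runs; a real HW-NAS would train or
    estimate accuracy and measure latency on the target hardware instead.
    """
    accuracy = 0.70 + 0.02 * cand.depth + 0.001 * cand.width
    latency = 1.5 * cand.depth + 0.05 * cand.width
    if cand.quantized:  # assumed trade-off: small accuracy drop, large speedup
        accuracy -= 0.01
        latency *= 0.5
    return accuracy, latency


def score(cand: Candidate, latency_budget_ms: float = 30.0) -> float:
    """Scalarized trade-off: reward accuracy, penalize exceeding the budget."""
    acc, lat = evaluate(cand)
    penalty = max(0.0, lat - latency_budget_ms)
    return acc - 0.01 * penalty


def random_search(n_trials: int = 200, seed: int = 0) -> Candidate:
    """Joint search over architecture and quantization policy."""
    rng = random.Random(seed)
    space = [
        Candidate(depth=d, width=w, quantized=q)
        for d in range(4, 13)
        for w in (16, 32, 64, 128)
        for q in (False, True)  # mixed space: quantized AND non-quantized
    ]
    trials = rng.sample(space, min(n_trials, len(space)))
    return max(trials, key=score)


if __name__ == "__main__":
    best = random_search()
    acc, lat = evaluate(best)
    print(f"best: {best}  accuracy~{acc:.3f}  latency~{lat:.1f} ms")
```

Because the quantization decision is just another dimension of each candidate, the search itself decides per network whether compression pays off under the latency budget, which is the distinction the abstract draws against fully quantized search spaces.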