Publication
IJCNN 2016
Conference paper
LATTE: Low-power Audio Transform with TrueNorth Ecosystem
Abstract
With recent advances in silicon technology, previously intractable Deep Neural Network (DNN) solutions to complex visual, auditory, and other sensory perception problems are now practical for real-time, energy-constrained systems. One such advance is IBM's TrueNorth neurosynaptic processor, which contains 1 million neurons and 256 million synapses, consumes 65 mW of power, and is capable of operating in real time for a variety of applications. In this work, we explore how auditory features can be extracted on the TrueNorth processor using low numerical precision while maintaining algorithmic fidelity for DNN-based spoken digit recognition on isolated words from the TIDIGITS dataset. Further, we show that our Low-power Audio Transform with TrueNorth Ecosystem (LATTE) achieves a 24× reduction in energy for feature extraction over a baseline FPGA implementation using standard MFCC audio features, while incurring only a 3–6% accuracy penalty.
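For reference, the MFCC pipeline used as the FPGA baseline follows a standard recipe: frame the signal, window it, take the power spectrum, apply a triangular mel filterbank, take logs, and decorrelate with a DCT. The sketch below implements that textbook recipe in NumPy/SciPy; the parameter values (16 kHz sample rate, 25 ms frames, 10 ms hop, 26 filters, 13 cepstra) are common defaults, not values taken from the paper.

```python
import numpy as np
from scipy.fft import dct

def hz_to_mel(f):
    # Standard mel-scale conversion.
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters with center frequencies evenly spaced on the mel scale.
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        lo, center, hi = bins[i - 1], bins[i], bins[i + 1]
        for k in range(lo, center):
            fb[i - 1, k] = (k - lo) / max(center - lo, 1)
        for k in range(center, hi):
            fb[i - 1, k] = (hi - k) / max(hi - center, 1)
    return fb

def mfcc(signal, sr=16000, frame_len=400, hop=160,
         n_fft=512, n_filters=26, n_ceps=13):
    # Frame the signal, window, power spectrum, mel filterbank, log, DCT.
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hamming(frame_len)
    fb = mel_filterbank(n_filters, n_fft, sr)
    feats = []
    for t in range(n_frames):
        frame = signal[t * hop : t * hop + frame_len] * window
        power = np.abs(np.fft.rfft(frame, n_fft)) ** 2 / n_fft
        mel_energies = np.maximum(fb @ power, 1e-10)  # floor to avoid log(0)
        feats.append(dct(np.log(mel_energies), norm='ortho')[:n_ceps])
    return np.array(feats)  # shape: (n_frames, n_ceps)
```

LATTE's contribution is to approximate this transform in the low-precision, spike-based arithmetic of TrueNorth; the floating-point version above corresponds to the conventional baseline it is compared against.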