LATTE: Low-power Audio Transform with TrueNorth Ecosystem
Abstract
With recent advances in silicon technology, previously intractable Deep Neural Network (DNN) solutions to complex visual, auditory, and other sensory perception problems are now practical for real-time, energy-constrained systems. One such advancement is IBM's TrueNorth neurosynaptic processor, which contains 1 million neurons and 256 million synapses, consumes 65 mW of power, and is capable of operating in real time for a variety of applications. In this work, we explore how auditory features can be extracted on the TrueNorth processor using low numerical precision while maintaining algorithmic fidelity for DNN-based spoken digit recognition on isolated words from the TIDIGITS dataset. Further, we show that our Low-power Audio Transform with TrueNorth Ecosystem (LATTE) achieves a 24× reduction in energy for feature extraction over a baseline FPGA implementation using standard MFCC audio features, while incurring only a 3-6% accuracy penalty.
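The abstract does not detail LATTE's feature pipeline, but the baseline it compares against is standard MFCC extraction followed by a reduction to low numerical precision. As a rough, hedged illustration of that baseline (not the paper's actual implementation), the sketch below computes MFCCs for a single audio frame with NumPy and then applies uniform quantization to a few bits; all function names and parameter choices here are illustrative assumptions.

```python
import numpy as np

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular mel-spaced filterbank (illustrative, not LATTE's)."""
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        lo, ctr, hi = bins[i - 1], bins[i], bins[i + 1]
        for k in range(lo, ctr):          # rising edge of triangle
            fb[i - 1, k] = (k - lo) / max(ctr - lo, 1)
        for k in range(ctr, hi):          # falling edge of triangle
            fb[i - 1, k] = (hi - k) / max(hi - ctr, 1)
    return fb

def mfcc_frame(frame, sr, n_filters=26, n_ceps=13):
    """MFCCs for one windowed frame: power spectrum -> mel -> log -> DCT-II."""
    n_fft = len(frame)
    spec = np.abs(np.fft.rfft(frame * np.hamming(n_fft))) ** 2
    energies = mel_filterbank(n_filters, n_fft, sr) @ spec
    log_e = np.log(energies + 1e-10)
    n = np.arange(n_filters)
    return np.array([
        np.sum(log_e * np.cos(np.pi * k * (2 * n + 1) / (2 * n_filters)))
        for k in range(n_ceps)
    ])

def quantize(x, bits=4):
    """Uniform quantization over the observed range, mimicking the kind of
    low-precision representation a neurosynaptic substrate imposes."""
    lo, hi = x.min(), x.max()
    levels = 2 ** bits - 1
    q = np.round((x - lo) / (hi - lo + 1e-12) * levels)
    return q / levels * (hi - lo) + lo
```

A TrueNorth implementation would of course realize the transform with spiking neurons and integer synaptic weights rather than floating-point arithmetic; this sketch only shows the numerical effect of restricting MFCC features to a small number of quantization levels.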