Publication
INTERSPEECH 2013
Conference paper
Neural network acoustic models for the DARPA RATS program
Abstract
We present a comparison of acoustic modeling techniques for the DARPA RATS program in the context of spoken term detection (STD) on speech data with severe channel distortions. Our main finding is that both Multi-Layer Perceptrons (MLPs) and Convolutional Neural Networks (CNNs) outperform Gaussian Mixture Models (GMMs) on a very difficult LVCSR task. We discuss pre-training, feature sets, and training procedures, as well as weight sharing and shift invariance to increase robustness against channel distortions. We obtain roughly a 20% error-rate reduction over our state-of-the-art GMM system. Additionally, we find that CNNs work very well for spoken term detection, owing to better lattice oracle rates compared to GMMs and MLPs. Copyright © 2013 ISCA.
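The abstract credits weight sharing and shift invariance with improving robustness against channel distortions. As a hypothetical illustration of that idea (not the paper's actual architecture or parameters), the NumPy sketch below convolves a small set of shared kernels over the frequency axis of a frames-by-mel-bands feature matrix and then max-pools over frequency, so a spectral pattern shifted by a few bands produces a similar response after pooling:

```python
import numpy as np

def conv1d_freq(feats, kernels, pool=3):
    """Toy convolutional layer over the frequency axis of a
    (frames x mel-bands) feature matrix, followed by max pooling.

    Weight sharing: the same kernel slides across all frequency
    positions, so the number of free parameters is independent of
    the number of bands.  Max pooling over frequency gives
    approximate shift invariance: a pattern moved by a few bands
    still activates the same pooled unit.
    """
    T, F = feats.shape
    K, W = kernels.shape          # K shared kernels of width W
    conv = np.empty((T, K, F - W + 1))
    for k in range(K):
        for f in range(F - W + 1):
            # inner product of kernel k with the band window [f, f+W)
            conv[:, k, f] = feats[:, f:f + W] @ kernels[k]
    # max-pool over frequency in non-overlapping windows of size `pool`
    n = conv.shape[2] // pool
    pooled = conv[:, :, :n * pool].reshape(T, K, n, pool).max(axis=3)
    return np.maximum(pooled, 0.0)  # simple rectifying nonlinearity

rng = np.random.default_rng(0)
feats = rng.standard_normal((5, 40))    # 5 frames, 40 mel bands (toy sizes)
kernels = rng.standard_normal((8, 9))   # 8 kernels of width 9
out = conv1d_freq(feats, kernels)
print(out.shape)  # (5, 8, 10): 40 - 9 + 1 = 32 positions -> 10 pools of 3
```

All sizes here (40 bands, 8 kernels, width 9, pooling 3) are made up for the demonstration; the point is only the structure of shared weights plus pooling that the abstract names as the source of robustness.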