Recent progress in deep end-to-end models for spoken language processing

Abstract

End-to-end models (or sequence-to-sequence models) based on deep neural networks have recently become popular within the machine learning community. These techniques are also increasingly used in automatic speech recognition as an alternative to the state-of-the-art hybrid HMM-DNN (hidden Markov model, deep neural network) systems. End-to-end systems use a purely neural architecture that eliminates the need for any time alignment between the input acoustic feature vector sequence and the output phone sequence. In this paper, we present progress within the IBM Watson Multimodal Group on end-to-end models for spoken language processing. We present our work on two types of end-to-end models applied to speech-to-text and keyword search tasks, namely, 1) recurrent neural networks (RNNs) trained with the connectionist temporal classification (CTC) loss, and 2) attention-based encoder-decoder RNNs. We present results on several languages (Pashto, Mongolian, Javanese, Amharic, Guarani, Dholuo, Igbo, and Georgian) from the Babel Program funded by the Intelligence Advanced Research Projects Activity (IARPA). We also present a detailed analysis of some salient characteristics of these models compared with state-of-the-art HMM-DNN hybrid systems, and discuss future challenges in using such models for spoken language processing.
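To make the alignment-free property of the CTC loss mentioned in the abstract concrete, the following is a minimal pure-Python sketch (not the paper's implementation) of the CTC forward algorithm: given per-frame probability distributions over an alphabet that includes a blank symbol, it sums the probabilities of all blank-augmented frame-level alignments that collapse to a given target label sequence. The function name and the toy inputs are illustrative assumptions.

```python
def ctc_forward(probs, target, blank=0):
    """Total probability of `target` under per-frame distributions `probs`.

    probs[t][k] is the probability of symbol k at frame t; `blank` is the
    index of the CTC blank. Sums over every frame-level alignment whose
    collapsed form (merge repeats, drop blanks) equals `target`.
    """
    # Extended label sequence: a blank before, between, and after each label.
    ext = [blank]
    for label in target:
        ext += [label, blank]
    S, T = len(ext), len(probs)

    # alpha[s] = total probability of all alignment prefixes that end at
    # extended position s after the current frame.
    alpha = [0.0] * S
    alpha[0] = probs[0][ext[0]]           # start on leading blank
    if S > 1:
        alpha[1] = probs[0][ext[1]]       # or directly on the first label

    for t in range(1, T):
        new = [0.0] * S
        for s in range(S):
            a = alpha[s]                  # stay at the same position
            if s > 0:
                a += alpha[s - 1]         # advance by one position
            # Skip the intervening blank, allowed only between distinct labels.
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[s - 2]
            new[s] = a * probs[t][ext[s]]
        alpha = new

    # Valid alignments end on the final label or the trailing blank.
    return alpha[S - 1] + (alpha[S - 2] if S > 1 else 0.0)
```

For example, with two frames, an alphabet {0: blank, 1: 'a'}, uniform frame distributions, and target `[1]`, the three alignments "a-", "-a", and "aa" each have probability 0.25, so `ctc_forward([[0.5, 0.5], [0.5, 0.5]], [1])` returns 0.75. Training minimizes the negative log of this quantity, which is why no frame-to-phone alignment is needed.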

Date

01 Jul 2017

Publication

IBM J. Res. Dev
