Effectiveness of backoff hierarchical class n-gram language models in modeling unseen events in speech recognition
Abstract
Backoff hierarchical class n-gram language models use a class hierarchy to define an appropriate context. Each node in the hierarchy is a class containing all the words of its descendant nodes (classes). The closer a node is to the root, the more general the corresponding class, and consequently the context, is. In this paper we experimentally demonstrate the effectiveness of the backoff hierarchical class n-gram language modeling approach in modeling unseen events in speech recognition: it achieves larger improvements than regular backoff n-gram models. We also study the performance of this approach on vocabularies of different sizes, and we investigate the impact of the hierarchy depth on the performance of the model. Performance is reported on several databases, namely Switchboard, CallHome, and Wall Street Journal (WSJ). Experiments on the Switchboard and CallHome databases, whose test sets contain a small number of unseen events, show up to 6% improvement in unseen-event perplexity with a vocabulary of 16,800 words. With a relatively large number of unseen events in the WSJ test corpus, using two vocabulary sets of 5,000 and 20,000 words, we obtain up to 26% improvement in unseen-event perplexity and up to 12% improvement in WER when a backoff hierarchical class trigram language model is used on an ASR test set. Results confirm that the improvement grows as the number of unseen events increases.
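The backoff step described above can be illustrated with a minimal sketch. All names, the toy hierarchy, and the toy counts below are hypothetical, not from the paper: when a trigram context is unseen, the most distant context token is replaced by its parent class and the lookup is retried, climbing toward the root (the most general class, which contains every word).

```python
# Hypothetical toy word-class hierarchy: each token maps to its parent
# class; the root class "ROOT" has no parent and covers all words.
PARENT = {
    "meet": "VERB", "call": "VERB",
    "monday": "DAY", "DAY": "TIME",
    "VERB": "ROOT", "TIME": "ROOT",
}

# Hypothetical training counts of ((context), word) events; counts may
# exist at the class level as well as the word level.
COUNTS = {
    (("meet", "monday"), "morning"): 3,
    (("VERB", "monday"), "morning"): 5,
}

def generalize(token):
    """Return the parent class of a token, or None at the root."""
    return PARENT.get(token)

def backoff_count(ctx, word):
    """Look up a count for (ctx, word); on a miss, generalize the most
    distant context token through the hierarchy and retry.
    Returns (count, context actually used)."""
    while True:
        if (ctx, word) in COUNTS:
            return COUNTS[(ctx, word)], ctx
        parent = generalize(ctx[0])
        if parent is None:          # reached the root: event truly unseen
            return 0, ctx
        ctx = (parent,) + ctx[1:]   # back off to a more general class

# Seen trigram: found directly at the word level.
print(backoff_count(("meet", "monday"), "morning"))   # count 3
# Unseen trigram: "call" is generalized to its class "VERB" and found.
print(backoff_count(("call", "monday"), "morning"))   # count 5
```

A full model would turn these counts into discounted probabilities and interpolate across the hierarchy levels; this sketch only shows the class-climbing backoff path that distinguishes the hierarchical model from a regular backoff n-gram.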