Publication
ICASSP 2008
Conference paper
Optimizing speech recognition grammars using a measure of similarity between hidden Markov models
Abstract
In this paper we discuss a method of optimizing weights in a stochastic finite state grammar using a measure of similarity between hidden Markov models. We compute the similarity using an edit distance and weights that are derived from the Bhattacharyya error between pairs of Gaussian mixture models. Forward-backward procedures are used to carry out the similarity computation, and to obtain the derivatives needed in gradient descent based optimization. We apply this procedure to the problem of estimating parameters of garbage models that are often included in SRGS grammars. Experimental results indicate that the method improves the garbage models and naturally results in models that are a function of their context in the grammar. ©2008 IEEE.
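As a rough illustration of the similarity measure described in the abstract (not the paper's actual algorithm, which uses Gaussian mixture model output distributions and forward-backward procedures), the following sketch computes an edit distance between two left-to-right HMM state sequences, where each state is simplified to a single 1-D Gaussian and the substitution cost is derived from the Bhattacharyya bound on the Bayes error between the two states' output distributions. All function names and the insertion/deletion costs are illustrative assumptions.

```python
import math

def bhattacharyya_distance(m1, v1, m2, v2):
    """Bhattacharyya distance between 1-D Gaussians N(m1, v1) and N(m2, v2)."""
    vavg = 0.5 * (v1 + v2)
    return (0.125 * (m1 - m2) ** 2 / vavg
            + 0.5 * math.log(vavg / math.sqrt(v1 * v2)))

def bhattacharyya_error(m1, v1, m2, v2):
    """Bhattacharyya bound on the Bayes error between two equal-prior
    Gaussian classes: 0.5 * exp(-D_B). Ranges over (0, 0.5]."""
    return 0.5 * math.exp(-bhattacharyya_distance(m1, v1, m2, v2))

def hmm_edit_similarity(states_a, states_b, ins_cost=1.0, del_cost=1.0):
    """Edit distance between two state sequences (lists of (mean, var)),
    with substitution cost derived from the Bhattacharyya error:
    identical output distributions cost 0; well-separated ones approach 1.
    Illustrative simplification of the similarity used in the paper."""
    n, m = len(states_a), len(states_b)
    # D[i][j] = cost of aligning the first i states of A with the first j of B
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = D[i - 1][0] + del_cost
    for j in range(1, m + 1):
        D[0][j] = D[0][j - 1] + ins_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            ma, va = states_a[i - 1]
            mb, vb = states_b[j - 1]
            # Error of 0.5 (identical Gaussians) maps to cost 0.
            sub = 1.0 - 2.0 * bhattacharyya_error(ma, va, mb, vb)
            D[i][j] = min(D[i - 1][j] + del_cost,       # delete from A
                          D[i][j - 1] + ins_cost,       # insert from B
                          D[i - 1][j - 1] + sub)        # substitute
    return D[n][m]
```

In the paper, this kind of alignment cost is computed with forward-backward recursions rather than a plain dynamic-programming table, which also yields the derivatives needed for the gradient-descent update of the grammar weights.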