Publication
ITA 2007
Conference paper
Variational sampling approaches to word confusability
Abstract
In speech recognition it is often useful to determine how confusable two words are. For speech models this comes down to computing the Bayes error between two HMMs, a problem that is analytically and numerically intractable. A common alternative that is numerically approachable uses the KL divergence in place of the Bayes error. We present new approaches to approximating the KL divergence that combine variational methods with importance sampling. The Bhattacharyya distance, a closer cousin of the Bayes error, turns out to be even more amenable to our approach. Our experiments demonstrate an improvement of orders of magnitude in accuracy over conventional methods.
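To make the setting concrete, the sketch below estimates KL(f || g) = E_f[log f(x) - log g(x)] between two one-dimensional Gaussian mixtures by importance sampling from a single broad Gaussian proposal. This is only an illustration of the importance-sampling ingredient, not the paper's method: the mixtures, the proposal, and all parameter values are hypothetical stand-ins for the emission densities of two word HMMs, and the variational component of the paper's approach is omitted.

```python
import numpy as np
from scipy.stats import norm

def gmm_logpdf(x, weights, means, stds):
    """Log-density of a 1-D Gaussian mixture evaluated at points x."""
    comp = [np.log(w) + norm.logpdf(x, loc=m, scale=s)
            for w, m, s in zip(weights, means, stds)]
    return np.logaddexp.reduce(comp)  # log-sum-exp over components

def kl_importance_sampling(f, g, proposal_mean, proposal_std,
                           n_samples=200_000, seed=0):
    """Estimate KL(f || g) = E_f[log f(x) - log g(x)] by sampling
    from a Gaussian proposal q and reweighting by f(x)/q(x)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(proposal_mean, proposal_std, size=n_samples)
    log_q = norm.logpdf(x, loc=proposal_mean, scale=proposal_std)
    log_f = gmm_logpdf(x, *f)
    log_g = gmm_logpdf(x, *g)
    w = np.exp(log_f - log_q)          # importance weights f(x)/q(x)
    return np.mean(w * (log_f - log_g))

# Toy mixtures standing in for the acoustic models of two words
# (weights, means, stds) -- illustrative values only.
f = ([0.6, 0.4], [-1.0, 1.5], [0.5, 0.8])
g = ([0.5, 0.5], [-0.5, 2.0], [0.7, 0.6])

print(kl_importance_sampling(f, g, proposal_mean=0.5, proposal_std=3.0))
```

Since mixture and HMM densities admit no closed-form KL divergence, sampling-based estimators of this kind are the natural baseline; the paper's contribution is to sharpen such estimates by combining them with variational bounds, and to apply the same machinery to the Bhattacharyya distance.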