Publication
NeurIPS 2000
Conference paper
Minimum Bayes error feature selection for continuous speech recognition
Abstract
We consider the problem of designing a linear transformation θ ∈ ℝ^{p×n}, of rank p ≤ n, which projects the features of a classifier x ∈ ℝ^n onto y = θx ∈ ℝ^p so as to achieve minimum Bayes error (i.e., minimum probability of misclassification). Two avenues are explored: the first is to maximize the θ-average divergence between the class densities, and the second is to minimize the union Bhattacharyya bound in the range of θ. While both approaches yield similar performance in practice, they outperform standard LDA features and show a 10% relative improvement in word error rate over state-of-the-art cepstral features on a large-vocabulary telephony speech recognition task.
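As a concrete illustration of the second avenue, the sketch below numerically minimizes the union Bhattacharyya bound on the Bayes error, ∑_{i<j} √(p_i p_j) e^{−B_ij(θ)}, over projections θ, where B_ij is the Bhattacharyya distance between class densities i and j in the range of θ; for two classes this specializes to the classical bound ε ≤ √(p₁p₂) e^{−B}. This is only a minimal sketch under simplifying assumptions: single Gaussian class-conditional densities (the paper works with the Gaussian mixture densities of an HMM recognizer and derives explicit gradients), and a generic L-BFGS optimizer in place of the paper's gradient procedure. All function names and the toy data are illustrative, not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize

def bhattacharyya(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance B between the Gaussians N(mu1, cov1) and N(mu2, cov2)."""
    avg = 0.5 * (cov1 + cov2)
    diff = mu2 - mu1
    # (1/8) (mu2 - mu1)^T avg^{-1} (mu2 - mu1)
    mean_term = 0.125 * diff @ np.linalg.solve(avg, diff)
    # (1/2) ln( det(avg) / sqrt(det(cov1) det(cov2)) ), computed via log-determinants
    cov_term = 0.5 * (np.linalg.slogdet(avg)[1]
                      - 0.5 * (np.linalg.slogdet(cov1)[1] + np.linalg.slogdet(cov2)[1]))
    return mean_term + cov_term

def union_bhatt_bound(theta_flat, means, covs, priors, p, n):
    """Union Bhattacharyya bound on the Bayes error in the range of theta.

    Projecting x -> theta @ x maps class i to a Gaussian with mean
    theta @ means[i] and covariance theta @ covs[i] @ theta.T.
    """
    theta = theta_flat.reshape(p, n)
    bound = 0.0
    for i in range(len(means)):
        for j in range(i + 1, len(means)):
            b = bhattacharyya(theta @ means[i], theta @ covs[i] @ theta.T,
                              theta @ means[j], theta @ covs[j] @ theta.T)
            bound += np.sqrt(priors[i] * priors[j]) * np.exp(-b)
    return bound

# Toy example: 3 Gaussian classes in n = 4 dimensions, projected to p = 2.
rng = np.random.default_rng(0)
n, p, k = 4, 2, 3
means = [rng.normal(size=n) for _ in range(k)]
covs = []
for _ in range(k):
    a = rng.normal(size=(n, n))
    covs.append(a @ a.T + np.eye(n))  # symmetric positive definite
priors = np.full(k, 1.0 / k)

# The bound is invariant to invertible transforms of the projected space,
# so only the row space of theta matters; rank p is not explicitly enforced here.
theta0 = rng.normal(size=(p, n)).ravel()
res = minimize(union_bhatt_bound, theta0,
               args=(means, covs, priors, p, n), method="L-BFGS-B")
theta = res.x.reshape(p, n)
print("bound at init:", union_bhatt_bound(theta0, means, covs, priors, p, n))
print("bound after optimization:", res.fun)
```

A smaller bound after optimization indicates a subspace with better worst-case class separability; comparing the resulting θ against an LDA projection on the same toy data is a natural sanity check, mirroring the LDA baseline comparison reported in the abstract.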