Publication
ICASSP 2003
Conference paper
Covariance and precision modeling in shared multiple subspaces
Abstract
In this paper we introduce a class of Gaussian mixture models for HMM states in continuous speech recognition. In these models the covariances or the precisions (inverse covariances) are restricted to lie in subspaces spanned by rank-one symmetric matrices. In both cases the rank-one matrices are shared across classes of Gaussians. We show that, for the same number of parameters, modeling precisions leads to better performance than modeling covariances. Modeling precisions also gives a distinct advantage in computational and memory requirements. We further show that this class of models improves accuracy (for the same number of parameters) over classical factor-analyzed models and the recently proposed EMLLT models, both of which are special instances of this class.
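The abstract's core idea can be sketched numerically: a precision matrix constrained to a subspace spanned by shared rank-one symmetric matrices, P = Σ_k λ_k a_k a_kᵀ, where the directions a_k are shared across Gaussians and only the weights λ_k are class-specific. The sketch below is illustrative only (dimensions, weights, and variable names are assumptions, not taken from the paper); it also demonstrates the computational advantage the abstract mentions, since the quadratic form xᵀPx reduces to K dot products with the shared directions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K = 4, 8  # feature dimension and number of shared directions (illustrative sizes)

# Shared basis directions a_k; the rank-one matrices a_k a_k^T span the subspace.
A = rng.standard_normal((K, d))

# Hypothetical per-Gaussian weights; positive weights with directions spanning R^d
# give a symmetric positive-definite precision matrix.
lam = rng.uniform(0.5, 2.0, size=K)

# Precision restricted to the shared subspace: P = sum_k lam_k * a_k a_k^T
P = sum(l * np.outer(a, a) for l, a in zip(lam, A))

# Evaluating the Gaussian quadratic form x^T P x needs only K dot products
# with the shared directions -- no per-Gaussian matrix inversion or storage
# of a full d x d covariance, which is the memory/compute advantage of
# modeling precisions rather than covariances.
x = rng.standard_normal(d)
quad_direct = x @ P @ x
quad_fast = np.sum(lam * (A @ x) ** 2)

assert np.allclose(quad_direct, quad_fast)
assert np.allclose(P, P.T)  # symmetric by construction
```

With the directions shared across classes, each Gaussian stores only its K weights instead of d(d+1)/2 covariance entries, and the per-frame likelihood cost is dominated by the shared products A @ x computed once per frame.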