Publication
ICASSP 2003
Conference paper

Covariance and precision modeling in shared multiple subspaces

Abstract

In this paper we introduce a class of Gaussian mixture models for HMM states in continuous speech recognition. In these models the covariances or the precisions (inverse covariances) are restricted to lie in subspaces spanned by rank-one symmetric matrices. In both cases the rank-one matrices are shared across classes of Gaussians. We show that, for the same number of parameters, modeling precisions leads to better performance than modeling covariances. Modeling precisions moreover gives a distinct advantage in computational and memory requirements. We also show that this class of models improves accuracy (for the same number of parameters) over classical factor-analyzed models and the recently proposed EMLLT models, both of which are special instances of this class of models.
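The precision-subspace structure described above can be illustrated with a small numerical sketch, not taken from the paper: each Gaussian's precision is a weighted sum of shared rank-one matrices a_k a_k^T, with only the weights being Gaussian-specific. The dimensions, variable names, and random values below are illustrative assumptions.

```python
import numpy as np

# Sketch of a subspace-constrained precision matrix (assumed setup, not the paper's code):
#   P = sum_k lam_k * a_k a_k^T
# where the directions a_k are shared across Gaussians and lam_k are per-Gaussian weights.

d, K = 4, 6                                  # feature dimension and subspace size (illustrative)
rng = np.random.default_rng(0)

A = rng.standard_normal((K, d))              # shared rank-one directions a_1..a_K (rows)
lam = np.abs(rng.standard_normal(K)) + 0.1   # per-Gaussian coefficients (kept positive here)

# Explicit precision matrix for one Gaussian.
P = sum(l * np.outer(a, a) for l, a in zip(lam, A))

# Log-likelihood with the precision parameterization: the quadratic form
#   (x - mu)^T P (x - mu) = sum_k lam_k (a_k^T x - a_k^T mu)^2
# uses the shared projections A @ x (computed once per frame) and the precomputable
# A @ mu, so the per-Gaussian cost is O(K) rather than O(d^2).
x = rng.standard_normal(d)                   # one feature vector
mu = rng.standard_normal(d)                  # this Gaussian's mean
proj = A @ x - A @ mu                        # A @ x is shared; A @ mu can be cached
quad = np.dot(lam, proj ** 2)

sign, logdet = np.linalg.slogdet(P)
loglik = 0.5 * (logdet - d * np.log(2 * np.pi) - quad)
print(loglik)
```

In this sketch, covariance modeling would constrain the covariance itself to such a subspace instead, which loses the cheap per-Gaussian quadratic form above; that difference is one way to read the abstract's point about the computational and memory advantage of modeling precisions.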
