An evaluation of a nonlinear feature transformation for conversational speech recognition
Abstract
We test the nonlinear symplectic maximum-likelihood transformation (SMLT) on two large-vocabulary, conversational speech recognition tasks: IBM's Superhuman test and the DARPA 2003 Rich Transcription (RT03) test. Features in these tests are computed via linear discriminant analysis (LDA) on spliced MFCC features and subsequent transformation of the projected features using either a maximum-likelihood linear transformation (MLLT), an SMLT, or both. In contrast to previous tests of the SMLT on TIMIT phone recognition with static and delta MFCCs, these tests use a more difficult task and very different features. This work yields four results: (1) both the LDA+MLLT and LDA+SMLT systems outperform an LDA-only system; (2) the LDA+MLLT system outperforms the LDA+SMLT system, although the MLLT has 20 times more parameters than the SMLT; (3) small improvements over an LDA+MLLT system are obtained with an LDA+MLLT+SMLT system on well-matched material; and (4) no improvements are obtained using two class-dependent SMLTs in an LDA+MLLT+SMLT system.
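The front-end pipeline described above (splicing MFCC frames, projecting with LDA, then applying a square feature-space transform such as MLLT) can be sketched as follows. This is a minimal illustration only: the dimensions, the random LDA matrix, and the random MLLT matrix are all hypothetical stand-ins for quantities that a real system estimates from training data, and the nonlinear SMLT step is not shown.

```python
import numpy as np

# Hypothetical dimensions, chosen only for illustration.
n_frames = 10
mfcc_dim = 13          # static MFCCs per frame
context = 9            # number of spliced frames (e.g. center frame +/- 4)
lda_dim = 40           # dimensionality after the LDA projection

rng = np.random.default_rng(0)

# Fake MFCC stream; in a real system these come from the acoustic front end.
mfcc = rng.standard_normal((n_frames + context - 1, mfcc_dim))

# Splice: concatenate each frame with its neighbors into one long vector.
spliced = np.stack(
    [mfcc[i:i + context].ravel() for i in range(n_frames)]
)  # shape: (n_frames, context * mfcc_dim)

# Stand-in LDA projection matrix; a real one is estimated from class-labeled data.
lda = rng.standard_normal((context * mfcc_dim, lda_dim))
projected = spliced @ lda  # shape: (n_frames, lda_dim)

# MLLT is a square linear transform of the projected features; here an
# arbitrary near-identity matrix stands in for the ML-estimated one.
mllt = np.eye(lda_dim) + 0.01 * rng.standard_normal((lda_dim, lda_dim))
features = projected @ mllt.T  # final features, shape: (n_frames, lda_dim)

print(features.shape)
```

The key structural point is that splicing raises the feature dimension (here 13 to 117), LDA reduces it to the modeling dimension, and MLLT rotates the reduced space without changing its dimensionality.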