Publication
ICASSP 2013
Conference paper

Unifying PLDA and polynomial kernel SVMs

Abstract

Probabilistic linear discriminant analysis (PLDA) is a generative model that explains between-class and within-class variations. When the underlying latent variables are modelled by standard Gaussian distributions, the PLDA recognition score can be evaluated as a dot product between a high-dimensional PLDA feature vector and a weight vector. A key contribution of this paper is showing that the high-dimensional PLDA feature vectors can be equivalently (in a non-strict sense) represented as the second-degree polynomial kernel induced features of the vector formed by concatenating the two input vectors constituting a trial. This equivalence paves the way for viewing the speaker recognition problem as a two-class support vector machine (SVM) training problem, where higher-degree polynomial kernels can give better discriminative power. To alleviate the large-scale SVM training problem, we propose a kernel evaluation trick that greatly simplifies the kernel evaluation operations. In our experiments, a combination of multiple second-degree polynomial kernel SVMs performed comparably to a state-of-the-art PLDA system. For the analysed test case, SVMs trained with a third-degree polynomial kernel reduced the EERs by 10% on average relative to those of the SVMs trained with a second-degree polynomial kernel. © 2013 IEEE.
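
The equivalence described in the abstract can be illustrated with a small sketch. The snippet below is a minimal illustration, not the paper's implementation: the vector dimension, the kernel offset of 1, the feature-map scaling, and the names x_enrol, x_test, and w are assumptions made for the example. It builds the explicit second-degree polynomial feature map of a concatenated trial vector, scores the trial as a dot product with a weight vector, and checks that the implicit kernel (z · z' + 1)^2 matches the dot product of the explicit features.

```python
import numpy as np

def poly2_features(z):
    """Explicit feature map phi(z) of the second-degree polynomial
    kernel k(z, z') = (z . z' + 1)^2: constant, scaled linear,
    squared, and scaled cross terms, so phi(z) . phi(z') = k(z, z')."""
    d = len(z)
    feats = [1.0]
    feats.extend(np.sqrt(2.0) * z)                     # linear terms
    feats.extend(z * z)                                # squared terms
    for i in range(d):
        for j in range(i + 1, d):
            feats.append(np.sqrt(2.0) * z[i] * z[j])   # cross terms
    return np.array(feats)

rng = np.random.default_rng(0)

# A trial is represented by concatenating its two input vectors
# (e.g. enrolment and test vectors; dimension 5 is arbitrary here).
x_enrol, x_test = rng.standard_normal(5), rng.standard_normal(5)
z = np.concatenate([x_enrol, x_test])

# A linear score in the explicit feature space, mirroring the
# PLDA-score-as-dot-product view (w is a hypothetical weight vector).
w = rng.standard_normal(poly2_features(z).shape[0])
score = w @ poly2_features(z)

# Sanity check: the implicit kernel equals the explicit dot product.
z2 = np.concatenate([rng.standard_normal(5), rng.standard_normal(5)])
assert np.isclose((z @ z2 + 1.0) ** 2,
                  poly2_features(z) @ poly2_features(z2))
print(f"trial score under the explicit map: {score:.4f}")
```

At higher kernel degrees the explicit feature map grows combinatorially with the input dimension, which is why a simplified kernel evaluation, like the trick the paper proposes, matters for scaling SVM training.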

Date

18 Oct 2013
