Rapid adaptation using penalized-likelihood methods
Abstract
In this paper, we introduce new rapid adaptation techniques that extend and improve two successful existing methods: cluster weighting (CW) and MAPLR. First, we introduce a new adaptation scheme called CWB, which extends the cluster weighting method by including a bias term and a reference speaker model. CWB is shown to improve adaptation performance compared to CW. Second, we introduce an extension of cluster weighting that uses penalized-likelihood objective functions to stabilize the estimation and provide soft constraints. Third, we propose a variant of MAPLR adaptation that uses prior speaker information. Previously, prior distributions of the transforms in MAPLR were obtained from the adaptation data itself, from speaker-independent HMM means, or by heuristics. We propose instead to derive the priors from prior information about speaker variability, using CW or CWB weights. Penalized-likelihood (Bayesian) theory serves as a tool to combine transformation-based and prior-speaker-information-based adaptation methods, resulting in effective rapid adaptation techniques. The proposed techniques are shown to outperform full, block-diagonal, and diagonal MLLR, as well as several other recently proposed rapid adaptation methods.
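As a rough illustration of the model forms the abstract refers to (the notation below is assumed for exposition and is not necessarily the paper's exact parameterization): in cluster weighting a speaker-adapted Gaussian mean is a weighted combination of cluster means, CWB adds a reference speaker mean and a bias term, and MAPLR replaces the maximum-likelihood transform estimate with a penalized-likelihood (MAP) estimate that incorporates a prior over the transform.

\[
\hat{\mu}_m^{\mathrm{CW}} = \sum_{c=1}^{C} w_c\,\mu_{m,c},
\qquad
\hat{\mu}_m^{\mathrm{CWB}} = \mu_{m,\mathrm{ref}} + \sum_{c=1}^{C} w_c\,\mu_{m,c} + b,
\]
\[
\hat{W}_{\mathrm{MAP}} = \arg\max_{W}\;\bigl[\log p(O \mid W, \lambda) + \log p(W)\bigr],
\]

where \(\mu_{m,c}\) denotes the mean of Gaussian \(m\) in cluster \(c\), \(w_c\) the cluster weights, \(b\) a bias vector, \(O\) the adaptation data, \(\lambda\) the HMM parameters, and \(p(W)\) the prior over the transform; in the proposed variant this prior is informed by CW or CWB weights rather than by heuristics.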