Publication
ICASSP 2021
Conference paper
Generalized Knowledge Distillation from An Ensemble of Specialized Teachers Leveraging Unsupervised Neural Clustering
Abstract
This paper proposes an improved generalized knowledge distillation framework with multiple dissimilar teacher networks, each specialized for a specific domain, to make a deployable student network more robust to challenging acoustic environments. We first describe a method for partitioning the training data to construct the ensemble of teachers, using unsupervised neural clustering with features based on context-dependent phonemes that represent each acoustic domain. Second, we show how a single student network built from the partitioned data can be trained effectively with multiple specialized teachers. During training, the weights of the student network are updated using a composite two-part cross-entropy loss computed from a pair of teachers: the specialized teacher matching the input speech and a generalized teacher trained on a balanced data set. Unlike system combination methods, we incorporate the benefits of multiple models into a single student network via knowledge distillation, which incurs no additional computational cost at decoding time. The effectiveness of the proposed technique is demonstrated on acoustically diverse signals contaminated by challenging real-world noise.
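The composite two-part cross-entropy loss described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the interpolation weight `lam` and the softening temperature `T` are assumptions, since the abstract does not specify how the two teacher terms are weighted.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def composite_kd_loss(student_logits, specialized_logits, generalized_logits,
                      lam=0.5, T=2.0):
    """Composite two-part cross entropy for knowledge distillation:
    a weighted sum of the cross entropy between the student's output
    distribution and (1) the soft targets of the specialized teacher
    selected for the input speech, and (2) the soft targets of the
    generalized teacher trained on a balanced data set.
    `lam` and `T` are illustrative hyperparameters, not from the paper."""
    log_p_student = np.log(softmax(student_logits, T))
    q_specialized = softmax(specialized_logits, T)
    q_generalized = softmax(generalized_logits, T)
    ce_spec = -(q_specialized * log_p_student).sum(axis=-1).mean()
    ce_gen = -(q_generalized * log_p_student).sum(axis=-1).mean()
    return lam * ce_spec + (1.0 - lam) * ce_gen
```

Because the gradient of this loss flows only through the student's logits, the two frozen teachers steer a single deployable network, which is why decoding cost does not grow with the number of teachers.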