Publication
EMNLP 2023
Conference paper

Knowledge Distillation ≈ Label Smoothing: Fact or Fallacy?

Abstract

Knowledge distillation (KD) was originally proposed as a method for transferring knowledge from one model to another, but some recent studies have suggested that it is in fact a form of regularization. Perhaps the strongest argument for this new perspective comes from KD's apparent similarities with label smoothing (LS). Here we re-examine this stated equivalence between the two methods by comparing the predictive confidences of the models they train. Experiments on four text classification tasks involving models of different sizes show that: (a) In most settings, KD and LS drive model confidence in completely opposite directions, and (b) In KD, the student inherits not only its knowledge but also its confidence from the teacher, reinforcing the classical knowledge transfer view.
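
To make the contrast concrete, below is a minimal sketch of the two training objectives the abstract compares, written in a PyTorch style. It is not the paper's code; the temperature and smoothing values are illustrative defaults, and only the standard formulations of the KD loss (Hinton et al.) and label-smoothed cross-entropy are shown.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    """Knowledge distillation: the student matches the teacher's softened distribution."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence from teacher to student, scaled by T^2 as in the standard KD objective
    return F.kl_div(student_log_probs, soft_targets, reduction="batchmean") * temperature ** 2

def ls_loss(logits, labels, smoothing=0.1):
    """Label smoothing: targets mix the one-hot label with a uniform distribution."""
    num_classes = logits.size(-1)
    one_hot = F.one_hot(labels, num_classes).float()
    smoothed_targets = (1.0 - smoothing) * one_hot + smoothing / num_classes
    log_probs = F.log_softmax(logits, dim=-1)
    return -(smoothed_targets * log_probs).sum(dim=-1).mean()
```

The structural similarity (both replace hard one-hot targets with softer ones) is what motivates the "KD ≈ LS" claim; the paper's confidence analysis examines whether the two actually behave alike in practice.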

Date

06 Dec 2023
