Kenneth L. Clarkson, Elad Hazan, et al.
Journal of the ACM
Off-the-shelf pre-trained models are increasingly common in machine learning. When deployed in the real world, it is essential that such models are not just accurate but also demonstrate qualities like fairness. This paper takes a closer look at recently proposed approaches that edit a pre-trained model for group fairness by re-weighting the training data. We offer perspectives that unify disparate weighting schemes from past studies and pave the way for new weighting strategies to address group fairness concerns.
Yuankai Luo, Veronika Thost, et al.
NeurIPS 2023
Aditya Malik, Nalini Ratha, et al.
CAI 2024
Stephen Obonyo, Isaiah Onando Mulang’, et al.
NeurIPS 2023