Jihun Yun, Aurelie Lozano, et al.
NeurIPS 2021
Post-processing in algorithmic fairness is a versatile approach for correcting bias in ML systems that are already deployed in production. The main appeal of post-processing is that it avoids expensive retraining. In this work, we propose a suite of general post-processing algorithms for individual fairness (IF). We consider a setting where the learner has access only to the predictions of the original model and a similarity graph between individuals that guides the desired fairness constraints. We cast the IF post-processing problem as a graph smoothing problem corresponding to graph Laplacian regularization, which preserves the desired "treat similar individuals similarly" interpretation. Our theoretical results demonstrate the connection of the new objective function to a local relaxation of the original individual fairness definition. Empirically, our post-processing algorithms correct individual biases in large-scale NLP models, e.g., BERT, while preserving accuracy.
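As a rough illustration of the graph-smoothing idea described in the abstract (a minimal sketch, not the paper's actual algorithm), the snippet below post-processes fixed model outputs by solving the Laplacian-regularized objective min_f ||f - y_hat||^2 + lam * f^T L f in closed form. The function name, the regularization strength lam, and the toy similarity matrix are illustrative assumptions.

```python
import numpy as np

def laplacian_smooth_predictions(y_hat, W, lam=1.0):
    """Post-process predictions by graph-Laplacian smoothing.

    y_hat : (n,) or (n, k) array of the original model's outputs (e.g., logits).
    W     : (n, n) symmetric similarity matrix between individuals.
    lam   : strength of the fairness (smoothness) regularizer.

    Solves  min_f ||f - y_hat||^2 + lam * f^T L f,
    whose closed-form solution is f = (I + lam * L)^{-1} y_hat.
    """
    D = np.diag(W.sum(axis=1))   # degree matrix
    L = D - W                    # unnormalized graph Laplacian
    n = W.shape[0]
    return np.linalg.solve(np.eye(n) + lam * L, y_hat)

# Toy usage: three individuals, the first two deemed similar;
# their predictions are pulled toward each other, the third is untouched.
y_hat = np.array([0.9, 0.1, 0.8])
W = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
print(laplacian_smooth_predictions(y_hat, W, lam=2.0))
```

For large similarity graphs, one would solve the linear system iteratively (e.g., with conjugate gradient) rather than via the dense solve shown here.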
Assala Benmalek, Celia Cintas, et al.
MICCAI 2024
Claudio Santos Pinhanez, Raul Fernandez, et al.
IUI 2024
Mateo Espinosa Zarlenga, Gabriele Dominici, et al.
ICML 2025