Publication
ICML 2022
Workshop paper

An Empirical Study of Modular Bias Mitigators and Ensembles

Abstract

Bias mitigators can reduce algorithmic bias in machine learning models, but their effect on fairness is often not stable across different data splits. A popular approach to train more stable models is ensemble learning. We built an open-source library enabling the modular composition of 10 mitigators, 4 ensembles, and their corresponding hyperparameters. We empirically explored the space of combinations on 13 datasets and distilled the results into a guidance diagram for practitioners.
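To illustrate the kind of modular composition the abstract describes, here is a minimal sketch that nests a pre-processing mitigator inside a bagging ensemble using a scikit-learn-style API. This is not the paper's library or its actual API; the DummyMitigator class is a hypothetical placeholder for a real mitigator (e.g., reweighing or data repair), included only so the example is self-contained and runnable.

```python
# Sketch: composing a (placeholder) bias mitigator with an ensemble,
# assuming a scikit-learn-style pipeline API. Not the paper's library.
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline


class DummyMitigator(BaseEstimator, TransformerMixin):
    """Hypothetical stand-in for a pre-estimator bias mitigator."""

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        # A real mitigator would repair or reweigh the data here.
        return X


X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Mitigator-inside-ensemble: each bootstrap member fits its own mitigated pipeline.
mitigated_member = make_pipeline(DummyMitigator(), LogisticRegression(max_iter=1000))
model = BaggingClassifier(estimator=mitigated_member, n_estimators=10, random_state=0)
model.fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))
```

The same composition could be inverted (ensemble inside mitigator), which is one of the configuration choices the study explores empirically.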

Date

22 Jul 2022
