SafeAI 2020
Conference paper

Fair enough: Improving fairness in budget-constrained decision making using confidence thresholds

Abstract

Increasing concern about discrimination and bias in data-driven decision making systems has led to growing academic and popular interest in algorithmic fairness. Prior work on fairness in machine learning has focused primarily on settings in which all the information (features) needed to make a confident decision about an individual is readily available. In practice, however, many applications allow further information to be acquired at a feature-specific cost. For example, when diagnosing a patient, a doctor starts with only a handful of symptoms but progressively improves the diagnosis by acquiring additional information before making a final decision. We show that we can achieve fairness by leveraging a natural affordance of this setting: the decision of when to stop acquiring more features and proceed to prediction. First, we show that by setting a single set of confidence thresholds for stopping, we can attain equal error rates across arbitrary groups. Second, we extend the framework to a set of group-specific confidence thresholds that ensure a classifier achieves equal opportunity (equal false-positive or false-negative rates). The confidence thresholds naturally achieve fairness by redistributing the acquisition budget across individuals. This not only leads to statistical fairness across groups but also addresses the limitation that current statistical fairness methods fail to provide any guarantees to individuals. Finally, using two public datasets, we confirm the effectiveness of our methods empirically and investigate their limitations.
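
To make the stopping rule above concrete, the following is a minimal Python sketch of confidence-threshold stopping with group-specific thresholds. It is an illustration only: the toy probability model, feature costs, budget, and threshold values are assumptions for demonstration, not the paper's models or numbers.

    import numpy as np

    def predict_proba(observed):
        """Toy stand-in for a classifier that can score partially observed
        individuals: average the observed feature values into a
        positive-class probability."""
        if not observed:
            return 0.5
        return float(np.clip(np.mean(list(observed.values())), 0.0, 1.0))

    def classify_with_stopping(features, threshold, budget):
        """Acquire features until the confidence max(p, 1 - p) reaches the
        (possibly group-specific) threshold or the per-individual budget is
        exhausted, then predict. `features` maps name -> (value, cost)."""
        observed, spent = {}, 0
        p = predict_proba(observed)
        for name, (value, cost) in features.items():
            if max(p, 1 - p) >= threshold or spent + cost > budget:
                break
            observed[name] = value       # pay to acquire the next feature
            spent += cost
            p = predict_proba(observed)  # re-score with richer information
        return int(p >= 0.5), p, spent

    # Hypothetical group-specific thresholds: the stricter threshold for
    # group "b" keeps the system acquiring features for harder cases,
    # redistributing the budget toward individuals who need more evidence.
    thresholds = {"a": 0.80, "b": 0.95}
    individual = {"x1": (0.9, 1), "x2": (0.7, 2), "x3": (0.8, 3)}
    for group, t in thresholds.items():
        label, conf, cost = classify_with_stopping(individual, t, budget=5)
        print(group, label, round(conf, 2), cost)

Under these toy numbers, the run for group "a" stops after acquiring one feature, while the run for group "b" keeps acquiring until the budget rules out the next feature; this is the budget-redistribution effect the abstract describes.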