Publication
NeurIPS 2020
Workshop paper
Learning to Design Fair and Private Voting Rules
Abstract
Various voting rules have been designed to aggregate user preferences into a collective decision. Beyond satisfying these multi-source preferences as well as possible, many other properties of such rules have been studied. In this paper we mainly focus on fairness and privacy: we want the collective decision process to be fair and also to preserve the privacy of those submitting the preferences. The fairness notion we define and study relates to the preference providers, since they are often the ones impacted by the collective decision. Consider different groups of a population (such as vegetarians and non-vegetarians deciding on a restaurant) having different preferences over the restaurant alternatives. A voting rule could lead to a decision preferred by a large group while being vastly disliked by smaller groups. Our notion of "group fairness" tries to avoid this. For the privacy criterion, we choose the widely accepted notion of local differential privacy (local DP). We first study our new notion of fairness in classical voting rules. We then study the trade-offs between fairness and privacy, showing that it is not always possible to obtain unconstrained maximal economic efficiency together with high fairness or strong differential privacy. Finally, we present a data-driven learning approach to designing new voting rules with customized properties, and use this learning framework to design fairer voting rules.
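The restaurant example and the local-DP criterion can both be made concrete with a short sketch. The code below is illustrative only: the paper's formal definitions are not reproduced on this page, so the group-fairness proxy (the minimum over groups of average Borda-style satisfaction), the group sizes, and the use of k-ary randomized response as the local-DP mechanism are all assumptions chosen to mirror the abstract, not the authors' exact constructions.

```python
import math
import random
from collections import Counter

ALTERNATIVES = ["steakhouse", "vegan_bistro", "diner"]

# Two groups with opposed rankings (most- to least-preferred).
# Sizes (70 vs. 30) are hypothetical, chosen only for illustration.
groups = {
    "non_vegetarians": [["steakhouse", "diner", "vegan_bistro"]] * 70,
    "vegetarians":     [["vegan_bistro", "diner", "steakhouse"]] * 30,
}

def satisfaction(ranking, winner):
    # Borda-style score: top choice scores m-1, last choice scores 0.
    return len(ranking) - 1 - ranking.index(winner)

def group_min_satisfaction(winner):
    # Assumed fairness proxy: the minimum over groups of the group's
    # average satisfaction with the winning alternative.
    return min(
        sum(satisfaction(r, winner) for r in rs) / len(rs)
        for rs in groups.values()
    )

all_rankings = [r for rs in groups.values() for r in rs]

# Plurality picks the most frequent top choice: "steakhouse" wins here,
# yet the vegetarian group's average satisfaction is 0 (their least-
# preferred option), while a compromise like "diner" raises the minimum.
plurality_winner = Counter(r[0] for r in all_rankings).most_common(1)[0][0]
print(plurality_winner, group_min_satisfaction(plurality_winner))
print("diner", group_min_satisfaction("diner"))

def randomize_top_choice(choice, eps):
    # Standard k-ary randomized response: report the true top choice with
    # probability e^eps / (e^eps + m - 1), otherwise a uniformly random
    # other alternative. This mechanism satisfies eps-local DP.
    m = len(ALTERNATIVES)
    p_true = math.exp(eps) / (math.exp(eps) + m - 1)
    if random.random() < p_true:
        return choice
    return random.choice([a for a in ALTERNATIVES if a != choice])

# eps-local-DP plurality: each voter perturbs their reported top choice
# before the tally, so no individual ballot is revealed exactly.
noisy_tally = Counter(randomize_top_choice(r[0], eps=1.0) for r in all_rankings)
print(noisy_tally.most_common())
```

Lowering eps in this sketch strengthens privacy but adds more noise to the tally, which is the fairness/privacy/efficiency tension the abstract describes.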