AutoFair: Human-compatible Automation of Fairness in AI

Addressing the need for trusted AI in a range of practical industrial applications, from recruitment to fintech and advertising.


The AutoFair project seeks to address the need for trusted AI and user-in-the-loop tools and systems in a range of industry applications. Together with use-case partners, the project develops methods for user-in-the-loop interaction, model transparency, fairness, and bias mitigation. The industry use cases come from partners in recruitment (Workable), analysing bias in screening and job offers; in fintech (dateio), applying ML models to card-linked financial products; and in advertising (IBM Watson Advertising), addressing fairness in sequential decision-making settings.

Figure: AutoFair project overview and work packages.

Our group focuses on developing human-centered methods and tools for capturing user-induced feedback loops, assessing AI model safety, and validating the results with the Watson Advertising business unit. The work brings together strands from AI automation, foundations of trusted AI (transparency and fairness), and human-centered AI (user control and value alignment through interactive AI).

Flexible certification of fairness

At one end, we consider risk-averse a priori guarantees on certain bias measures, enforced as hard constraints in the training process. At the other end, we consider post hoc, comprehensible but thorough presentation of all the trade-offs involved in designing the AI pipeline and their effect on industrial and bias outcomes.
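To make the hard-constraint idea concrete, here is a minimal, hypothetical sketch: selecting a decision threshold that maximizes accuracy subject to a hard cap on the demographic-parity gap between two groups. The function names, toy scores, and the particular bias measure are illustrative assumptions, not the project's actual implementation.

```python
# Hypothetical sketch: treat a bias measure (demographic-parity gap) as a
# hard feasibility constraint when choosing a decision threshold.
# All names and data below are invented for illustration.

def selection_rate(scores, groups, threshold, group):
    """Fraction of members of `group` whose score clears the threshold."""
    members = [s for s, g in zip(scores, groups) if g == group]
    return sum(s >= threshold for s in members) / len(members)

def parity_gap(scores, groups, threshold):
    """Absolute difference in selection rates between groups 0 and 1."""
    return abs(selection_rate(scores, groups, threshold, 0)
               - selection_rate(scores, groups, threshold, 1))

def accuracy(scores, labels, threshold):
    return sum((s >= threshold) == bool(y)
               for s, y in zip(scores, labels)) / len(labels)

def fair_threshold(scores, labels, groups, eps=0.1):
    """Most accurate threshold whose parity gap stays within eps."""
    candidates = sorted(set(scores))
    feasible = [t for t in candidates if parity_gap(scores, groups, t) <= eps]
    return max(feasible, key=lambda t: accuracy(scores, labels, t))

# Toy screening scores where group 1 is systematically scored lower.
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
labels = [1,   1,   1,   0,   1,   0,   0,   0]
groups = [0,   0,   0,   0,   1,   1,   1,   1]

t = fair_threshold(scores, labels, groups, eps=0.25)
print(t, parity_gap(scores, groups, t))
```

Infeasible thresholds are excluded outright rather than penalized, which is what distinguishes a hard a priori guarantee from the soft, penalty-based formulations often used in practice.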

User-in-the-loop: continuous iterative engagement among AI systems, their developers, and users

We seek both to inform users thoroughly about the possible algorithmic choices and their expected effects, and to learn their preferences regarding different fairness measures. We then aim to guide decision making, bringing together the benefits of automation in a human-compatible manner.

Toolkits for the automatic identification of various types of bias

We jointly consider and optimize potentially conflicting objectives (fairness, performance, runtime, resources) and visualize the trade-offs, making it possible to communicate them to practitioners, relevant government agencies, NGOs, and members of the public.
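When objectives conflict, the natural summary to communicate is the Pareto front: the configurations not strictly beaten on every objective. A minimal sketch, with invented pipeline names and two of the objectives above (accuracy, higher better; bias gap, lower better):

```python
# Hypothetical sketch: extract Pareto-optimal pipeline configurations when
# trading accuracy against a bias gap. Names and numbers are illustrative.

def dominates(a, b):
    """a dominates b if it is no worse on both objectives and better on one."""
    no_worse = a[0] >= b[0] and a[1] <= b[1]
    strictly_better = a[0] > b[0] or a[1] < b[1]
    return no_worse and strictly_better

def pareto_front(candidates):
    """candidates: dict of name -> (accuracy, bias_gap)."""
    return {name: obj for name, obj in candidates.items()
            if not any(dominates(other, obj)
                       for other in candidates.values() if other != obj)}

pipelines = {
    "baseline":    (0.90, 0.30),
    "reweighted":  (0.87, 0.12),
    "constrained": (0.84, 0.05),
    "post_hoc":    (0.83, 0.12),  # dominated by "reweighted"
}

front = pareto_front(pipelines)
print(sorted(front))
```

Plotting the surviving points gives exactly the kind of trade-off visualization that can be shown to practitioners and non-expert stakeholders alike.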

More information is available on the AutoFair website.