Fairness, Accountability, Transparency
Biases can lead to systematic disadvantages for marginalized individuals and groups — and they can arise at any point in the AI development lifecycle. To increase the accountability of high-risk AI systems, we're developing technologies to increase their end-to-end transparency and fairness.
Our work
IBM’s Stacy Hobson wants to build tech that works for everyone
Research: What is prompt-tuning?
News: Saška Mojsilović wants to channel AI for good. She may also make you rethink sour cabbage
News: Debugging foundation models for bias
Research: What is human-centered AI?
Explainer: New research helps make AI fairer in decision-making
- See more of our work on Fairness, Accountability, Transparency
Projects
AIMEE
- Human-Centered AI
- Fairness, Accountability, Transparency
- AI for Business Automation
Accelerator technologies
- Accelerated Discovery
- Human-Centered AI
- Neuro-symbolic AI
- Materials Discovery
- Fairness, Accountability, Transparency
Publications
- AAAI 2023
- AAAI 2023
- NeurIPS 2021
- INFORMS 2021
- Biological Psychiatry, 2021
- IJCAI 2021
Tools + code
AI Fairness 360
An open-source toolkit of metrics to check for unwanted bias in datasets and machine learning models, and state-of-the-art algorithms to mitigate such bias. Containing over 70 fairness metrics and 10 bias mitigation algorithms, it’s designed to turn fairness research into practical applications.
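To make the idea of a fairness metric concrete, here is a minimal sketch of one group-fairness measure of the kind AI Fairness 360 provides: statistical parity difference, the gap in favorable-outcome rates between an unprivileged and a privileged group. The function and variable names below are illustrative, not the toolkit's API.

```python
def statistical_parity_difference(outcomes, groups, privileged):
    """P(favorable | unprivileged) - P(favorable | privileged).

    outcomes:   list of 0/1 labels (1 = favorable, e.g. loan approved)
    groups:     list of group labels, aligned with outcomes
    privileged: the group label treated as privileged
    """
    priv = [y for y, g in zip(outcomes, groups) if g == privileged]
    unpriv = [y for y, g in zip(outcomes, groups) if g != privileged]
    rate = lambda ys: sum(ys) / len(ys)
    return rate(unpriv) - rate(priv)

# Toy data: group "a" (privileged) approved 3/4, group "b" approved 1/4.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(statistical_parity_difference(outcomes, groups, privileged="a"))
# 0.25 - 0.75 = -0.5; values near 0 indicate parity
```

A value near 0 indicates the two groups receive favorable outcomes at similar rates; the toolkit's mitigation algorithms aim to push such metrics toward that parity point.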
View project →
AI FactSheets 360
A toolkit for creating factsheets that document how an AI service operates: how it was trained and tested, its performance metrics, fairness and robustness checks, intended uses, maintenance, and other critical details.
View project →
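To illustrate what such a factsheet might capture, here is a hypothetical example sketched as plain JSON. The field names and values below are invented for illustration; AI FactSheets 360 lets teams define their own factsheet templates rather than mandating a fixed schema.

```python
import json

# A hypothetical factsheet for an AI service (all values illustrative).
factsheet = {
    "model_name": "loan-approval-classifier",
    "intended_use": "Rank consumer loan applications for human review",
    "training_data": "Historical applications, 2015-2020 (hypothetical)",
    "performance": {"accuracy": 0.91, "auc": 0.88},
    "fairness_checks": {"statistical_parity_difference": -0.03},
    "robustness_checks": ["adversarial perturbation sweep"],
    "maintenance": "Retrained quarterly; drift monitored weekly",
}

# Serialize for publication alongside the deployed service.
print(json.dumps(factsheet, indent=2))
```

Keeping such details in one machine-readable document is what lets consumers of an AI service audit its intended uses and checks without access to the development process itself.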