Fairness, Accountability, Transparency
Biases can lead to systematic disadvantages for marginalized individuals and groups — and they can arise at any point in the AI development lifecycle. To increase the accountability of high-risk AI systems, we're developing technologies that improve their end-to-end transparency and fairness.
Our work
IBM’s Stacy Hobson wants to build tech that works for everyone
Research | Kim Martineau
Topics: AI · Fairness, Accountability, Transparency · Human-Centered AI · Responsible Technology
What is prompt-tuning?
News | Kim Martineau
Topics: AI · Computer Vision · Fairness, Accountability, Transparency · Foundation Models · Machine Learning · Natural Language Processing
Saška Mojsilović wants to channel AI for good. She may also make you rethink sour cabbage
News | Kim Martineau
Topics: AI Transparency · Data and AI Security · Explainable AI · Fairness, Accountability, Transparency · Machine Learning
Debugging foundation models for bias
Research | Kim Martineau
Topics: Computer Vision · Explainable AI · Fairness, Accountability, Transparency · Foundation Models · Natural Language Processing
What is human-centered AI?
Explainer | Werner Geyer, Justin Weisz, Claudio Santos Pinhanez, and Elizabeth Daly | 6 minute read
Topics: Fairness, Accountability, Transparency · Human-Centered AI · Trustworthy AI
New research helps make AI fairer in decision-making
Research | Mikhail Yurochkin and Yuekai Sun | 5 minute read
Topics: AI · Fairness, Accountability, Transparency · Trustworthy AI
See more of our work on Fairness, Accountability, Transparency →
Projects
Accelerator Technologies
Publications
- Shivaram Subramanian and Wei Sun. INFORMS 2023.
- Karan Bhanot, Dennis Wei, et al. ESANN 2023.
- IJCAI 2023.
- Girmaw Abebe Tadesse, Celia Cintas, et al. npj Digital Medicine, 2023.
- ACS Fall 2023.
- Karan Bhanot, Ioana Baldini Soares, et al. AIES 2023.
Tools + code
AI Fairness 360
An open-source toolkit of metrics to check for unwanted bias in datasets and machine learning models, and state-of-the-art algorithms to mitigate such bias. Containing over 70 fairness metrics and 10 bias mitigation algorithms, it’s designed to turn fairness research into practical applications.
View project →
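As a quick illustration of how the toolkit is typically used, the sketch below builds a tiny, made-up hiring dataset with pandas, measures two of AI Fairness 360's metrics (disparate impact and statistical parity difference), and applies its Reweighing pre-processing algorithm to rebalance the data. The dataset, column names, and group definitions are invented for this example; it assumes the aif360 and pandas packages are installed.

```python
import pandas as pd

from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy tabular data (invented for this sketch): 'sex' is the protected
# attribute and 'hired' is the binary label we check for bias.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [0.9, 0.8, 0.4, 0.7, 0.9, 0.3, 0.5, 0.6],
    "hired": [1, 1, 0, 1, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Measure bias in the raw data.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())

# Mitigate by reweighing instances so favorable outcomes are balanced
# across privileged and unprivileged groups before model training.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
transformed = rw.fit_transform(dataset)

metric_rw = BinaryLabelDatasetMetric(
    transformed, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Statistical parity difference after reweighing:",
      metric_rw.statistical_parity_difference())
```

On this toy data the statistical parity difference moves from a large negative value toward zero after reweighing; in practice the reweighed dataset (with its instance weights) would then be passed to a downstream training pipeline.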
AI FactSheets 360
A toolkit for creating factsheets that document how an AI service operates, how it was trained and tested, its performance metrics, fairness and robustness checks, intended uses, maintenance, and other critical details.
View project →
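To make the idea concrete, here is a purely illustrative factsheet for a hypothetical credit-scoring service, written as a plain Python dictionary. The field names, model, and figures are assumptions for this sketch only; they are not the FactSheets 360 schema or real evaluation results.

```python
import json

# Illustrative only: a hypothetical factsheet capturing the kinds of details
# described above (purpose, training, evaluation, fairness, maintenance).
factsheet = {
    "service_name": "credit-risk-classifier",  # hypothetical service
    "intended_use": "Rank loan applications for manual review; "
                    "not for fully automated denial decisions.",
    "training_data": "Internal loan outcomes, 2015-2020 (illustrative).",
    "evaluation": {"accuracy": 0.87, "auc": 0.91},  # example figures only
    "fairness_checks": {
        "protected_attributes": ["sex", "age"],
        "disparate_impact": 0.92,  # example figure only
    },
    "robustness_checks": ["input perturbation tests"],
    "maintenance": "Quarterly retraining and fairness re-evaluation.",
}

print(json.dumps(factsheet, indent=2))
```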