Fairness, Accountability, Transparency
Biases can lead to systematic disadvantages for marginalized individuals and groups — and they can arise at any point in the AI development lifecycle. To increase the accountability of high-risk AI systems, we're developing technologies that make them more transparent and fair, end to end.
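One common way to surface such biases is a group-fairness metric. As a minimal illustrative sketch (not tied to any particular IBM toolkit; all names here are hypothetical), the statistical parity difference compares how often a model grants a favorable outcome to an unprivileged group versus a privileged one:

```python
def statistical_parity_difference(outcomes, groups, privileged):
    """P(favorable | unprivileged) - P(favorable | privileged).

    outcomes:   0/1 model decisions (1 = favorable), one per individual
    groups:     group labels aligned with outcomes
    privileged: the label of the privileged group
    """
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    rate = lambda xs: sum(xs) / len(xs)
    return rate(unpriv) - rate(priv)

# A value near 0 suggests parity; a negative value means the
# unprivileged group receives favorable outcomes less often.
diff = statistical_parity_difference(
    outcomes=[1, 0, 1, 1, 0, 0, 1, 0],
    groups=["a", "a", "a", "a", "b", "b", "b", "b"],
    privileged="a",
)  # 1/4 - 3/4 = -0.5
```

Audits like this apply at any lifecycle stage: on training labels before a model is fit, or on model decisions after deployment.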
Our work
- IBM reaffirms its commitment to the Rome Call for AI ethics (News, Mike Murphy)
- What is red teaming for generative AI? (Explainer, Kim Martineau)
- The latest AI safety method is a throwback to our maritime past (Research, Kim Martineau)
- What is AI alignment? (Explainer, Kim Martineau)
- IBM’s Stacy Hobson wants to build tech that works for everyone (Research, Kim Martineau)
- What is prompt-tuning? (News, Kim Martineau)
- See more of our work on Fairness, Accountability, Transparency
Projects
We're developing technologies that assist subject-matter experts in their scientific workflows by enabling human-AI co-creation.
Publications
Better Bias Benchmarking of Language Models via Multi-factor Analysis
- Hannah Powers, Ioana Baldini Soares, et al. (NeurIPS 2024)

Interpolating Item and User Fairness in Multi-Sided Recommendations
- Qinyi Chen, Jason Cheuk Nam Liang, et al. (NeurIPS 2024)

Protecting Users From Themselves: Safeguarding Contextual Privacy in Interactions with Conversational Agents
- Ivoline Ngong, Swanand Ravindra Kadhe, et al. (NeurIPS 2024)

Impact of Skin Tone Diversity on Out-of-Distribution Detection Methods in Dermatology
- Assala Benmalek, Celia Cintas, et al. (MICCAI 2024)

Biasly: An Expert-Annotated Dataset for Subtle Misogyny Detection and Mitigation
- Brooklyn Sheppard, Anna Richter, et al. (ACL 2024)

Using Causal Inference to Investigate Contraceptive Discontinuation in Sub-Saharan Africa
- Victor Akinwande, Megan Macgregor, et al. (IJCAI 2024)