Fairness, Accountability, Transparency
Biases can lead to systematic disadvantages for marginalized individuals and groups — and they can arise at any point in the AI development lifecycle. To increase the accountability of high-risk AI systems, we're developing technologies to increase their end-to-end transparency and fairness.
Our work
- IBM reaffirms its commitment to the Rome Call for AI ethics (News, Mike Murphy)
- What is red teaming for generative AI? (Explainer, Kim Martineau)
- The latest AI safety method is a throwback to our maritime past (Research, Kim Martineau)
- What is AI alignment? (Explainer, Kim Martineau)
- IBM’s Stacy Hobson wants to build tech that works for everyone (Research, Kim Martineau)
- What is prompt-tuning? (News, Kim Martineau)
- See more of our work on Fairness, Accountability, Transparency
Projects
Accelerator Technologies
We're developing technological solutions that assist subject-matter experts with their scientific workflows by enabling human-AI co-creation.
Publications
- Hannah Powers, Ioana Baldini Soares, et al. 2024. NeurIPS 2024
- Qinyi Chen, Jason Cheuk Nam Liang, et al. 2024. NeurIPS 2024
- Ivoline Ngong, Swanand Ravindra Kadhe, et al. 2024. NeurIPS 2024
- Assala Benmalek, Celia Cintas, et al. 2024. MICCAI 2024
- Brooklyn Sheppard, Anna Richter, et al. 2024. ACL 2024
- Victor Akinwande, Megan Macgregor, et al. 2024. IJCAI 2024