Explainable AI
Explanations go a long way toward building trust in AI systems. We're creating tools to help debug AI, so that systems can explain what they're doing. This includes training highly optimized, directly interpretable models, as well as generating explanations of black-box models and visualizations of neural-network information flows.
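As a minimal illustration of the "explanations of black-box models" idea, post-hoc methods probe an opaque model with perturbed inputs and report which features most influence a given prediction. The sketch below is a toy perturbation-based explainer; the `black_box` scorer and its weights are hypothetical stand-ins, and real tools (e.g., LIME, SHAP, or IBM's AI Explainability 360) are far more sophisticated.

```python
def black_box(features):
    # Hypothetical opaque scorer; the explainer never sees these weights.
    w = {"income": 0.6, "debt": -0.4, "age": 0.05}
    return sum(w[k] * v for k, v in features.items())

def perturbation_importance(model, instance, delta=1.0):
    """Score each feature by how much nudging it shifts the model output."""
    base = model(instance)
    importance = {}
    for name in instance:
        perturbed = dict(instance)
        perturbed[name] += delta
        importance[name] = abs(model(perturbed) - base)
    return importance

# Explain one prediction locally: which inputs drove this score?
x = {"income": 3.0, "debt": 1.5, "age": 40.0}
scores = perturbation_importance(black_box, x)
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # income and debt dominate; age barely matters here
```

The point of the sketch is that an explanation can be produced with query access alone, which is why such methods apply even when the model's internals are unavailable.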
Our work
The latest AI safety method is a throwback to our maritime past
Research, by Kim Martineau. Topics: AI, AI Transparency, Explainable AI, Fairness, Accountability, Transparency, Generative AI

Find and fix IT glitches before they crash the system
News, by Kim Martineau. Topics: AI for Code, AI for IT, Explainable AI, Foundation Models, Generative AI

What is retrieval-augmented generation?
Explainer, by Kim Martineau. Topics: AI, Explainable AI, Generative AI, Natural Language Processing, Trustworthy Generation

Did an AI write that? If so, which one? Introducing the new field of AI forensics
Explainer, by Kim Martineau. Topics: Adversarial Robustness and Privacy, AI, Explainable AI, Foundation Models, Generative AI, Trustworthy AI

Saška Mojsilović wants to channel AI for good. She may also make you rethink sour cabbage
News, by Kim Martineau. Topics: AI Transparency, Data and AI Security, Explainable AI, Fairness, Accountability, Transparency, Machine Learning

Debugging foundation models for bias
Research, by Kim Martineau. Topics: Computer Vision, Explainable AI, Fairness, Accountability, Transparency, Foundation Models, Natural Language Processing
Publications
- Smit Desai, Christina Wei, et al. (2024). CHI 2024.
- Felix Petersen, Aashwin Mishra, et al. (2024). ICLR 2024.
- Georgia Perakis, Wei Sun, et al. (2024). AISTATS 2024.
- Simona Rabinovici-Cohen, Neomi Fridman, et al. (2024). Cancers.
- Francesco Fabiano, Vishal Pallagani, et al. (2024). AAAI 2024.
- Turgay Caglar, Sirine Belhaj, et al. (2024). AAAI 2024.