Explainable AI
Explanations can go a long way toward building trust in AI systems. We’re creating tools to help debug AI, where systems can explain what they’re doing. This includes training highly optimized, directly interpretable models, as well as generating explanations of black-box models and visualizations of neural-network information flows.
Our work
Tools + code
AI Explainability 360
This open-source toolkit contains eight algorithms that help you comprehend how machine-learning models predict labels throughout the AI application lifecycle. It’s designed to translate algorithmic research into real-world use cases in a range of fields, such as finance, human capital management, healthcare, and education.
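To illustrate the kind of black-box explanation such a toolkit provides, here is a minimal, self-contained sketch of permutation importance: shuffle one feature at a time and measure how much the model's accuracy drops. This is a generic technique, not AI Explainability 360's API; the model and data below are hypothetical stand-ins.

```python
import random

# Hypothetical black-box model: predicts 1 when feature 0 exceeds a threshold.
# Feature 1 is pure noise, so a faithful explainer should score it near zero.
def black_box(row):
    return 1 if row[0] > 0.5 else 0

random.seed(0)
data = [[random.random(), random.random()] for _ in range(200)]
labels = [black_box(row) for row in data]  # labels come from the model itself

def accuracy(rows):
    """Fraction of rows on which the black box matches the recorded labels."""
    return sum(black_box(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(rows, feature):
    """Shuffle one feature column and report the resulting accuracy drop."""
    shuffled_col = [r[feature] for r in rows]
    random.shuffle(shuffled_col)
    perturbed = [r[:feature] + [v] + r[feature + 1:]
                 for r, v in zip(rows, shuffled_col)]
    return accuracy(rows) - accuracy(perturbed)

print(permutation_importance(data, 0))  # large drop: feature 0 drives predictions
print(permutation_importance(data, 1))  # zero drop: feature 1 is irrelevant
```

Because the predictions depend only on feature 0, permuting feature 1 changes nothing, while permuting feature 0 costs roughly half the accuracy; the gap between the two scores is the explanation.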
View project →

ECQA Dataset
The open-source Explanations for CommonsenseQA (ECQA) dataset is a resource for teaching AI systems to reason about the correct and incorrect answers to everyday commonsense questions.
View project →
Publications
- INFORMS 2021
- KDD 2021