Explainable AI
Explanations can go a long way toward building trust in AI systems. We’re creating tools to help debug AI, with systems that can explain what they’re doing. This includes training highly optimized, directly interpretable models, as well as generating explanations of black-box models and visualizing how information flows through neural networks.
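As a rough illustration of the black-box explanation idea mentioned above, the sketch below fits a small, directly interpretable surrogate tree to the predictions of an opaque model. It is a generic, minimal example using scikit-learn, not IBM's own explainability tooling, and the dataset and model choices are assumptions made only for the demonstration.

```python
# Minimal sketch: explain a "black-box" model with a shallow global surrogate tree.
# Generic scikit-learn illustration; not IBM's explainability tooling.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# The opaque model whose behavior we want to explain.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Global surrogate: a depth-3 tree trained to mimic the black box's predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = surrogate.score(X, black_box.predict(X))
print(f"Surrogate fidelity to black-box predictions: {fidelity:.2%}")

# The surrogate's decision rules serve as a human-readable explanation.
print(export_text(surrogate, feature_names=list(X.columns)))
```

The trade-off in this kind of post-hoc approach is fidelity versus readability: a deeper surrogate tracks the black box more closely but is harder for a person to follow.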
Our work
- IBM and RPI researchers demystify in-context learning in large language models (News, Peter Hess)
- The latest AI safety method is a throwback to our maritime past (Research, Kim Martineau)
- Find and fix IT glitches before they crash the system (News, Kim Martineau)
- What is retrieval-augmented generation? (Explainer, Kim Martineau)
- Did an AI write that? If so, which one? Introducing the new field of AI forensics (Explainer, Kim Martineau)
- Saška Mojsilović wants to channel AI for good. She may also make you rethink sour cabbage (News, Kim Martineau)
- See more of our work on Explainable AI
Publications
- Debarun Bhattacharjya, Junkyu Lee, et al. (2024). IJCAI 2024.
- Amit Dhurandhar, Swagatam Haldar, et al. (2024). ICML 2024.
- Yuya Jeremy Ong, Jay Pankaj Gala, et al. (2024). IEEE CISOSE 2024.
- Abdullah Caglar Oksuz, Anisa Halimi, et al. (2024). PETS 2024.
- 2024. SSE 2024.
- 2024. IGARSS 2024.