AI Testing
We’re designing tools to help ensure that AI systems are trustworthy and reliable, and that they can optimize business processes. We create tests that simulate real-life scenarios and localize faults in AI systems, and we’re working to automate the testing, debugging, and repair of AI models across a wide range of scenarios.
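As a rough illustration of the kind of test this involves, here is a minimal Python sketch of an invariance (metamorphic) test: apply meaning-preserving perturbations that simulate real-life input variation, and record every prediction flip, each of which localizes a fault to a specific input and perturbation. The toy model and perturbations are hypothetical stand-ins, not the actual tooling described here.

```python
# Minimal sketch of an invariance test for an AI model.
# The model and perturbations below are hypothetical stand-ins.

from typing import Callable, List, Tuple

def perturb_typo(text: str) -> str:
    """Mimic a realistic typo: drop the last character of longer words."""
    return " ".join(w[:-1] if len(w) > 3 else w for w in text.split())

def perturb_case(text: str) -> str:
    """Mimic another input variation: upper-case the whole input."""
    return text.upper()

def invariance_test(
    model: Callable[[str], str],
    inputs: List[str],
    perturbations: List[Callable[[str], str]],
) -> List[Tuple[str, str, str, str]]:
    """Collect (original, variant, expected, got) tuples where the label flips."""
    failures = []
    for text in inputs:
        expected = model(text)
        for perturb in perturbations:
            variant = perturb(text)
            got = model(variant)
            if got != expected:
                failures.append((text, variant, expected, got))
    return failures

if __name__ == "__main__":
    # Toy stand-in for a real sentiment classifier.
    def toy_model(text: str) -> str:
        return "positive" if "good" in text.lower() else "negative"

    cases = ["the service was good", "delivery was slow"]
    for failure in invariance_test(toy_model, cases, [perturb_typo, perturb_case]):
        print("FAIL:", failure)
```

Because each failure records the input, the variant, and both labels, a debugging pass can start from the exact perturbation that exposed the fault.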
Our work
- Tiny benchmarks for large language models (News, Kim Martineau)
- What is red teaming for generative AI? (Explainer, Kim Martineau)
- An open-source toolkit for debugging AI models of all data types (Technical note, Kevin Eykholt and Taesung Lee)
- AI diffusion models can be tricked into generating manipulated images (News, Kim Martineau)
- DOFramework: A testing framework for decision optimization model learners (Technical note, Orit Davidovich)
- Managing the risk in AI: Spotting the “unknown unknowns” (Research, Orna Raz, Sam Ackerman, and Marcel Zalmanovici, 5 minute read)
- IBM researchers check AI bias with counterfactual text (Research, Inkit Padhi, Nishtha Madaan, Naveen Panwar, and Diptikalyan Saha, 5 minute read; a toy sketch of the idea follows this list)
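As a toy sketch of the counterfactual idea referenced in the last item above: swap protected-attribute terms in an input and flag any input whose model score shifts noticeably. The swap table, tolerance, and scoring model below are hypothetical; the published work generates fluent counterfactuals with controlled text generation rather than simple word swaps.

```python
# Toy sketch of a counterfactual bias check; all names here are hypothetical.

from typing import Callable, Dict, List

# Hypothetical protected-attribute substitutions.
SWAPS: Dict[str, str] = {"he": "she", "his": "her", "him": "her", "man": "woman"}

def counterfactual(text: str) -> str:
    """Build a counterfactual input by swapping protected-attribute terms."""
    return " ".join(SWAPS.get(w, w) for w in text.lower().split())

def bias_check(
    score: Callable[[str], float],
    texts: List[str],
    tol: float = 0.05,
) -> List[str]:
    """Return inputs whose score shifts by more than `tol` under the swap."""
    return [t for t in texts if abs(score(t) - score(counterfactual(t))) > tol]

if __name__ == "__main__":
    # Toy scorer with a deliberate gender artifact, standing in for a real model.
    def toy_score(text: str) -> float:
        return 0.9 if "he" in text.lower().split() else 0.5

    # Prints ['he is a skilled engineer']: its score moves 0.9 -> 0.5 under the swap.
    print(bias_check(toy_score, ["he is a skilled engineer", "the code works"]))
```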
Publications
- BPM 2024.
- Abdelrahman Zayed, Gonçalo Mordido, et al. ACL 2024.
- Oscar Sainz, Iker García-Ferrero, et al. ACL 2024.
- Apoorva Nitsure, Youssef Mroueh, et al. ICML 2024.
- Yuya Jeremy Ong, Jay Pankaj Gala, et al. IEEE CISOSE 2024.
- Leshem Choshen, Ariel Gera, et al. LREC-COLING 2024.