AI Testing
We’re designing tools to help ensure that AI systems are trustworthy, reliable, and able to optimize business processes. We create tests that simulate real-life scenarios and localize faults in AI systems, and we’re working to automate the testing, debugging, and repair of AI models across a wide range of settings.
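As a rough illustration of what scenario-based testing with fault localization can look like, the sketch below runs a toy keyword classifier against a few label-preserving perturbations and reports the ones that flip its prediction. Everything here is a hypothetical stand-in for illustration; it does not reflect IBM's actual tooling.

```python
# Minimal sketch of scenario-based AI testing with fault localization.
# The model, perturbations, and pass/fail criteria are illustrative only.

def toy_sentiment_model(text: str) -> str:
    """Stand-in 'AI model' under test: a trivial keyword classifier."""
    positive = {"good", "great", "excellent"}
    negative = {"bad", "awful", "terrible"}
    words = set(text.lower().split())
    if words & positive and not words & negative:
        return "positive"
    if words & negative and not words & positive:
        return "negative"
    return "neutral"

def invariance_scenarios(text: str) -> list[str]:
    """Label-preserving perturbations simulating real-world input variation."""
    return [
        text.upper(),        # shouting
        text + "!!!",        # extra punctuation
        "  " + text + "  ",  # stray whitespace
    ]

def run_invariance_test(model, text: str) -> list[str]:
    """Return the perturbations under which the prediction flips; each failure
    localizes the fault to the model's handling of that input variation."""
    expected = model(text)
    return [p for p in invariance_scenarios(text) if model(p) != expected]

if __name__ == "__main__":
    failures = run_invariance_test(toy_sentiment_model, "The service was great")
    for f in failures:
        print(f"prediction changed under perturbation: {f!r}")
```

Run on the sample input, the test flags the punctuation scenario ("great!!!" is no longer recognized as a positive keyword), pinpointing tokenization as the faulty component rather than reporting a generic failure.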
Our work
An open-source toolkit for debugging AI models of all data types
Technical note by Kevin Eykholt and Taesung Lee
Topics: Adversarial Robustness and Privacy, AI Testing, Data and AI Security
AI diffusion models can be tricked into generating manipulated images
News by Kim Martineau
Topics: AI, AI Testing, Data and AI Security, Foundation Models, Generative AI, Security
DOFramework: A testing framework for decision optimization model learners
Technical note by Orit Davidovich
Topics: AI, AI Testing, Mathematical Sciences
Managing the risk in AI: Spotting the “unknown unknowns”
Research by Orna Raz, Sam Ackerman, and Marcel Zalmanovici (5-minute read)
Topics: AI, AI Testing
IBM researchers check AI bias with counterfactual text
Research by Inkit Padhi, Nishtha Madaan, Naveen Panwar, and Diptikalyan Saha (5-minute read)
Topics: AI Testing; Fairness, Accountability, Transparency
Publications
- Kevin Eykholt, Taesung Lee, et al. (2023). USENIX Security 2023.
- Frank Libsch and Hiroyuki Mori (2023). ECTC 2023.
- Samuel Ackerman, Lincoln Alexander, et al. (2023). AI Magazine.
- S. Ilker Birbil, Donato Maragno, et al. (2023). AAAI 2023.
- Nishtha Madaan, Adithya Manjunatha, et al. (2023). IAAI 2023.
- Francesco Fuggitti and Tathagata Chakraborti (2023). AAAI 2023.