Adversarial Robustness and Privacy
Even advanced AI systems can be vulnerable to adversarial attacks. We're building tools to protect AI and certify its robustness — from quantifying the vulnerability of neural networks to designing new attacks that inform better defenses. We're also helping AI systems adhere to privacy requirements.
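The kind of vulnerability this work targets can be illustrated with a minimal evasion attack in the spirit of the fast gradient sign method (FGSM). The toy linear model and numbers below are illustrative assumptions for the sketch, not any specific tool from this group.

```python
import numpy as np

# Toy logistic-regression "model" with fixed weights (illustrative assumption).
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict_proba(x):
    """Probability of the positive class under the toy model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A clean input the model confidently labels positive.
x = np.array([2.0, -1.0, 0.5])

# FGSM-style evasion: for a linear model, the gradient of the logit with
# respect to the input is exactly w, so stepping against sign(w) is the
# steepest way to lower the positive-class score.
eps = 1.5
x_adv = x - eps * np.sign(w)

print(predict_proba(x))      # above 0.5: classified positive
print(predict_proba(x_adv))  # below 0.5: a structured perturbation flips the label
```

Quantifying how small such a perturbation can be while still flipping the prediction is one way to measure a network's vulnerability; certification methods instead prove that no perturbation within a given budget can change the label.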
Our work
An open-source toolkit for debugging AI models of all data types
Technical note · Kevin Eykholt and Taesung Lee
Topics: Adversarial Robustness and Privacy, AI Testing, Data and AI Security
Did an AI write that? If so, which one? Introducing the new field of AI forensics
Explainer · Kim Martineau
Topics: Adversarial Robustness and Privacy, AI, Explainable AI, Foundation Models, Generative AI, Trustworthy AI
Manipulating stock prices with an adversarial tweet
Research · Kim Martineau
Topics: Adversarial Robustness and Privacy, Trustworthy AI
Securing AI systems with adversarial robustness
Deep Dive · Pin-Yu Chen · 8 minute read
Topics: Adversarial Robustness and Privacy, AI, Data and AI Security
Researchers develop defenses against deep learning hack attacks
Release · Ambrish Rawat, Killian Levacher, and Mathieu Sinn · 7 minute read
Topics: Adversarial Robustness and Privacy, Data and AI Security, Generative AI, Security, Trustworthy AI
AI goes anonymous during training to boost privacy protection
Release · Abigail Goldsteen · 4 minute read
Topics: Adversarial Robustness and Privacy, Data and AI Security, Machine Learning, Security
See more of our work on Adversarial Robustness and Privacy
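The privacy side of this work — such as anonymizing data during training — can be sketched with a minimal generalization step over quasi-identifiers. The record fields, bin width, and truncation scheme below are hypothetical choices for illustration, not the method used in the release above.

```python
# Minimal sketch: anonymize training records by generalizing quasi-identifiers
# (exact age -> age range, full ZIP -> prefix) before they reach a model.
records = [
    {"age": 34, "zip": "10001", "label": 1},
    {"age": 36, "zip": "10002", "label": 0},
    {"age": 51, "zip": "10001", "label": 1},
    {"age": 58, "zip": "10003", "label": 0},
]

def generalize(record, bin_width=10):
    """Coarsen identifying attributes so individual records blend together."""
    lo = (record["age"] // bin_width) * bin_width
    return {
        "age_range": f"{lo}-{lo + bin_width - 1}",
        "zip_prefix": record["zip"][:3] + "**",
        "label": record["label"],
    }

anonymized = [generalize(r) for r in records]
print(anonymized[0])  # {'age_range': '30-39', 'zip_prefix': '100**', 'label': 1}
```

Generalization like this trades some model accuracy for privacy: the coarser the bins, the harder it is to re-identify any one person from the training set.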
Publications
- Chulin Xie, Pin-Yu Chen, et al. (2024). IEEE SaTML 2024.
- Shengwei An, Sheng-yen Chou, et al. (2024). AAAI 2024.
- Janvi Thakkar, Giulio Zizzo, et al. (2024). AAAI 2024.
- Janvi Thakkar, Giulio Zizzo, et al. (2024). AAAI 2024.
- Shuli Jiang, Swanand Ravindra Kadhe, et al. (2023). NeurIPS 2023.
- Sheng-yen Chou, Pin-Yu Chen, et al. (2023). NeurIPS 2023.