Adversarial Robustness and Privacy
Even advanced AI systems can be vulnerable to adversarial attacks. We're building tools to protect AI and certify its robustness, including methods for quantifying the vulnerability of neural networks and new attacks that inform stronger defenses. We're also helping AI systems adhere to privacy requirements.
Our work
What is red teaming for generative AI?
Explainer by Kim Martineau. Topics: Adversarial Robustness and Privacy; AI; AI Testing; Fairness, Accountability, Transparency; Foundation Models; Natural Language Processing; Security; Trustworthy AI
An open-source toolkit for debugging AI models of all data types
Technical note by Kevin Eykholt and Taesung Lee. Topics: Adversarial Robustness and Privacy; AI Testing; Data and AI Security
Did an AI write that? If so, which one? Introducing the new field of AI forensics
Explainer by Kim Martineau. Topics: Adversarial Robustness and Privacy; AI; Explainable AI; Foundation Models; Generative AI; Trustworthy AI
Manipulating stock prices with an adversarial tweet
Research by Kim Martineau. Topics: Adversarial Robustness and Privacy; Trustworthy AI
Securing AI systems with adversarial robustness
Deep dive by Pin-Yu Chen (8 minute read). Topics: Adversarial Robustness and Privacy; AI; Data and AI Security
Researchers develop defenses against deep learning hack attacks
Release by Ambrish Rawat, Killian Levacher, and Mathieu Sinn (7 minute read). Topics: Adversarial Robustness and Privacy; Data and AI Security; Generative AI; Security; Trustworthy AI
Publications
- Ming-yu Chung, Sheng-yen Chou, et al. (2024). ICLR 2024.
- Xiangyu Qi, Yi Zeng, et al. (2024). ICLR 2024.
- Zishen Wan, Nandhini Chandramoorthy, et al. (2024). ASPLOS 2024.
- Chulin Xie, Pin-Yu Chen, et al. (2024). IEEE SaTML 2024.
- Shengwei An, Sheng-yen Chou, et al. (2024). AAAI 2024.
- Janvi Thakkar, Giulio Zizzo, et al. (2024). AAAI 2024.