Adversarial Robustness and Privacy
Even advanced AI systems can be vulnerable to adversarial attacks. We're building tools to protect AI models and certify their robustness, including methods for quantifying the vulnerability of neural networks and designing new attacks that lead to better defenses. We're also helping AI systems adhere to privacy requirements.
Our work
An open-source toolkit for debugging AI models of all data types
Technical note by Kevin Eykholt and Taesung Lee
Topics: Adversarial Robustness and Privacy, AI Testing, Data and AI Security
Did an AI write that? If so, which one? Introducing the new field of AI forensics
Explainer by Kim Martineau
Topics: Adversarial Robustness and Privacy, AI, Explainable AI, Foundation Models, Generative AI, Trustworthy AI
Manipulating stock prices with an adversarial tweet
Research by Kim Martineau
Topics: Adversarial Robustness and Privacy, Trustworthy AI
Securing AI systems with adversarial robustness
Deep dive by Pin-Yu Chen, 8 minute read
Topics: Adversarial Robustness and Privacy, AI, Data and AI Security
Researchers develop defenses against deep learning hack attacks
Release by Ambrish Rawat, Killian Levacher, and Mathieu Sinn, 7 minute read
Topics: Adversarial Robustness and Privacy, Data and AI Security, Generative AI, Security, Trustworthy AI
AI goes anonymous during training to boost privacy protection
Release by Abigail Goldsteen, 4 minute read
Topics: Adversarial Robustness and Privacy, Data and AI Security, Machine Learning, Security
See more of our work on Adversarial Robustness and Privacy
Publications
- Hannah Kim, Celia Cintas, et al. (2023). IJCAI 2023.
- Julia Hesse, Nitin Singh, et al. (2023). USENIX Security 2023.
- Sijia Liu, Pin-Yu Chen, et al. (2023). ICML 2023.
- Minhao Cheng, Rui Min, et al. (2023). ICML 2023.
- Sihui Dai, Saeed Mahloujifar, et al. (2023). ICML 2023.
Tools + code
ART: Adversarial Robustness Toolbox
A Python library for machine learning security that enables developers and researchers to defend and evaluate machine learning models and applications against the adversarial threats of evasion, poisoning, extraction, and inference.
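To make the evasion threat concrete, here is a minimal fast-gradient-sign (FGSM) sketch against a toy logistic-regression model. This is a self-contained NumPy illustration of the attack idea, not ART's actual API; all names and values are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps=0.5):
    """One-step evasion attack on a logistic-regression model.

    Perturbs input x by eps in the sign of the loss gradient,
    pushing the model away from the true label y (0 or 1).
    """
    p = sigmoid(np.dot(w, x) + b)     # model's predicted P(y=1 | x)
    grad_x = (p - y) * w              # d(cross-entropy loss)/dx
    return x + eps * np.sign(grad_x)  # bounded perturbation of x

# Toy model and a correctly classified point
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])  # w.x + b = 1.5, so P(y=1) ~ 0.82
y = 1

x_adv = fgsm_attack(x, y, w, b, eps=0.5)
p_clean = sigmoid(np.dot(w, x) + b)
p_adv = sigmoid(np.dot(w, x_adv) + b)  # confidence drops after the attack
```

A small input shift is enough to erase the model's confidence in the correct class; defenses in this space (adversarial training, certified bounds) aim to make such perturbations ineffective.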
View project →
AI Privacy 360
Tools to support the assessment of privacy risks of AI-based solutions, and to help them adhere to any relevant privacy requirements. Tradeoffs between privacy, accuracy, and performance can be explored at different stages in the machine learning lifecycle.
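One basic privacy-risk check of the kind such assessments rely on is k-anonymity: every combination of quasi-identifier values in a released dataset should occur at least k times. The sketch below is a generic illustration of that check, not AI Privacy 360's API; the records and field names are made up.

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """Return True if every quasi-identifier combination occurs >= k times."""
    counts = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(c >= k for c in counts.values())

# Toy release: age band and ZIP prefix are the quasi-identifiers
records = [
    {"age": "30-39", "zip": "021**", "diagnosis": "A"},
    {"age": "30-39", "zip": "021**", "diagnosis": "B"},
    {"age": "40-49", "zip": "021**", "diagnosis": "A"},
    {"age": "40-49", "zip": "021**", "diagnosis": "C"},
]
ok = is_k_anonymous(records, ["age", "zip"], k=2)  # each group has 2 members
```

Generalizing quasi-identifiers (age bands, ZIP prefixes) raises k but blurs the data, which is exactly the privacy/accuracy tradeoff the toolkit helps explore.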
View project →
Diffprivlib: The IBM Differential Privacy Library
A general-purpose library for experimenting with, investigating, and developing applications in differential privacy.
View project →
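The core idea behind differential privacy can be sketched with the Laplace mechanism: clip each value to a known range, then add noise scaled to the query's sensitivity divided by the privacy budget epsilon. This is a minimal NumPy sketch of the concept, not diffprivlib's API; the function name and parameters are illustrative.

```python
import numpy as np

def dp_mean(values, epsilon, lower, upper, rng=None):
    """Epsilon-differentially-private mean via the Laplace mechanism.

    Clipping each value to [lower, upper] bounds the mean's sensitivity
    at (upper - lower) / n; Laplace noise with scale sensitivity/epsilon
    then masks any single individual's contribution.
    """
    if rng is None:
        rng = np.random.default_rng()
    clipped = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(clipped)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return clipped.mean() + noise

vals = np.array([3.0, 5.0, 7.0, 9.0])   # true mean is 6.0
release = dp_mean(vals, epsilon=1.0, lower=0.0, upper=10.0,
                  rng=np.random.default_rng(42))
```

Smaller epsilon means more noise and stronger privacy; larger epsilon recovers the exact mean, which is the privacy/accuracy tradeoff in its simplest form.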