Adversarial Robustness and Privacy
Even advanced AI systems can be vulnerable to adversarial attacks. We’re building tools to protect AI and certify its robustness, including methods to quantify the vulnerability of neural networks and new attacks that inform stronger defenses. We’re also helping AI systems adhere to privacy requirements.
Our work
Manipulating stock prices with an adversarial tweet
Research: Securing AI systems with adversarial robustness
Deep Dive: Researchers develop defenses against deep learning hack attacks
Release: AI goes anonymous during training to boost privacy protection
Release: Preparing deep learning for the real world – on a wide scale
Research: Adversarial Robustness Toolbox: One Year Later with v1.4
Release: IBM Differential Privacy Library: The single line of code that can protect your data
Tools + code
ART: Adversarial Robustness Toolbox
A Python library for machine learning security that enables developers and researchers to defend and evaluate machine learning models and applications against the adversarial threats of evasion, poisoning, extraction, and inference.
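As a minimal sketch of the evasion workflow, the snippet below wraps a toy PyTorch model in ART's documented PyTorchClassifier, crafts adversarial examples with the Fast Gradient Method, and checks how far accuracy drops. The model architecture, placeholder data, and eps value are illustrative choices, not a recommended configuration.

```python
# Sketch of an ART evasion-robustness check; model, data, and eps are
# placeholders chosen only for illustration.
import numpy as np
import torch
import torch.nn as nn

from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Toy model standing in for a real classifier.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Wrap the model so ART can query predictions and gradients.
classifier = PyTorchClassifier(
    model=model,
    loss=loss,
    optimizer=optimizer,
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Placeholder test data; in practice this would be a held-out test set.
x_test = np.random.rand(16, 1, 28, 28).astype(np.float32)
y_test = np.random.randint(0, 10, size=16)

# Craft adversarial examples and measure accuracy on the perturbed inputs.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x_test)

preds = classifier.predict(x_adv).argmax(axis=1)
print("Accuracy on adversarial examples:", (preds == y_test).mean())
```

The same wrapped classifier can be reused across ART's other attack and defense modules, which is the point of the estimator abstraction: evaluate once, then swap in poisoning, extraction, or inference threat models.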
View project →
AI Privacy 360
Tools to support the assessment of privacy risks of AI-based solutions, and to help them adhere to any relevant privacy requirements. Tradeoffs between privacy, accuracy, and performance can be explored at different stages in the machine learning lifecycle.
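The kind of tradeoff exploration described here can be illustrated with a small sweep over privacy budgets. The sketch below uses diffprivlib (one of the related IBM toolkits, not the AI Privacy 360 API itself) to train the same model at several epsilon values and compare test accuracy; the dataset and epsilon grid are arbitrary choices.

```python
# Illustrative privacy/accuracy tradeoff sweep, sketched with diffprivlib
# rather than AI Privacy 360's own tooling; dataset and epsilons are
# arbitrary.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from diffprivlib.models import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Bounds must be supplied explicitly so the privacy guarantee does not
# leak information through data-dependent preprocessing.
bounds = (X.min(axis=0), X.max(axis=0))

for epsilon in (0.1, 0.5, 1.0, 5.0):
    clf = GaussianNB(epsilon=epsilon, bounds=bounds)
    clf.fit(X_train, y_train)
    print(f"epsilon={epsilon:>4}: accuracy={clf.score(X_test, y_test):.3f}")
```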
View project →
Diffprivlib: The IBM Differential Privacy Library
A general-purpose library for experimenting with, investigating, and developing applications in differential privacy.
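As a minimal sketch, diffprivlib exposes both drop-in differentially private analytics and the raw noise mechanisms underneath. The epsilon values, bounds, and data below are placeholders.

```python
# Two entry points into diffprivlib: a drop-in DP replacement for a
# NumPy call, and the underlying Laplace mechanism. Parameters here are
# illustrative.
import numpy as np
from diffprivlib import tools
from diffprivlib.mechanisms import Laplace

a = np.random.rand(1000)  # placeholder data in [0, 1]

# Differentially private mean: same call shape as np.mean, plus a
# privacy budget and explicit bounds.
dp_mean = tools.mean(a, epsilon=0.5, bounds=(0, 1))
print("DP mean:", dp_mean)

# The primitive underneath: a Laplace mechanism randomising one value.
mech = Laplace(epsilon=0.5, sensitivity=1.0)
print("Noisy value:", mech.randomise(0.3))
```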
View project →
Publications
- Big Data 2022
- NAACL 2022
- CCS 2021
- CCS 2021
- INFORMS 2021
- ESORICS 2021