
Adversarial Robustness Toolbox

Securing AI models with the Adversarial Robustness Toolbox

Overview

The Adversarial Robustness Toolbox (ART) is an open-source project for machine learning security. Started by IBM, it has recently been donated to the Linux Foundation for AI (LFAI) as part of the Trustworthy AI tools. ART focuses on four classes of threats: Evasion (changing a model's behavior through input modifications), Poisoning (controlling a model through training-data modifications), Extraction (stealing a model through queries), and Inference (attacking the privacy of the training data). ART aims to support all popular ML frameworks, tasks, and data types. It is under continuous development, led by our team, to support both internal and external researchers and developers in defending AI against adversarial attacks and making AI systems more secure.
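To make the red-team side concrete, here is a minimal sketch of an Evasion attack using ART's Python API together with a scikit-learn model. The dataset, the SVC model, and parameter values such as `eps` and `clip_values` are illustrative assumptions, not recommendations:

```python
# A minimal evasion sketch, assuming ART is installed
# (`pip install adversarial-robustness-toolbox`) alongside scikit-learn.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import SklearnClassifier

# Load a small tabular dataset and one-hot encode the labels for ART.
x, y = load_iris(return_X_y=True)
y = np.eye(3)[y]
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=42)

# Wrap the scikit-learn model in an ART estimator; clip_values bounds the features.
model = SVC(C=1.0, kernel="rbf")
classifier = SklearnClassifier(model=model, clip_values=(0.0, 8.0))
classifier.fit(x_train, y_train)

# Evasion: craft adversarial test inputs with the Fast Gradient Method.
attack = FastGradientMethod(estimator=classifier, eps=0.5)
x_test_adv = attack.generate(x=x_test)

# Compare accuracy on clean vs. adversarial inputs.
def accuracy(x_eval):
    preds = np.argmax(classifier.predict(x_eval), axis=1)
    return np.mean(preds == np.argmax(y_test, axis=1))

print(f"Clean accuracy:       {accuracy(x_test):.2f}")
print(f"Adversarial accuracy: {accuracy(x_test_adv):.2f}")
```

The same `generate` interface is shared by ART's other evasion attacks, so swapping in a stronger attack is typically a one-line change.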

Figure: Adversarial threats against machine learning covered by ART
Figure: The AI Red and Blue Team approach of ART

Learn about ART:

  • Available on GitHub
  • See the latest enhancements here

Meet the developers of ART: