Tutorial

Practical Adversarial Robustness in Deep Learning: Problems and Solutions

Abstract

Deep learning has achieved tremendous success in fields such as computer vision and natural language processing. Despite this success, modern deep learning systems remain vulnerable to adversarial attacks. Consider computer vision: given an image X of a bagel, a deep learning-based image classifier correctly recognizes X as a bagel. Yet a slightly perturbed version of X, which to the human eye is still plainly a bagel, can be misclassified by that same model as a grand piano. In mission-critical systems such as transportation, autonomous vehicles, and healthcare, the real-world consequences of such failures can be severe. This tutorial does not merely survey attack types; it shows how to mount them in practice, tests the susceptibility of state-of-the-art pre-trained models and optimizers to these attacks, and applies the latest and best techniques to defend against adversarial attacks, grounded in the principles of adversarial learning.

This tutorial also aims to provide a holistic and complementary overview of how the same adversarial technique can be used in entirely different ways, for good and (unintentionally) for bad, so that AI researchers and developers gain a fresh perspective and can reflect on the resulting impacts and responsibilities. As concrete examples, generative adversarial networks (GANs) can generate photo-realistic synthetic images, but the same techniques can be repurposed into troublesome tools such as Deepfakes. Conversely, adversarial attacks that cause prediction evasion are often associated with a troublemaker or a security breach, yet the same techniques are used to improve model robustness and to enable novel applications, such as adversarial training, privacy-enhanced training, data augmentation, watermarking, and integrity testing, to name a few.
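As a concrete illustration of the perturbation attack described above, here is a minimal sketch of the fast gradient sign method (FGSM), one standard way to craft such adversarial examples. This is not code from the tutorial materials: the model choice, the `epsilon` budget, and the tensor names are illustrative assumptions.

```python
# Minimal FGSM sketch. The model choice, epsilon budget, and tensor names
# are illustrative assumptions, not taken from the tutorial materials.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(pretrained=True).eval()  # any differentiable classifier

def fgsm_attack(model, x, label, epsilon=0.03):
    """One signed-gradient step that pushes x toward misclassification."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Perturb every pixel by +/- epsilon in the direction that raises the loss.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # stay in the valid pixel range [0, 1]

# Usage sketch: x is a (1, 3, 224, 224) image tensor in [0, 1] (normalization
# omitted for brevity) and y its true label, e.g. "bagel".
#   x_adv = fgsm_attack(model, x, y)
# model(x_adv).argmax(dim=1) may now differ from y (say, "grand piano"),
# even though x_adv looks identical to x to the human eye.
```

The same perturbation, generated on the fly and mixed into each training minibatch, is the core idea behind adversarial training, one of the defenses the tutorial covers.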

Date

18 Jun 2021

Publication

CVPR 2021
