Publication
ICLR 2024
Conference paper

Sharpness-Aware Data Poisoning Attack

Abstract

Recent research has highlighted the vulnerability of Deep Neural Networks (DNNs) to data poisoning attacks. These attacks aim to inject poisoning samples into the models’ training dataset so that the resulting trained models fail at inference. While previous studies have developed many types of attacks, one major challenge that greatly limits their effectiveness is the uncertainty of the re-training process after the injection of poisoning samples: the attacker does not know the training initialization, the training algorithm, or the model architecture that will be used. To address this challenge, we propose a new strategy called “Sharpness-Aware Data Poisoning Attack (SAPA)”. In particular, it leverages the concept of DNNs’ loss landscape sharpness to optimize the poisoning effect on the (approximately) worst re-trained model. Extensive experiments demonstrate that SAPA offers a general and principled strategy that significantly enhances numerous poisoning attacks against various types of re-training uncertainty.
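To make the core idea concrete, below is a minimal, hypothetical PyTorch sketch of one sharpness-aware poisoning update. It is not the paper's released implementation; the function name sharpness_aware_poison_step, the radius rho, the variable poison_delta, and the simplified single-step objective are all assumptions made for illustration. It mirrors a SAM-style inner step: first perturb the model weights toward the (approximately) worst nearby model, then update the poisoning perturbation against that worst-case model.

import torch

def sharpness_aware_poison_step(model, loss_fn, x, y, poison_delta,
                                rho=0.05, step_size=0.01):
    # 1) Poisoning loss at the current weights.
    poison_delta.requires_grad_(True)
    loss = loss_fn(model(x + poison_delta), y)
    # 2) SAM-style ascent: move the weights to the (approximately)
    #    worst model within an L2 ball of radius rho.
    grads = torch.autograd.grad(loss, list(model.parameters()))
    grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads))
    eps = [rho * g / (grad_norm + 1e-12) for g in grads]
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            p.add_(e)
    # 3) Update the poison against the perturbed, worst-case model
    #    (gradient ascent on its loss; a projection onto an
    #    imperceptibility constraint would normally follow).
    worst_loss = loss_fn(model(x + poison_delta), y)
    (g_delta,) = torch.autograd.grad(worst_loss, poison_delta)
    with torch.no_grad():
        poison_delta.add_(step_size * g_delta.sign())
        for p, e in zip(model.parameters(), eps):  # restore original weights
            p.sub_(e)
    return poison_delta.detach()

In the full attack this inner step would sit inside the bilevel poisoning loop, so that the crafted poison remains effective across the range of models the victim might obtain after re-training.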
