Conference paper

Analyzing federated learning through an adversarial lens

Abstract

Federated learning distributes model training among a multitude of agents who, guided by privacy concerns, perform training using their local data but share only model parameter updates, which are iteratively aggregated at the server into an overall global model. In this work, we explore how the federated learning setting gives rise to a new threat, model poisoning, which is distinct from traditional data poisoning. Model poisoning is carried out by an adversary controlling a small number of malicious agents (usually one) with the aim of causing the global model to misclassify a set of chosen inputs with high confidence. We explore a number of attack strategies for deep neural networks, starting with targeted model poisoning that boosts the malicious agent's update to overcome the effects of the other agents. We also propose two critical notions of stealth that the server can use to detect malicious updates, and show that they can be bypassed by including them in the adversarial objective, yielding stealthy model poisoning. We further improve attack stealth with an alternating minimization strategy that alternately optimizes for stealth and the adversarial objective. We also empirically demonstrate that Byzantine-resilient aggregation strategies are not robust to our attacks. Our results show that effective and stealthy model poisoning attacks are possible, highlighting vulnerabilities in the federated learning setting.
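Below is a minimal sketch of the explicit-boosting idea behind targeted model poisoning in federated averaging, not the paper's implementation: the toy linear parameter vector, the synthetic benign_update and malicious_update functions, target_delta, and the boost factor are all illustrative assumptions.

```python
# Minimal sketch of explicit boosting in federated averaging (illustrative only;
# the toy "model", benign_update, malicious_update, target_delta, and the boost
# factor are assumptions, not the paper's code).
import numpy as np

num_agents, dim = 10, 5
global_weights = np.zeros(dim)

def benign_update(weights, seed):
    # Stand-in for one round of local training on an agent's private data:
    # a small random parameter delta.
    local_rng = np.random.default_rng(seed)
    return -0.1 * local_rng.normal(scale=0.1, size=weights.shape)

def malicious_update(target_delta, boost):
    # The malicious agent computes the delta serving its adversarial objective
    # and boosts it so that averaging with benign updates does not wash it out.
    return boost * target_delta

target_delta = 0.5 * np.ones(dim)  # direction causing the chosen misclassification (assumed)

for rnd in range(3):
    deltas = [benign_update(global_weights, seed) for seed in range(num_agents - 1)]
    deltas.append(malicious_update(target_delta, boost=num_agents))
    # Server-side aggregation: plain federated averaging of parameter updates.
    global_weights = global_weights + np.mean(deltas, axis=0)
    print(f"round {rnd}: global weights = {np.round(global_weights, 2)}")
```

With the boost set to roughly the number of agents, the averaged update is close to the adversary's intended delta; this is why the abstract pairs boosting with stealth terms and alternating minimization, so the boosted update still resembles a benign one to the server.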

Date

09 Jun 2019

Publication

ICML 2019

Authors

Arjun Nitin Bhagoji, Supriyo Chakraborty, Prateek Mittal, Seraphin Calo
