Publication
Computers & Security (Paper)

SparSFA: Towards robust and communication-efficient peer-to-peer federated learning

Abstract

Federated Learning (FL) has emerged as a powerful paradigm to collaboratively train machine learning (ML) models while preserving the privacy of the participants’ datasets. However, standard FL approaches have limitations that can hinder their applicability in certain settings. For instance, the reliance on a central server or aggregator to orchestrate the learning process may be infeasible in scenarios with limited connectivity, as in some IoT applications, and offers less flexibility to personalize the ML models for the different participants. To sidestep these limitations, peer-to-peer FL (P2PFL) provides more flexibility, allowing participants to train their own models in collaboration with their neighbors. However, given the huge number of parameters in typical Deep Neural Network architectures, the communication burden can also be very high. Moreover, it has been shown that standard aggregation schemes for FL are very brittle against data and model poisoning attacks. In this paper, we propose SparSFA, an algorithm for P2PFL that reduces communication costs. We show that our method outperforms competing sparsification methods in P2P scenarios, speeding up convergence and enhancing stability during training. SparSFA also includes a mechanism to mitigate poisoning attacks for each participant in any random network topology. Our empirical evaluation on real datasets for intrusion detection in IoT, considering both balanced and imbalanced-dataset scenarios, shows that SparSFA is robust to different indiscriminate poisoning attacks launched by one or multiple adversaries, outperforming other robust aggregation methods while reducing the communication costs through sparsification.
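
As a rough illustration of the two ingredients the abstract refers to, sparsified model updates and robust aggregation of neighbors' updates, the Python sketch below combines generic top-k sparsification with a coordinate-wise median. It is not the SparSFA algorithm from the paper; the function names and parameters (top_k_sparsify, median_aggregate, k_fraction) are hypothetical and chosen only for illustration.

# Illustrative sketch only: generic top-k sparsification and coordinate-wise
# median aggregation, two common building blocks in communication-efficient,
# robust P2P federated learning. NOT the SparSFA algorithm itself.
import numpy as np

def top_k_sparsify(update: np.ndarray, k_fraction: float = 0.01) -> np.ndarray:
    """Keep only the largest-magnitude entries of a model update."""
    flat = update.ravel().copy()
    k = max(1, int(k_fraction * flat.size))
    # Zero out everything except the k entries with the largest magnitude.
    drop_idx = np.argsort(np.abs(flat))[:-k]
    flat[drop_idx] = 0.0
    return flat.reshape(update.shape)

def median_aggregate(own_update: np.ndarray, neighbor_updates: list) -> np.ndarray:
    """Combine a peer's own update with its neighbors' updates via a
    coordinate-wise median, a standard robust statistic that limits the
    influence of a poisoned update."""
    stacked = np.stack([own_update] + neighbor_updates, axis=0)
    return np.median(stacked, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    own = top_k_sparsify(rng.normal(size=(1000,)), 0.05)
    # Simulated sparsified updates from three neighbors, one of them poisoned
    # (its values are scaled up by a factor of 10).
    neighbors = [top_k_sparsify(rng.normal(size=(1000,)), 0.05) for _ in range(2)]
    neighbors.append(top_k_sparsify(10.0 * rng.normal(size=(1000,)), 0.05))
    aggregated = median_aggregate(own, neighbors)
    print("non-zero entries communicated per update:", int(np.count_nonzero(own)))
    print("aggregated update norm:", float(np.linalg.norm(aggregated)))

In this toy run, only a small fraction of each update's entries would need to be communicated, and the scaled (poisoned) neighbor has limited influence on the median-aggregated result; the paper's actual mechanism and its evaluation on intrusion-detection datasets are described in the full text.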