Publication
IEEE TPAMI
Paper

A(DP)^2SGD: Asynchronous Decentralized Parallel Stochastic Gradient Descent with Differential Privacy


Abstract

As deep learning models are usually massive and complex, distributed learning is essential for increasing training efficiency. Moreover, in many real-world application scenarios like healthcare, distributed learning can also keep the data local and protect privacy. Recently, the Asynchronous Decentralized Parallel Stochastic Gradient Descent (ADPSGD) algorithm has been proposed and demonstrated to be an efficient and practical strategy: there is no central server, and each computing node communicates only with its neighbors. Although no raw data are transmitted across local nodes, there is still a risk of information leakage during the communication process that malicious participants can exploit for attacks. In this paper, we present a differentially private version of the asynchronous decentralized parallel SGD framework, A(DP)^2SGD for short, which maintains the communication efficiency of ADPSGD and prevents inference attacks by malicious participants. Specifically, Rényi differential privacy is used to provide a tighter privacy analysis for our composite Gaussian mechanisms, while the convergence rate is consistent with the non-private version. Theoretical analysis shows that A(DP)^2SGD converges at the same optimal O(1/T) rate as SGD. Empirically, A(DP)^2SGD achieves model accuracy comparable to the differentially private version of Synchronous SGD (SSGD) but runs much faster than SSGD in heterogeneous computing environments.
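The abstract describes two ingredients: a Gaussian mechanism applied to clipped local gradients, and decentralized (server-free) model exchange between neighboring workers. The sketch below illustrates the general idea only; the function names, the ring topology, and the sequential simulation of asynchronous pairwise averaging are illustrative assumptions, not the paper's actual algorithm or hyperparameters.

```python
import numpy as np

def clip(g, C):
    # Clip gradient to L2 norm at most C (standard DP-SGD preprocessing).
    norm = np.linalg.norm(g)
    return g * min(1.0, C / norm) if norm > 0 else g

def dp_gradient(g, C, sigma, rng):
    # Gaussian mechanism: clip, then add N(0, (sigma*C)^2) noise per coordinate.
    return clip(g, C) + rng.normal(0.0, sigma * C, size=g.shape)

def adpsgd_step(params, grads, lr, C, sigma, rng):
    """One simulated round of a decentralized DP-SGD scheme.

    Each worker applies a privatized local SGD update, then averages
    its model with its right neighbor on a ring; the random visiting
    order loosely stands in for asynchronous pairwise exchanges.
    (Hypothetical sketch -- not the paper's exact update rule.)
    """
    n = len(params)
    for i in range(n):
        params[i] = params[i] - lr * dp_gradient(grads[i], C, sigma, rng)
    for i in rng.permutation(n):
        j = (i + 1) % n  # right neighbor on an assumed ring topology
        avg = 0.5 * (params[i] + params[j])
        params[i], params[j] = avg.copy(), avg.copy()
    return params
```

Because every exchange is a pairwise average rather than an all-reduce through a central server, a slow worker delays only its neighbors, which is the source of the speedup over synchronous SGD in heterogeneous clusters.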

Date

01 Jan 2021
