Abstract
In Federated Learning (FL), a significant body of research has focused on defending against malicious clients. However, clients are not the only party that can behave maliciously. The aggregator itself may tamper with the model to bias it towards certain outputs, or adapt the weights to aid in reconstructing a client's private data. In this work we tackle the open problem of efficiently verifying the computations performed by the aggregator in FL. We develop a novel protocol which, by using binding commitments, prevents an aggregator from modifying the resulting model and only permits the aggregator to sum the supplied weights. We provide a proof of correctness for our protocol, demonstrating that any tampering by an aggregator will be detected. Additionally, we evaluate our protocol's overheads on three datasets and show that even for large neural networks with millions of parameters the commitments can be computed in under 20 seconds.
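To illustrate the general idea behind verifying an aggregator with binding commitments (this is a simplified sketch, not the paper's actual protocol), the snippet below uses additively homomorphic Pedersen-style commitments over integer-encoded weight updates: the product of the clients' commitments must equal a commitment to the claimed aggregate, so any modification of the sum is detected. The group parameters, variable names, and fixed-point encoding are illustrative assumptions, not taken from the paper.

```python
# Sketch only: toy Pedersen-style commitments for checking an aggregator's sum.
# Parameters are chosen for readability, not for cryptographic security.

import secrets

p = 2**255 - 19          # toy prime modulus (illustrative only)
g = 2                    # toy generator
# In practice h must be chosen so that nobody knows log_g(h) (trusted setup);
# deriving it from a random exponent here is purely for the sketch.
h = pow(g, secrets.randbits(128) | 1, p)

def commit(m: int, r: int) -> int:
    """Pedersen-style commitment C = g^m * h^r mod p (binding and hiding)."""
    return (pow(g, m, p) * pow(h, r, p)) % p

# Each client commits to its fixed-point-encoded weight update.
updates = [314, 159, 265]
blinds = [secrets.randbits(128) for _ in updates]
commitments = [commit(m, r) for m, r in zip(updates, blinds)]

# The aggregator claims this sum; clients also supply the sum of their blinds.
claimed_sum = sum(updates)
sum_of_blinds = sum(blinds)

# Verification: the product of the individual commitments equals a commitment
# to the aggregate if and only if the aggregator really summed the inputs.
product = 1
for c in commitments:
    product = (product * c) % p

assert product == commit(claimed_sum, sum_of_blinds)
print("Aggregate consistent with client commitments.")
```

In a real FL deployment each scalar weight would be committed to (or packed) in this way, which is why the per-round commitment cost reported in the abstract scales with the number of model parameters.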