Publication
ESORICS 2021
Workshop paper

Anonymizing Machine Learning Models

Abstract

There is a known tension between the need to analyze personal data to drive business and privacy concerns. Many data protection regulations, including the EU General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), set out strict restrictions and obligations on companies that collect or process personal data. Moreover, machine learning models themselves can be used to derive personal information, as demonstrated by recent membership and attribute inference attacks. Anonymized data, however, is exempt from data protection principles and obligations. Thus, models built on anonymized data are also exempt from any privacy obligations, in addition to providing better protection against such attacks on the training data. Learning on anonymized data typically results in a significant degradation in accuracy. Our goal in this work is to anonymize the data in a way that minimizes the impact on model accuracy. We address this challenge by guiding the anonymization with the knowledge encoded within the model, targeting it to minimize the impact on the model's accuracy, a process we call accuracy-guided anonymization. We demonstrate that by focusing on the model's accuracy rather than on information loss, our method outperforms state-of-the-art k-anonymity methods in terms of the achieved utility, in particular for high values of k and large numbers of quasi-identifiers. We also demonstrate that our approach achieves similar, and in some cases better, results in preventing membership inference attacks than alternative approaches based on differential privacy, while being much less complex and resource-intensive. This shows that model-guided anonymization can, in some cases, be a legitimate substitute for such methods, while averting some of their inherent drawbacks, such as complexity, performance overhead, and being tailored to specific model types. We also show that our method is able to defend against other classes of attacks, such as attribute inference. As opposed to methods that rely on adding noise during training, our approach does not require any modifications to the training algorithm itself, and can work even with "black-box" models where the data owner has no control over the training process. As such, it can be applied in a wide variety of use cases, including ML-as-a-service.
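
To make the idea of accuracy-guided anonymization concrete, below is a minimal sketch of one way it could be realized; it is not the paper's exact algorithm. The trained classifier `model`, the DataFrame `X`, the `quasi_identifiers` list, the group size `k`, the surrogate decision tree, and the median-based generalization are all illustrative assumptions (numeric quasi-identifiers only). The guidance here comes from the target model's own predictions rather than from a generic information-loss metric.

```python
# A minimal sketch of accuracy-guided k-anonymization (illustrative, not the
# authors' published algorithm). Assumes: a trained scikit-learn classifier
# `model`, a pandas DataFrame `X` of training records, a list
# `quasi_identifiers` of numeric column names, and a target group size `k`.

import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier


def accuracy_guided_anonymize(model, X: pd.DataFrame,
                              quasi_identifiers, k: int) -> pd.DataFrame:
    # Use the model's predictions as the "knowledge encoded within the model".
    y_pred = model.predict(X)

    # Partition records into groups of at least k by fitting a surrogate tree
    # on the quasi-identifiers, with the model's predictions as the target.
    # min_samples_leaf=k enforces the minimum group size.
    surrogate = DecisionTreeClassifier(min_samples_leaf=k, random_state=0)
    surrogate.fit(X[quasi_identifiers], y_pred)
    groups = surrogate.apply(X[quasi_identifiers])  # leaf index per record

    # Generalize: replace each record's quasi-identifier values with a
    # per-group representative, so every group of >= k records shares
    # identical quasi-identifier values.
    X_anon = X.copy()
    for leaf in np.unique(groups):
        mask = groups == leaf
        for col in quasi_identifiers:
            X_anon.loc[mask, col] = X.loc[mask, col].median()
    return X_anon
```

In this sketch, the surrogate tree splits the data only where doing so matters for reproducing the model's predictions, so records that the model treats alike end up generalized together; a retrained model should therefore lose less accuracy than with generalizations chosen purely to minimize information loss.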