26 Jan 2021

AI goes anonymous during training to boost privacy protection

Privacy is vital – even more so in the modern era of AI. But AI trained on personal data can be hacked.

Even if the hacker doesn’t access the training data, there’s still a risk of leaking sensitive information from the models themselves. For example, it may be possible to reveal if someone’s data is part of the model’s training set, and even infer sensitive attributes about the person, such as salary.

We’ve tried to address this privacy issue in our latest work.

IBM Research AI Privacy and Compliance Toolkit

Our team of researchers from IBM Haifa and Dublin has developed software to help assess the privacy risk of AI models, as well as reduce the amount of personal data used in AI training. This software could be useful for fintech, healthcare, insurance, security, or any other industry relying on sensitive data for training.

Using our software, we created AI models that are privacy-preserving and compliant.

Training with differential privacy

Consider a bank training AI to predict the type of customers most likely to default on mortgage payments. The AI has to comply with restrictions and obligations attached to processing personal data, so it wouldn’t be possible to share the model with other banks because of privacy concerns.

Differential Privacy (DP) could help. Applied during the training process, DP limits the effect any single person's data can have on the model's output. It gives robust, mathematical privacy guarantees against potential attacks on an individual, while still delivering accurate population statistics. DP is available through Diffprivlib, a general-purpose library that provides generic tools for data analysis and implementations of machine learning models trained with DP.
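As a rough illustration, here is a minimal sketch of what training with DP can look like using the open source Diffprivlib library. The public dataset, the feature scaling and the epsilon value are illustrative choices only; epsilon is the privacy budget, and smaller values mean stronger privacy guarantees, usually at some cost in accuracy.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from diffprivlib.models import LogisticRegression

# A public dataset standing in for sensitive training data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scale features so each record's norm can be bounded; Diffprivlib uses
# this bound (data_norm) to calibrate the noise it adds during training.
scaler = MinMaxScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# epsilon is the privacy budget: smaller epsilon, stronger privacy.
clf = LogisticRegression(epsilon=1.0, data_norm=float(np.sqrt(X_train.shape[1])))
clf.fit(X_train, y_train)

print("Test accuracy with DP:", clf.score(X_test, y_test))
```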

However, DP works best when there are only one or a few models to train. That's because a different DP method has to be applied for each specific model type and architecture, making the approach tricky to use in large organizations with many different models.

Accuracy-guided anonymization

That’s where anonymization can be handy – applied to the data before training the model.

Anonymization applies generalizations to the data, making records similar to one another by blurring their specific values so they are no longer unique. For example, instead of listing a person's age as exactly 34, it can be listed as a range between 30 and 40.
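To make the idea concrete, here is a small, purely illustrative sketch of such generalizations on toy data. The columns and the binning choices are hypothetical, not part of the toolkit.

```python
import pandas as pd

# Toy records; in practice these would be rows of sensitive training data.
df = pd.DataFrame({
    "age": [34, 61, 28, 47],
    "zip": ["94040", "10027", "94043", "60614"],
})

# Generalize exact ages into 10-year bands so individual values stop being unique.
bands = [f"{b}-{b + 10}" for b in range(0, 100, 10)]
df["age"] = pd.cut(df["age"], bins=range(0, 101, 10), labels=bands)

# Generalize ZIP codes by keeping only the first three digits.
df["zip"] = df["zip"].str[:3] + "**"
print(df)
```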

But traditional anonymization algorithms don’t consider the specific analysis the data is being used for. What if a 10-year range of ages is too general for an organization’s needs? After all, a 12-year-old is very different from a 21-year-old when it comes to, say, taking medication. When these anonymization techniques are applied in the context of machine learning, they tend to significantly degrade the model’s accuracy.

Our solution: The Machine Learning Model Anonymization tool.

This technology anonymizes machine learning models while being guided by the model itself. We customize the data generalizations, optimizing them for the model's specific analysis, which results in an anonymized model with higher accuracy. The method is agnostic to the specific learning algorithm and can be applied to any machine learning model, making it easy to integrate into existing MLOps pipelines.

Machine learning model anonymization tool

The process takes as input a trained machine learning model, its training data, the desired k value and a list of quasi-identifiers. The privacy parameter k determines how many records will be indistinguishable from each other in the dataset. For example, a k value of 100 means that every sample in the training set will look identical to 99 others. The quasi-identifiers are features that can be used to re-identify individuals, either on their own or in combination with additional data.
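To make the role of k and the quasi-identifiers concrete, here is a small sketch (illustrative only, not the toolkit's API) that checks whether a generalized dataset is k-anonymous with respect to a chosen set of quasi-identifiers:

```python
import pandas as pd

def is_k_anonymous(df: pd.DataFrame, quasi_identifiers: list, k: int) -> bool:
    """True if every combination of quasi-identifier values appears in at
    least k records, i.e. each record hides among at least k - 1 others."""
    group_sizes = df.groupby(quasi_identifiers).size()
    return bool((group_sizes >= k).all())

# Toy generalized data: two groups of identical quasi-identifier values.
data = pd.DataFrame({
    "age":   ["30-40", "30-40", "30-40", "60-70", "60-70", "60-70"],
    "zip":   ["940**", "940**", "940**", "100**", "100**", "100**"],
    "label": [1, 0, 1, 0, 0, 1],
})
print(is_k_anonymous(data, quasi_identifiers=["age", "zip"], k=3))  # True
print(is_k_anonymous(data, quasi_identifiers=["age", "zip"], k=4))  # False
```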

Figure: The results of training a machine learning classifier after applying our anonymization algorithm (the blue line, marked AG), compared with a few typical anonymization algorithms (Median Mondrian, Hilbert-curve and R+ tree) on two different datasets. The graphs show the effect of increasing the privacy parameter k on the model's accuracy.

The software then creates an anonymized version of the training data, which is later used to retrain the model, resulting in an anonymized version of the model free from any data processing restrictions. This makes the model less prone to inference attacks, as we show in our paper, Anonymizing Machine Learning Models.1
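The end-to-end flow can be sketched roughly as follows. This is a simplified, hypothetical reconstruction of the accuracy-guided idea, not the toolkit's actual implementation or API: a decision tree fitted to the original model's predictions partitions the quasi-identifier space into cells of at least k records, and each record's quasi-identifier values are replaced by a representative of its cell before retraining. The data, the choice of models and the way a representative value is picked are all illustrative assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# Toy sensitive dataset; "age" and "zip3" play the role of quasi-identifiers.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "age": rng.integers(18, 90, 500),
    "zip3": rng.integers(100, 999, 500),
    "income": rng.normal(50_000, 15_000, 500),
})
y = ((X["income"] + 500 * (X["age"] > 40)) > 55_000).astype(int)
k, quasi_identifiers = 100, ["age", "zip3"]

# 1. Train the original model on the raw data.
model = RandomForestClassifier(random_state=0).fit(X, y)

# 2. Partition the quasi-identifier space with a tree fitted to the model's
#    own predictions, forcing every leaf to contain at least k records.
tree = DecisionTreeClassifier(min_samples_leaf=k, random_state=0)
tree.fit(X[quasi_identifiers], model.predict(X))

# 3. Generalize: records in the same leaf get the same representative
#    quasi-identifier values, so each hides among at least k - 1 others.
X_anon = X.copy().astype({"age": float, "zip3": float})
leaves = tree.apply(X[quasi_identifiers])
for leaf in np.unique(leaves):
    mask = leaves == leaf
    for col in quasi_identifiers:
        X_anon.loc[mask, col] = X.loc[mask, col].median()

# 4. Retrain on the anonymized data; the retrained model is the one to share.
anon_model = RandomForestClassifier(random_state=0).fit(X_anon, y)
print("Original accuracy:", model.score(X, y))
print("Anonymized accuracy:", anon_model.score(X_anon, y))
```

The actual tool optimizes the generalizations for the model's accuracy far more carefully; the sketch only conveys the shape of the anonymize-then-retrain loop.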

Having tested our technology on publicly available datasets, we've obtained promising results. With relatively high values of k and large sets of quasi-identifiers, we created anonymized machine learning models with very little accuracy loss (see the figure above).

Next, we aim to run our models on real-life data and to see if the results hold. And we plan to extend our method from just tabular data to different kinds of data, including images.

You can try out our open source toolkit on GitHub.

References

  1. Goldsteen, A., Ezov, G., Shmelkin, R., Moffie, M. & Farkash, A. Anonymizing Machine Learning Models. arXiv:2007.13086 [cs] (2021).