Publication
RECOMB 2024
Poster

Privacy-Preserving Feature Selection for Sparse Linear Regression on Gene Expression Data

Abstract

Privacy-Preserving Machine Learning (PPML) provides protocols for learning and statistical analysis of data that may be distributed amongst multiple data owners (e.g., hospitals that own proprietary healthcare data), while preserving data privacy. The PPML literature includes protocols for various learning methods, including ridge regression. Ridge regression controls the L2 norm of the model, but does not aim to strictly reduce the number of non-zero coefficients, namely the L0 norm of the model. Reducing the number of non-zero coefficients (a form of feature selection) is important for avoiding overfitting, and for reducing the cost of using learnt models in practice. In this work, we develop a first privacy-preserving protocol for sparse linear regression under L0 constraints. The protocol addresses data contributed by several data owners (e.g., hospitals). Our protocol outsources the bulk of the computation to two non-colluding servers, using homomorphic encryption as a central tool. We provide a rigorous security proof for our protocol, where security is against semi-honest adversaries controlling any number of data owners and at most one server. We implemented our protocol and evaluated its performance on datasets with nearly a million samples and up to 40 features.
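To make the L0-constrained objective concrete, the following is a plaintext sketch (in the clear, with no privacy machinery) of exhaustive best-subset selection, i.e., least-squares regression restricted to at most k non-zero coefficients. This is only an illustration of the statistical task; the paper's contribution is computing such a model privately, and the function name and toy data here are illustrative assumptions, not the authors' implementation.

```python
import itertools
import numpy as np

def best_subset_regression(X, y, k):
    """L0-constrained least squares by exhaustive search: fit on every
    feature subset of size at most k and keep the lowest-RSS model.
    (Illustrative only; the paper performs this task under encryption.)"""
    n, d = X.shape
    best_rss, best_support, best_coef = np.inf, None, None
    for size in range(1, k + 1):
        for support in itertools.combinations(range(d), size):
            cols = list(support)
            coef, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            rss = float(np.sum((y - X[:, cols] @ coef) ** 2))
            if rss < best_rss:
                best_rss, best_support, best_coef = rss, support, coef
    return best_support, best_coef

# Toy data: the response depends only on features 0 and 2,
# so the best 2-subset should recover exactly that support.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + 0.01 * rng.normal(size=200)
support, coef = best_subset_regression(X, y, k=2)
```

Unlike the ridge penalty, which shrinks all coefficients toward zero, this search returns a model whose L0 norm is at most k, which is the notion of sparsity the protocol enforces.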
