Hironori Takeuchi, Tetsuya Nasukawa, et al.
Transactions of the Japanese Society for Artificial Intelligence
Ensuring group fairness in federated learning (FL) presents unique challenges due to data heterogeneity and communication constraints. We propose Kernel Fair Federated Learning (KFFL), a novel algorithmic framework that incorporates group fairness into FL models using the Kernel Hilbert-Schmidt Independence Criterion (KHSIC) as a fairness regularizer. To address scalability, KFFL approximates the KHSIC with random feature maps, significantly reducing computational and communication overhead while achieving group fairness. To address the resulting non-convex composite optimization problem, we propose FedProxGrad, a federated proximal gradient algorithm that guarantees convergence. Through experiments on standard benchmark datasets across both IID and Non-IID settings for regression and classification tasks, KFFL demonstrates its ability to balance accuracy and fairness effectively, outperforming existing methods by comprehensively exploring the accuracy–fairness trade-offs. Furthermore, we introduce KFFL-TD, a time-delayed variant that further reduces communication rounds, enhancing efficiency in decentralized environments. Code is available at github.com/Huzaifa-Arif/KFFL.
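The abstract's key scalability idea — approximating the (K)HSIC fairness regularizer with random feature maps — can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the Gaussian kernel choice, and the biased HSIC estimator are all illustrative assumptions. Random Fourier features make the kernel matrices low-rank, so the estimate costs O(nD²) instead of the O(n²) of the exact kernel statistic.

```python
import numpy as np

def random_fourier_features(x, n_features=100, gamma=1.0, seed=0):
    """Approximate a Gaussian (RBF) kernel with random Fourier features:
    k(a, b) ~= phi(a) @ phi(b). (Illustrative helper, not from the paper.)"""
    rng = np.random.default_rng(seed)
    x = np.atleast_2d(np.asarray(x, dtype=float)).reshape(len(x), -1)
    d = x.shape[1]
    w = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(x @ w + b)

def hsic_rff(y_pred, s, n_features=100):
    """Biased HSIC estimate between model outputs and a sensitive attribute,
    computed in random-feature space. Small values indicate near-independence,
    so this quantity can serve as a group-fairness regularizer."""
    n = len(y_pred)
    phi = random_fourier_features(y_pred, n_features, seed=0)
    psi = random_fourier_features(s, n_features, seed=1)
    phi_c = phi - phi.mean(axis=0)       # centering replaces H K H
    psi_c = psi - psi.mean(axis=0)
    c = phi_c.T @ psi_c / n              # cross-covariance in feature space
    return float(np.sum(c ** 2))         # squared Frobenius norm ~= HSIC
```

In a federated setting, each client would add a term of this form (weighted by a fairness coefficient) to its local loss; the random feature dimension D controls the accuracy/communication trade-off the abstract alludes to.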