Publication
SIGMOD 2024
Workshop paper

Federated Fine-Tuning of LLMs on the Very Edge: The Good, the Bad, the Ugly

Abstract

With the emergence of AI regulations such as the EU AI Act, requirements for clear data lineage, low data bias, and energy efficiency have become a priority for everyone offering AI services. Being pre-trained on vast and versatile data, large language models and foundation models (FMs) offer a good basis for building high-quality deep learning pipelines. Fine-tuning, which requires orders of magnitude less data than pre-training, can further improve model performance on a specific downstream task. However, access to high-quality and low-bias data for model fine-tuning is often limited by technical or regulatory constraints. Federated learning (FL), a distributed and privacy-preserving technique, offers a well-suited approach to significantly expanding data access for model fine-tuning. Yet this data is often located at the network edge, where energy, computational, and communication resources are far more limited than in data centers. In our paper, we conduct an end-to-end evaluation of fine-tuning the FLAN-T5 FM family at the network edge. We study energy efficiency potentials throughout FL systems: on clients, in communication, and on the server. Our analysis introduces energy efficiency as a real-time metric to assess the computational efficiency of an FL system. We show the stark need for further improvements in communication efficiency when working with FMs and demonstrate the importance of adaptive FL optimizers for FM training.
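
To make the role of adaptive FL optimizers concrete, the sketch below shows a FedAdam-style server update (Reddi et al.), one common adaptive federated optimizer; it is a minimal illustration only, and the function names, hyperparameters, and toy data are assumptions, not the paper's actual experimental setup.

```python
# Minimal, hypothetical sketch of a FedAdam-style server update.
# Illustrative only; not the configuration evaluated in the paper.
import numpy as np

def fedadam_server_step(weights, client_deltas, m, v,
                        lr=1e-2, beta1=0.9, beta2=0.99, tau=1e-3):
    """Apply one adaptive server update from averaged client model deltas."""
    delta = np.mean(client_deltas, axis=0)        # aggregate client updates
    m = beta1 * m + (1 - beta1) * delta           # first moment (momentum)
    v = beta2 * v + (1 - beta2) * delta ** 2      # second moment (adaptivity)
    weights = weights + lr * m / (np.sqrt(v) + tau)
    return weights, m, v

# Toy usage: three clients each send a delta for a 4-parameter model.
w = np.zeros(4)
m, v = np.zeros(4), np.zeros(4)
deltas = [np.random.randn(4) * 0.01 for _ in range(3)]
w, m, v = fedadam_server_step(w, deltas, m, v)
```

Unlike plain FedAvg, the per-coordinate second-moment scaling dampens noisy updates from heterogeneous edge clients, which is why adaptive server optimizers matter for FM fine-tuning in such settings.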