FLASH: AUTOMATING FEDERATED LEARNING USING CASH
Abstract
We present FLASH, a framework that addresses, for the first time, the central AutoML problem of Combined Algorithm Selection and Hyperparameter optimization (CASH) in the context of Federated Learning (FL). To limit training cost, FLASH incrementally adapts the set of algorithms to train based on their projected loss rates, while supporting decentralized (federated) implementations of the embedded hyper-parameter optimization (HPO), model selection, and loss calculation problems. We provide a theoretical analysis of the training and validation loss under FLASH, and of their tradeoff with the training cost, measured as the data wasted in training sub-optimal algorithms. Through extensive experimental investigation on several datasets, we evaluate three variants of FLASH and show that FLASH performs close to centralized CASH methods.
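To make the elimination mechanism in the abstract concrete, the following is a minimal sketch of the idea of pruning candidate algorithms by projected loss rate across training rounds. It is an illustration under our own assumptions, not the paper's actual algorithm or API: the names (Arm, train_round, projected_loss, flash_like_search), the simulated per-algorithm loss curves, and the elimination margin are all hypothetical.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Arm:
    """One candidate algorithm (with its own embedded HPO state)."""
    name: str
    losses: list = field(default_factory=list)

    def train_round(self) -> float:
        # Stand-in for one federated round: clients train locally and
        # report a validation loss. Here we merely simulate a noisy,
        # decaying loss curve per algorithm (purely illustrative values).
        base = {"gbdt": 0.30, "mlp": 0.25, "linear": 0.45}[self.name]
        step = len(self.losses) + 1
        loss = base + 1.0 / step + random.uniform(-0.02, 0.02)
        self.losses.append(loss)
        return loss

    def projected_loss(self, horizon: int = 5) -> float:
        # Project the loss a few rounds ahead from the most recent slope;
        # only downward trends are extrapolated.
        if len(self.losses) < 2:
            return self.losses[-1]
        slope = self.losses[-1] - self.losses[-2]
        return self.losses[-1] + horizon * min(slope, 0.0)

def flash_like_search(arms, rounds=10, margin=0.05):
    """Each round, train the surviving arms, then drop any arm whose
    projected loss cannot come within `margin` of the current leader,
    limiting data wasted on sub-optimal algorithms."""
    for _ in range(rounds):
        for arm in arms:
            arm.train_round()
        best = min(a.projected_loss() for a in arms)
        arms = [a for a in arms if a.projected_loss() <= best + margin]
        if len(arms) == 1:
            break
    return min(arms, key=lambda a: a.losses[-1])

random.seed(0)
winner = flash_like_search([Arm("gbdt"), Arm("mlp"), Arm("linear")])
print("selected:", winner.name)
```

In this sketch, the margin controls the tradeoff the abstract analyzes: a tighter margin eliminates algorithms sooner and wastes less training data, at the risk of discarding an algorithm whose loss would eventually overtake the leader's.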