Efforts to reduce social bias in machine learning have increased in the past several years. As data privacy concerns grow, techniques for training private, debiased machine learning models become increasingly important. Federated Learning (FL) has emerged as a popular privacy-preserving machine learning strategy. However, because FL does not provide complete access to training data, it introduces a unique set of difficulties for bias mitigation that have yet to be explored. In this paper, we examine these difficulties and how they affect the bias measured in federated learning models.