Machine learning offers promising capabilities for improving administrative procedures. At the same time, adequately training models with traditional learning techniques requires collecting and storing sufficient training data in a central place. Unfortunately, due to legislative and jurisdictional constraints, centrally available data is scarce, and training a model becomes infeasible. Against this backdrop, federated machine learning, a technique for collaboratively training models without transferring data to a centralized location, has recently been proposed. With each government entity keeping its data private, applications that were previously impossible can now become a reality. In this paper, we demonstrate that accountability for the federated machine learning process is paramount to fully overcoming legislative and jurisdictional constraints. In particular, accountability ensures that all government entities' data are adequately included in the model and that evidence of fairness and reproducibility is curated to establish trustworthiness. We also present an analysis framework suitable for governmental scenarios and illustrate its application to online citizen participation scenarios. We discuss our findings in terms of engineering and management implications: feasibility evaluation, general architecture, the actors involved, and verifiable claims for trustworthy machine learning.