Due to the rapid growth of data and computational resources, distributed optimization has become an active research area in recent years. While first-order methods seem to dominate the field, second-order methods are nevertheless attractive, as they potentially require fewer communication rounds to converge. However, significant drawbacks impede their wide adoption, such as the computation and communication of a large Hessian matrix. In this paper we present a new algorithm for distributed training of generalized linear models that requires only the computation of diagonal blocks of the Hessian matrix on the individual workers. To deal with this approximate information, we propose an adaptive approach that, akin to trust-region methods, dynamically adapts the auxiliary model to compensate for modeling errors. We provide theoretical rates of convergence for a wide class of problems, including L1-regularized objectives. We also demonstrate that our approach achieves state-of-the-art results on multiple large benchmark datasets.
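To make the "diagonal blocks of the Hessian" idea concrete, the following is a minimal illustrative sketch (not the paper's algorithm) for a column-partitioned generalized linear model with logistic loss: since the full Hessian is X^T diag(f''(Xw)) X, a worker holding only its feature block X_k and the shared linear predictor v = Xw can form its own diagonal block X_k^T diag(f''(v)) X_k locally, without ever materializing the full Hessian. The function names and the two-worker partition below are assumptions made for illustration.

```python
import numpy as np

def logistic_curvature(v, y):
    """Second derivative of the logistic loss per sample, f''_i(v_i), labels y in {-1, +1}."""
    p = 1.0 / (1.0 + np.exp(-y * v))
    return p * (1.0 - p)

def local_hessian_block(X_k, v, y):
    """Diagonal Hessian block for worker k's feature partition X_k (n x d_k).

    Computed from the local feature block and the shared predictor v = X w only;
    the full d x d Hessian is never formed.
    """
    d = logistic_curvature(v, y)           # per-sample curvature, shape (n,)
    return X_k.T @ (d[:, None] * X_k)      # d_k x d_k local block

# Toy usage: two workers, each owning half of the features (hypothetical setup).
rng = np.random.default_rng(0)
n, d = 100, 10
X = rng.standard_normal((n, d))
y = rng.choice([-1.0, 1.0], size=n)
w = np.zeros(d)
v = X @ w                                  # shared linear predictor, communicated instead of the Hessian
H_block_1 = local_hessian_block(X[:, : d // 2], v, y)
H_block_2 = local_hessian_block(X[:, d // 2 :], v, y)
```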