Federated learning (FL) is a distributed machine learning framework that enables collaborative training across multiple parties while preserving data privacy. Practical adaptation of XGBoost, a state-of-the-art tree boosting framework, to FL remains nascent due to the high cost incurred by conventional privacy-preserving methods. This limitation can be attributed to the lack of a formal analytical model on which privacy methods tailored to federated XGBoost could be built. To this end, we propose a novel formulation, termed the splitting matrix, that mathematically characterizes the role of the passive party (PP) in federated XGBoost, a role that has been neglected in the literature. This formulation enables our novel adoption of a secure matrix multiplication protocol within federated XGBoost, yielding FedXGBoost, a framework for secure XGBoost in the federated setting with lossless accuracy and negligible overhead. Extensive experiments confirm our method's favorable performance.
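To make the splitting-matrix idea concrete, the following is an illustrative sketch (not the paper's implementation): a candidate split can be encoded as a binary indicator row over instances, so that the left-child gradient sums needed for XGBoost's split scoring become a plain matrix-vector product, which is the form a secure matrix multiplication protocol can evaluate without revealing the PP's partition. The function name and toy values below are hypothetical.

```python
def left_gradient_sums(splitting_matrix, gradients):
    """Sum of gradients routed to the left child under each candidate split.

    splitting_matrix[k][i] == 1 if instance i goes left under candidate
    split k, else 0. The result is the matrix-vector product S @ g.
    """
    return [sum(s_ki * g_i for s_ki, g_i in zip(row, gradients))
            for row in splitting_matrix]

# Toy example: 3 candidate splits over 4 instances.
S = [
    [1, 1, 0, 0],  # split 1: instances 0, 1 go left
    [1, 0, 1, 0],  # split 2: instances 0, 2 go left
    [0, 0, 1, 1],  # split 3: instances 2, 3 go left
]
# First-order gradients, held by the party that owns the labels.
g = [0.5, -0.25, 0.125, 0.25]
print(left_gradient_sums(S, g))  # -> [0.25, 0.625, 0.375]
```

In an actual federated deployment, S would be held by the PP and g by the label-holding party; the point of the formulation is that only the product S @ g, not S or g themselves, needs to be exchanged.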