In regression analysis, outliers in the data can bias the learned function and inflate prediction error. In this paper we derive an empirically estimable bound on the regression error based on a Euclidean minimum spanning tree generated from the data. Using this bound as motivation, we propose an iterative approach to removing data with noisy responses from the training set. We evaluate the algorithm in experiments with real-world pathological speech (speech from individuals with neurogenic disorders). Comparative results show that removing noisy training examples with the proposed approach yields better predictive performance on out-of-sample data.
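To make the idea concrete, the following is a minimal sketch of what an EMST-based iterative trimming loop could look like. It is an illustrative assumption, not the paper's actual algorithm: the per-point noisiness score (mean absolute response difference across incident EMST edges), the drop fraction, the number of rounds, and all function names are hypothetical choices made for this sketch.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform
from sklearn.linear_model import LinearRegression


def emst_noise_scores(X, y):
    """Score each example's response noisiness via the Euclidean MST.

    Hypothetical scoring rule: points whose responses differ sharply
    from their MST neighbors in feature space get high scores.
    """
    # Pairwise Euclidean distances over feature vectors, then the MST.
    D = squareform(pdist(X))
    mst = minimum_spanning_tree(D).tocoo()

    scores = np.zeros(len(y))
    counts = np.zeros(len(y))
    for i, j in zip(mst.row, mst.col):
        diff = abs(y[i] - y[j])
        scores[i] += diff
        scores[j] += diff
        counts[i] += 1
        counts[j] += 1
    # Mean absolute response gap across each point's incident edges.
    return scores / np.maximum(counts, 1)


def iterative_trim(X, y, n_rounds=3, drop_frac=0.05):
    """Iteratively drop the highest-scoring (noisiest) examples, then refit."""
    keep = np.arange(len(y))
    for _ in range(n_rounds):
        s = emst_noise_scores(X[keep], y[keep])
        n_drop = max(1, int(drop_frac * len(keep)))
        keep = keep[np.argsort(s)[:-n_drop]]  # retain the lowest scores
    model = LinearRegression().fit(X[keep], y[keep])
    return model, keep
```

On synthetic data with a handful of corrupted responses, a loop like this tends to remove the corrupted points first, since their response gaps across MST edges dwarf those of clean neighbors in feature space.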