Actively monitoring machine learning models in production helps ensure prediction quality and enables the detection and remediation of unexpected or undesired conditions. In this paper, we describe (1) a framework for monitoring machine learning models and (2) its implementation for supply chain applications. We use our implementation to study drift in model predictions and model performance on three real data sets. We compare hypothesis-test and information-theoretic approaches to drift detection using the Kolmogorov-Smirnov distance and the Bhattacharyya coefficient. Results showed that model performance was stable over the evaluation period. Predictions showed statistically significant drift; however, these changes were not linked to changes in model error.
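The two drift measures named above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes prediction drift is measured by comparing a reference window of model outputs against a later window, using `scipy.stats.ks_2samp` for the Kolmogorov-Smirnov test and a shared-bin histogram estimate of the Bhattacharyya coefficient. The window sizes, bin count, and synthetic data are illustrative choices.

```python
import numpy as np
from scipy.stats import ks_2samp

def bhattacharyya_coefficient(a, b, bins=20):
    """Estimate the Bhattacharyya coefficient BC = sum_i sqrt(p_i * q_i)
    between two samples using histograms over a shared range.
    BC is 1 for identical distributions and 0 for disjoint ones."""
    lo = min(a.min(), b.min())
    hi = max(a.max(), b.max())
    p, _ = np.histogram(a, bins=bins, range=(lo, hi))
    q, _ = np.histogram(b, bins=bins, range=(lo, hi))
    p = p / p.sum()  # normalize counts to probabilities
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))

# Synthetic example: a reference window of predictions and a later,
# mean-shifted window standing in for drifted predictions.
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 1000)
drifted = rng.normal(0.5, 1.0, 1000)

# Hypothesis-test view: KS statistic with a p-value for significance.
ks_stat, p_value = ks_2samp(baseline, drifted)
# Information-theoretic view: overlap of the two distributions.
bc = bhattacharyya_coefficient(baseline, drifted)

print(f"KS statistic: {ks_stat:.3f} (p={p_value:.2e})")
print(f"Bhattacharyya coefficient: {bc:.3f}")
```

With a genuine mean shift and samples this large, the KS test rejects the null hypothesis while the Bhattacharyya coefficient stays close to, but below, 1, illustrating how the two views quantify the same drift differently.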