Many machine learning applications call for methods that recommend when to update a deployed model as the underlying distribution of incoming data drifts over time. While updating a model, i.e., re-training it and maintaining model versions, is necessary to keep predictions accurate and aligned with intended business outcomes, it is also a cost-intensive operation. Current approaches to change detection and versioning do not cover mechanisms for automatically detecting significant change based on feature importance, correlation, and semantic revisions occurring over time. We explore a modular approach to model versioning focused on use cases in the IT services industry. Our method detects and quantifies changes across new and existing data sets based on statistical as well as semantic feature comparison. We demonstrate the utility of our approach by implementing a change detection and model versioning service and leveraging it for a risk analytics model built for a global IT service provider.