AI is increasingly integral to many real-world tasks, and while ML tasks such as classification perform well given appropriate historical data, models require retraining to capture changing dynamics; otherwise, performance can degrade significantly. Retraining, however, can be costly, and obtaining labeled data that accurately reflects the current decision landscape is more challenging still. Advances in explainability have opened the possibility of allowing users to interact with interpretable explanations of ML predictions in order to inject modifications or constraints that more accurately reflect the current realities of the system. In this paper, we present a solution that leverages the predictive power of ML models while allowing the user to specify modifications to decision boundaries. Our interactive overlay approach achieves this goal without requiring model retraining, making it appropriate for systems that need to apply instant changes to their decision making. We demonstrate that user feedback rules can be layered over ML predictions to provide immediate changes, which in turn supports learning with less data.
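The overlay idea described above can be illustrated with a minimal sketch: user-defined rules are consulted first, and only when no rule fires does the frozen model's prediction apply. All names here (`Rule`, `OverlayClassifier`) are hypothetical illustrations, not an implementation from the paper.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    condition: Callable[[dict], bool]   # predicate over input features
    label: str                          # label to force when the condition holds

class OverlayClassifier:
    """Hypothetical sketch: layers user feedback rules over a frozen model."""

    def __init__(self, model: Callable[[dict], str]):
        self.model = model              # pretrained model, never retrained
        self.rules: list[Rule] = []     # user feedback, applied in order

    def add_rule(self, rule: Rule) -> None:
        self.rules.append(rule)         # takes effect immediately

    def predict(self, x: dict) -> str:
        for rule in self.rules:
            if rule.condition(x):
                return rule.label       # overlay rule overrides the model
        return self.model(x)            # otherwise fall back to the model
```

Because rules are checked at prediction time, a newly added rule changes decisions instantly, without touching the model's parameters.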