In real-world ML deployments, initial model development involves close analysis of model results and behaviour by a data scientist. Once deployed, however, models may need to be retrained on new data or updated to comply with new rules or regulations. This presents two challenges: first, how to communicate model changes after retraining, particularly when the data scientist's involvement is less in-depth than during initial development; and second, how to support model editing to incorporate new business logic. AIMEE (AI Model Explorer and Editor tool) is a tool created to address these challenges, providing interactive methods to edit rule sets, visualize changes to decision boundaries, and generate interpretable comparisons of model changes so that users see their feedback reflected in the updated model. We conducted an extensive user study with 25 participants to validate the approach. Our findings show that participants were able to correctly and effectively create rules that modify decision boundaries. Furthermore, when tasked with reporting their modifications to outside stakeholders, we found that the transparency of rules facilitated participants' ability to communicate both the ML model and the user-provided changes.