
AIMEE

AI model explorer and editor tool.

Overview

IBM Research continues to explore the area of Interactive AI with AIMEE (AI Model Explorer and Editor tool), a tool that gives users interactive methods to explore and change the decision boundaries of AI models using editable rules.

Human-centered AI systems deliver positive outcomes by putting human abilities at the forefront rather than replacing them with automation. Creating trust between users and AI systems can be challenging, yet it is fundamental to this approach. Facilitating ‘model transparency’, the way in which users can gain insight into and interpret the impact of different features on model predictions, is a way to encourage adoption of and trust in the system by its users.

We are excited to present our tool AIMEE (AI Model Explorer and Editor tool), which provides interactive methods to learn rules (transparent machine learning models) from historical data, allowing non-expert users to understand binary classification models and change them in a simple, explainable way.

Our research team collaborated with IBM’s Chief Analytics Office, a division within IBM dedicated to enhancing IBM’s business performance and solving complex strategic issues, over a period of several years to build machine learning models that solve real-world business problems. During this time the team came to realise the importance of allowing non-ML practitioners to truly understand the decision-making criteria of machine learning models. This empowers business users to trust the models, and to update and correct them to align with new policies without requiring new training data. Rules became a user-friendly mechanism for user-led changes, giving users a much better understanding of what to expect when making them. Our use of rule-based explainability methods then brought us to use cases where end users may want to use the rules as the model itself (a model surrogate composed of Boolean rules), in domains such as finance where transparency and governance are key.

Using interpretable rule-based model approximation for greater transparency

Building on our experiences and what we learned from that collaboration, we developed AIMEE (AI Model Explorer and Editor tool). AIMEE uses a low-code, rules-based approach to create an interpretable rule-based model as an approximation of the ML model, or model surrogate. The model surrogate can replace the original model, or simply give the end user a better understanding of the decision-making criteria their ML model has learnt. Users can understand the current behaviour of their model, make comparisons so that they know what they must change to influence that behaviour, and communicate these changes to stakeholders. This is achieved through three main features: rule-based model extraction, where rules representing the decision-making criteria of the model are learnt from data; visualization of the model’s decision boundaries; and rule editing. We also use rules to support human-interpretable comparisons of model changes so that users see their feedback reflected in the updated model, aiming to characterise exactly what behaviour has changed between model versions in an understandable way, with more context than summary statistics such as model accuracy alone.
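To make the surrogate idea concrete, the sketch below is an illustrative example of rule-based model extraction, not AIMEE’s actual implementation: it approximates a black-box classifier with a shallow decision tree trained on the black box’s own predictions and prints the resulting rules. AIMEE relies on Boolean rule induction algorithms rather than trees, and the dataset, model choices and depth limit here are assumptions made purely for illustration.

```python
# Minimal sketch of surrogate rule extraction (not AIMEE's algorithm):
# approximate a black-box classifier with a shallow decision tree and
# read human-readable rules off the tree.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=4, random_state=0)

black_box = RandomForestClassifier(random_state=0).fit(X, y)

# The surrogate is trained on the black box's own predictions, so its rules
# describe the learnt decision boundary rather than the raw training labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print("Surrogate fidelity:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(4)]))
```

The fidelity score indicates how faithfully the extracted rules reproduce the black box’s decisions on this data, which is the trade-off any surrogate approach has to manage.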

These methods are applicable to a wide range of Business Automation tasks, particularly those where the business context changes more frequently than the underlying data used to train ML models, for example through changes in policy or business rules. Human-in-the-loop mechanisms allow such model changes to be made transparently. Consider a mortgage approval task that screens applicants in an automated fashion using a machine learning model. If a regulatory change relating applicant income to loan amounts comes into force, the model will not directly reflect it, as no training data exists that captures the new constraint. A system such as AIMEE allows business analysts to enter such a change and obtain an updated model that accurately leverages both data on past decisions and the current conditions.
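As a purely hypothetical illustration of such an edit (the rule names, thresholds and the 3x income-to-loan cap below are ours, not AIMEE’s or any real lending policy), a ruleset edit can be thought of as adding a rule that overrides the learnt behaviour without any retraining:

```python
# Illustrative sketch only: features, thresholds and the 3x income cap are
# hypothetical and do not come from AIMEE or any real lending policy.
def base_rules(applicant):
    """Rules approximating the original model's decision boundary."""
    if applicant["credit_score"] >= 650 and applicant["income"] >= 40_000:
        return "approve"
    return "reject"

def edited_rules(applicant):
    """Same ruleset after a user-added policy edit: cap the loan at 3x income."""
    if applicant["loan_amount"] > 3 * applicant["income"]:
        return "reject"  # new regulatory constraint, no new training data needed
    return base_rules(applicant)

applicant = {"credit_score": 700, "income": 50_000, "loan_amount": 180_000}
print(base_rules(applicant))    # approve
print(edited_rules(applicant))  # reject: violates the new income-to-loan cap
```

Because the edit is expressed as a rule rather than as retrained weights, its effect on individual decisions can be inspected and communicated directly.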

These methodological advances were presented at top AI conferences last year (AAAI '21, IJCAI '21, NeurIPS demo '21). We conducted an in-depth user study with 25 participants to validate the approach. Our findings showed that participants intuitively understood rules as a way to represent the ML model and were able to correctly and effectively create rules and modify model decision boundaries. Furthermore, when tasked with reporting their modifications to outside stakeholders, we found that the transparency of rules helped participants communicate both the ML model and the user-provided changes, allowing them to find common ground with stakeholders through knowledge alignment. Participants found that rules helped them collaborate, made models explainable and saved time, validating the benefits of a rule-based approach in giving control to end users and facilitating communication with stakeholders.

Future work and challenges

Our team continues its work in this area, exploring how best to mitigate trade-offs between model accuracy and human-interpretability, and different methods of measuring interpretability. We are also interested in better understanding and measuring users’ perception of trust and confidence in the system, and how well they understand the impact of their changes.

Find out more

AIMEE was launched in June with the support of the Automation Decision Services (ADS) team as an experimental technical preview companion piece to ADS, part of IBM Cloud Pak for Business Automation. This feature supports important new functionality in the ADS decision pipeline, allowing users to use AIMEE to import data, learn rules and then export the ruleset as an executable model (PMML) into ADS or IBM Decision Manager Open Edition (formerly called Red Hat Decision Manager). Essential to this collaboration has been the IBM Research Paris team, who focus on rule induction algorithms and provide multi-algorithmic support for AIMEE.
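For readers who want to experiment with the PMML target format outside of AIMEE and ADS, the sketch below shows one possible stand-in route using the open-source sklearn2pmml package. This is our assumption of an alternative workflow, not the AIMEE export path, and it requires a local Java runtime.

```python
# Hedged sketch: one way to serialise a small tree-based ruleset to PMML
# outside AIMEE, using the open-source sklearn2pmml package. This is not
# the ADS/AIMEE export path, just an illustration of the target format.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn2pmml import sklearn2pmml
from sklearn2pmml.pipeline import PMMLPipeline

X, y = make_classification(n_samples=1000, n_features=4, random_state=0)

pipeline = PMMLPipeline([("rules", DecisionTreeClassifier(max_depth=3))])
pipeline.fit(X, y)

# Writes an executable PMML model that downstream decision engines can consume.
sklearn2pmml(pipeline, "ruleset.pmml")
```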
