Int. J. Hum. Comput. Stud.

Predicting the need for XAI from high-granularity interaction data


Abstract

Recent advances in Artificial Intelligence (AI) and Machine Learning (ML) have brought to light the need for explainability in multiple domains (e.g., healthcare, finance, justice, and recruiting). Explainability, or Explainable AI (XAI), can be defined as everything that makes AI more understandable to human beings. However, XAI features may vary according to the AI algorithm used. Beyond XAI features, different AI algorithms also vary in terms of speed, performance, and the costs associated with training and running models. Knowing when to choose the right algorithm for the task at hand is therefore fundamental in multiple AI systems, for instance AutoML and AutoAI. In this paper, we propose a method to analyze patterns of high-granularity user interface (UI) events (i.e., mouse, keyboard, and additional custom events triggered on the millisecond scale) to predict when users will interact with UI elements that provide explainability for the AI in place. In this context, this paper presents: (1) a user study involving 37 participants (7 in the pilot phase and 30 in the main experiment phase) in which people performed the task of reporting a bug using a text form associated with an AI data quality meter and its XAI UI element; and (2) an approach that models micro behavior using node2vec to predict when interaction with the XAI UI element will occur. The proposed approach uses a rich dataset (approximately 129k events) and combines node2vec with a Logistic Regression classifier. Results show an event-by-event prediction of interaction with the XAI UI element with an average F-score of 0.90 (σ = 0.06). We expect these results to encourage researchers in the realm of UI personalization to consider high-granularity interaction data when predicting the need for XAI while users interact with AI model outputs.
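
To make the described pipeline concrete, the sketch below illustrates one way to combine node2vec embeddings of UI-event transitions with a Logistic Regression classifier, as named in the abstract. It is a minimal, hypothetical example: the event names, session labels, graph construction, and feature aggregation (mean of event embeddings per session) are illustrative assumptions, not the paper's actual feature engineering or dataset.

```python
# Hypothetical sketch: embed UI-event transitions with node2vec, then use
# logistic regression to predict whether a session leads to interaction
# with the XAI UI element. Event names and labels are illustrative only.
import networkx as nx
import numpy as np
from node2vec import Node2Vec            # pip install node2vec
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Toy event log: (sequence of UI events, 1 if the user then opened the XAI element)
sessions = [
    (["mousemove", "mouseover_meter", "click_meter", "mousemove"], 1),
    (["keydown", "keydown", "mousemove", "click_submit"], 0),
    (["mousemove", "mouseover_meter", "mousemove", "keydown"], 1),
    (["keydown", "mousemove", "click_submit", "mousemove"], 0),
] * 25  # repeated so a train/test split has enough samples

# 1. Build a weighted, directed transition graph over event types.
graph = nx.DiGraph()
for events, _ in sessions:
    for src, dst in zip(events, events[1:]):
        w = graph[src][dst]["weight"] + 1 if graph.has_edge(src, dst) else 1
        graph.add_edge(src, dst, weight=w)

# 2. Learn event-type embeddings with node2vec random walks.
n2v = Node2Vec(graph, dimensions=16, walk_length=10, num_walks=50, workers=1)
model = n2v.fit(window=5, min_count=1)

# 3. Represent each session as the mean of its event embeddings.
X = np.array([np.mean([model.wv[e] for e in events], axis=0) for events, _ in sessions])
y = np.array([label for _, label in sessions])

# 4. Train and evaluate a Logistic Regression classifier.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("F-score:", f1_score(y_te, clf.predict(X_te)))
```

Note that the paper reports event-by-event prediction; this sketch aggregates at the session level purely for brevity, and a sliding window over the event stream would be a closer analogue.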