Publication
CHI 2024
Paper
The Who in XAI: How AI Background Shapes Perceptions of AI Explanations
Abstract
Explainability of AI systems is critical for users to take informed actions. Understanding who "opens" the black-box of AI is just as important as opening it. We conduct a mixed-methods study of how two different groups—people with and without AI background—perceive different types of AI explanations. Quantitatively, we share user perceptions along five dimensions. Qualitatively, we describe how AI background influences interpretations, elucidating the differences through the lenses of appropriation and cognitive heuristics. We find that (1) both groups showed unwarranted faith in numbers, though for different reasons, (2) each group found value in different explanations beyond their intended design, and (3) each group had different requirements for humanlike explanations. Carrying critical implications for XAI as a field, our findings showcase how AI-generated explanations can have negative consequences despite best intentions and how that could lead to harmful manipulation of trust. We propose design interventions to mitigate these risks.