Patterns · Review

Rapid Trust Calibration through Interpretable and Uncertainty-Aware AI

Abstract

Artificial intelligence (AI) systems hold great promise as decision-support tools, but we must be able to identify and understand their inevitable mistakes if they are to fulfill this potential. This is particularly true in domains where the decisions are high-stakes, such as law, medicine, and the military. In this Perspective, we describe the particular challenges for AI decision support posed in military coalition operations. These include having to deal with limited, low-quality data, which inevitably compromises AI performance. We suggest that these problems can be mitigated by taking steps that allow rapid trust calibration so that decision makers understand the AI system's limitations and likely failures and can calibrate their trust in its outputs appropriately. We propose that AI services can achieve this by being both interpretable and uncertainty-aware. Creating such AI systems poses various technical and human factors challenges. We review these challenges and recommend directions for future research.

This article is about artificial intelligence (AI) used to inform high-stakes decisions, such as those arising in legal, healthcare, or military contexts. Users must have an understanding of the capabilities and limitations of an AI system when making high-stakes decisions. Usually this requires the user to interact with the system and learn over time how it behaves in different circumstances. We propose that long-term interaction would not be necessary for an AI system with the properties of interpretability and uncertainty awareness. Interpretability makes clear what the system “knows” while uncertainty awareness reveals what the system does not “know.” This allows the user to rapidly calibrate their trust in the system's outputs, spotting flaws in its reasoning or seeing when it is unsure. We illustrate these concepts in the context of a military coalition operation, where decision makers may be using AI systems with which they are unfamiliar and which are operating in rapidly changing environments. We review current research in these areas, considering both technical and human factors challenges, and propose a framework for future work based on Lasswell's communication model.

We introduce the concept of rapid trust calibration for AI decision support, and propose how this can be achieved by building AI systems that are both interpretable and uncertainty-aware. We provide a literature review of these research areas and describe a military scenario illustrating the relevant concepts. We propose a framework inspired by Lasswell's communication model to structure future work in this area.
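To make the two properties concrete, the minimal sketch below (not taken from the paper; the dataset, model choice, and uncertainty measure are illustrative assumptions) shows one way a decision-support model could expose both an interpretability signal (global feature importances, indicating what the model "knows") and an uncertainty signal (predictive entropy derived from disagreement among ensemble members, indicating what it does not "know"), using scikit-learn.

```python
"""Illustrative sketch only: a model that reports feature importances
(interpretability) and per-example predictive entropy (uncertainty awareness).
The dataset, model, and thresholds are assumptions made for illustration."""
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0
)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Interpretability: which features the model relies on most.
top = np.argsort(model.feature_importances_)[::-1][:5]
for i in top:
    print(f"{data.feature_names[i]:<25s} importance={model.feature_importances_[i]:.3f}")

# Uncertainty awareness: predictive entropy of the mean class probabilities,
# computed from the spread of predictions across the individual trees.
tree_probs = np.stack([t.predict_proba(X_test) for t in model.estimators_])
mean_probs = tree_probs.mean(axis=0)
entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=1)

# Flag the least certain test cases for closer human scrutiny.
uncertain = np.argsort(entropy)[::-1][:3]
for i in uncertain:
    print(f"test example {i}: predicted={mean_probs[i].argmax()}, entropy={entropy[i]:.3f}")
```

In a decision-support setting, the two outputs serve the roles described above: the importance ranking lets a user check whether the model's reasoning rests on sensible evidence, while the entropy score lets them see when a particular output is unreliable and should be discounted, supporting rapid trust calibration without long-term interaction.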