Publication
CHI 2024
Workshop paper
Expedient Assistance and Consequential Misunderstanding: Envisioning an Operationalized Mutual Theory of Mind
Abstract
Design fictions allow us to prototype the future. They enable us to interrogate emerging or non-existent technologies and examine their implications. We present three design fictions that probe the potential consequences of operationalizing a mutual theory of mind (MToM) between human users and one (or more) AI agents. We use these fictions to explore many aspects of MToM, including how models of the other party are shaped through interaction, how discrepancies between these models lead to breakdowns, and how models of a human's knowledge and skills enable AI agents to act in their stead. We examine these aspects through two lenses: a utopian lens in which MToM enhances human-human interactions and leads to synergistic human-AI collaborations, and a dystopian lens in which a faulty or misaligned MToM leads to problematic outcomes. Our work provides an aspirational vision for human-centered MToM research while simultaneously warning of the consequences when MToM is implemented incorrectly.