Publication
AAMAS 2009
Conference paper

Improving adjustable autonomy strategies for time-critical domains

Abstract

As agents begin to perform complex tasks alongside humans as collaborative teammates, it becomes crucial that the resulting human-multiagent teams adapt to time-critical domains. In such domains, adjustable autonomy has proven useful by allowing for a dynamic transfer of control of decision making between humans and agents. However, existing adjustable autonomy algorithms commonly discretize time, which not only results in high algorithm runtimes but also translates into inaccurate transfer of control policies. In addition, existing techniques fail to address decision making inconsistencies often encountered in human-multiagent decision making. To address these limitations, we present a novel approach for Resolving Inconsistencies in Adjustable Autonomy in Continuous Time (RIAACT) that makes three contributions: First, we apply a continuous time planning paradigm to adjustable autonomy, resulting in high-accuracy transfer of control policies. Second, our new adjustable autonomy framework both models and plans for the resolving of inconsistencies between human and agent decisions. Third, we introduce a new model, the Interruptible Action Time-dependent Markov Decision Problem (IA-TMDP), which allows for actions to be interrupted at any point in continuous time. We show how to solve IA-TMDPs efficiently and leverage them to plan for the resolving of inconsistencies in RIAACT. Furthermore, these contributions have been realized and evaluated in a complex disaster response simulation system. Copyright © 2009, International Foundation for Autonomous Agents and Multiagent Systems.
