Publication
AMEPSRC 2014
Conference paper

Joint Navigation in Commander/Robot Teams: Dialog & Task Performance When Vision is Bandwidth-Limited

Abstract

The prospect of human commanders teaming with mobile robots "smart enough" to undertake joint exploratory tasks, especially tasks that neither commander nor robot could perform alone, requires novel methods of preparing and testing human-robot teams for these ventures prior to real-time operations. In this paper, we report work in progress that maintains the face validity of selected configurations of resources and people, as would be available in emergency circumstances. More specifically, from an off-site post, we ask human commanders (C) to perform an exploratory task in collaboration with a remotely located human robot-navigator (Rn) who controls the navigation of, but cannot see, the physical robot (R). We impose network bandwidth restrictions comparable to real circumstances in two mission scenarios by varying the availability of sensor, image, and video signals to Rn, in effect limiting the human Rn to functioning as an automation stand-in. To better understand the capabilities and language required in such configurations, we constructed multi-modal corpora of time-synced dialog, video, and LIDAR files recorded during task sessions. We can now examine commander/robot dialogs while replaying what C and Rn saw, to assess their task performance under these varied conditions.

Date

23 Aug 2014
