An optimization-based approach to dynamic visual context management
Abstract
We are building an intelligent multimodal conversation system that helps users explore large and complex data sets. To accommodate the diverse user queries that arise during a conversation, we automate the generation of system responses, including both spoken and visual output. In this paper, we focus on the problem of visual context management: dynamically updating an existing visual display to effectively incorporate the new information requested by subsequent user queries. Specifically, we develop an optimization-based approach to visual context management. Compared to existing approaches, which normally handle only predictable visual context updates, our work offers two unique contributions. First, we provide a general computational framework that can effectively manage a visual context in the diverse, unanticipated situations encountered during a user-system conversation; in particular, we optimize the satisfaction of both semantic and visual constraints, which are otherwise difficult to balance using simple heuristics. Second, we present an extensible representation model that uses feature-based metrics to uniformly define all constraints. We have applied our work to two different applications, and our evaluations demonstrate the promise of this approach.
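To make the balance between the two constraint types concrete, the update objective can be sketched as a weighted combination of constraint-satisfaction scores. The notation below (the candidate display D', the new query Q, the current display D, the per-constraint scores S_i and V_j, and the weights) is an illustrative assumption, not the paper's actual formulation:

\[
D^{*} \;=\; \arg\max_{D'} \Big( \sum_{i} w^{s}_{i}\, S_{i}(D', Q) \;+\; \sum_{j} w^{v}_{j}\, V_{j}(D', D) \Big), \qquad w^{s}_{i},\, w^{v}_{j} \ge 0,
\]

where S_i(D', Q) scores how well a candidate display D' satisfies the i-th semantic constraint imposed by the new query Q, V_j(D', D) scores how well D' preserves the j-th visual property of the current display D, and the weights trade the two sets of constraints off against each other.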