Publication
K-CAP 2007
Conference paper

Capturing a taxonomy of failures during automatic interpretation of questions posed in natural language

Abstract

An important problem in artificial intelligence is capturing, from natural language, formal representations that a reasoner can use to compute an answer. Many researchers have studied this problem by developing algorithms that address specific phenomena in natural language interpretation, but few have studied (or cataloged) the types of failures associated with it. Knowledge of these failures can help researchers by providing a road map of open research problems, and help practitioners by providing a checklist of issues to address in order to build systems that achieve good performance on this problem. In this paper, we present a study, conducted in the context of the Halo Project, cataloging the types of failures that occur when capturing knowledge from natural language. We identified the categories of failures by examining a corpus of questions posed by naïve users to a knowledge-based question answering system, and we empirically demonstrated the generality of our categorizations. We also describe available technologies that can address some of the failures we have identified. Copyright 2007 ACM.
