Representing and Reasoning with Defaults for Learning Agents
Abstract
The challenge we address is to create autonomous, inductively learning agents that exploit and modify a knowledge base. Our general approach, embodied in a continuing research program (joint with Stuart Russell), is declarative bias: the use of declarative knowledge to constrain the hypothesis space in inductive learning. In previous work, we have shown that many kinds of declarative bias can be represented relatively efficiently and derived from background knowledge. We begin by observing that the default, i.e., revisable, flavor of beliefs is crucial in applications, especially if competence is to improve incrementally and if information is to be acquired through communication, language, and sensory perception in integrated agents. We argue that much of human learning consists of "learning in the small" and amounts to nothing more nor less than acquiring new plausible premise beliefs. The representation of defaults and plausible knowledge should therefore be a central question for researchers aiming to design sophisticated learning agents that exploit a knowledge base. We show that such applications pose several representational requirements that are unfamiliar to most of the machine learning community, and whose combination has not previously been addressed by the knowledge representation community. These include: prioritization-type precedence between defaults; updating with new defaults, not just new for-sure beliefs; explicit reasoning about the adoption of defaults and of precedence between defaults; and integration of defaults with probabilistic and statistical beliefs. We show how, for the first time, to achieve all of these requirements, at least partially, in one declarative formalism: Defeasible Axiomatized Policy Circumscription, a generalized variant of circumscription.
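For readers unfamiliar with circumscription, the following is a standard background sketch of the base formalism that Defeasible Axiomatized Policy Circumscription generalizes; it is McCarthy's definition, not the paper's own generalized variant, whose formal details the abstract does not give. Circumscribing a predicate $P$ in a theory $T(P,Z)$, with $Z$ the predicates allowed to vary, yields the second-order sentence

\[
\mathrm{Circ}(T;\, P;\, Z) \;\equiv\; T(P,Z) \;\wedge\; \neg\,\exists\, p, z \;\bigl[\, T(p,z) \;\wedge\; p < P \,\bigr],
\]

where $p \le P$ abbreviates $\forall x\,[\, p(x) \rightarrow P(x) \,]$ and $p < P$ abbreviates $p \le P \wedge \neg (P \le p)$, i.e., the extension of $P$ is made as small as the theory allows. Prioritized circumscription, written $\mathrm{Circ}(T;\, P_1 > P_2;\, Z)$, minimizes $P_1$ at strictly higher priority than $P_2$; this is the standard device for expressing precedence between defaults that the abstract's "prioritization-type precedence" requirement builds on.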