Predicting knowledge in an ontology stream
Freddy Lécué, Jeff Z. Pan
IJCAI 2013
This paper describes a general approach for automatically programming a behavior-based robot. New behaviors are learned by trial and error, using a performance feedback function as reinforcement. Two algorithms for behavior learning are described that combine Q learning, a well-known scheme for propagating reinforcement values temporally across actions, with statistical clustering and Hamming distance, two ways of propagating reinforcement values spatially across states. A real behavior-based robot called OBELIX is described that learns several component behaviors in an example task involving pushing boxes. A simulator for the box-pushing task is also used to gather data on the learning techniques. A detailed experimental study using the real robot and the simulator suggests two conclusions: (1) the learning techniques are able to learn the individual behaviors, sometimes outperforming a hand-coded program; and (2) using a behavior-based architecture speeds up reinforcement learning by converting the problem of learning a complex task into that of learning a simpler set of special-purpose reactive subtasks.
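The abstract names the core mechanics: a temporal Q-learning backup combined with Hamming-distance generalization, which spreads each update across nearby states. The sketch below illustrates that combination under stated assumptions: bit-vector states, a shared update for neighbors within a fixed radius, and illustrative parameter values. It is a generic rendering of the technique the abstract describes, not the paper's actual algorithm; every name and constant here is an assumption.

```python
from collections import defaultdict

ALPHA = 0.1   # learning rate (assumed value)
GAMMA = 0.9   # discount factor (assumed value)
RADIUS = 1    # states within this Hamming distance share updates (assumed rule)

Q = defaultdict(float)  # Q[(state, action)] -> estimated value

def hamming(a, b):
    """Count differing components of two equal-length state tuples."""
    return sum(x != y for x, y in zip(a, b))

def q_update(seen_states, state, action, reward, next_state, actions):
    """Temporal step: standard Q-learning backup toward the bootstrapped
    target. Spatial step: apply the same correction to every previously
    seen state within RADIUS of the current state."""
    best_next = max(Q[(next_state, a)] for a in actions)
    delta = reward + GAMMA * best_next - Q[(state, action)]
    for s in seen_states:
        if hamming(s, state) <= RADIUS:
            Q[(s, action)] += ALPHA * delta

# Toy usage: 3-bit sensor states, two actions.
actions = ["push", "turn"]
seen = {(0, 0, 0), (0, 0, 1), (1, 1, 1)}
q_update(seen, (0, 0, 0), "push", reward=1.0,
         next_state=(0, 0, 1), actions=actions)
print(Q[((0, 0, 0), "push")], Q[((0, 0, 1), "push")])  # neighbor also updated
```

In this sketch the state (0, 0, 1) receives the same correction as the visited state (0, 0, 0) because they differ in one bit, while (1, 1, 1) is untouched; this is one plausible way to propagate reinforcement spatially across states, as the abstract puts it.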