ICML 1992
Conference paper

Enhancing Transfer in Reinforcement Learning by Building Stochastic Models of Robot Actions


Recent work has shown that reinforcement learning is a viable method of automatically programming behavior-based robots. However, one weakness with this approach is that the learning typically does not transfer across tasks. Furthermore, there is unnecessary duplication of learning effort because information is not shared among the various behaviors. This paper describes an alternative technique based on action models that attempts to maximize transfer within and across tasks. Action models are inferred using a statistical clustering technique from instances generated by a robot exploring its task environment. Task-specific knowledge is encoded using a reward function for each subtask. A multi-step lookahead strategy using the reward functions as static evaluators is employed to select the most appropriate action. Experiments on simulated and real robots show that useful action models can be learned from a 12 by 12 scrolling certainty grid sensor. Furthermore, on the simulator these models are sufficiently rich to enable significant transfer within and across two tasks, box pushing and wall following.
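The action-selection scheme described in the abstract — a learned stochastic action model queried by a multi-step lookahead search, with per-subtask reward functions serving as static evaluators — can be illustrated with a small sketch. Everything below (the toy one-dimensional world, the hand-coded model, the depth of 3) is an assumption for illustration; the paper itself infers the action models by statistically clustering experience from a certainty-grid sensor.

```python
# Hypothetical sketch of lookahead action selection with a stochastic
# action model. The toy world and all names are illustrative assumptions,
# not taken from the paper.

STATES = range(11)          # toy 1-D world: positions 0..10
ACTIONS = ("left", "right")

def action_model(state, action):
    """Stochastic action model: list of (next_state, probability) pairs.
    Hand-coded here; the paper learns such models from robot experience."""
    intended = min(state + 1, 10) if action == "right" else max(state - 1, 0)
    # 80% intended effect, 20% no movement (actuator noise).
    return [(intended, 0.8), (state, 0.2)]

def reward(state):
    """Task-specific reward function, used as a static evaluator.
    Here the goal (e.g. box contact) is reached at state 10."""
    return 1.0 if state == 10 else 0.0

def lookahead_value(state, depth):
    """Expected evaluator value of the best action sequence of length `depth`."""
    if depth == 0:
        return reward(state)
    return max(
        sum(p * lookahead_value(s2, depth - 1)
            for s2, p in action_model(state, a))
        for a in ACTIONS)

def select_action(state, depth=3):
    """Pick the action maximizing expected lookahead value."""
    return max(
        ACTIONS,
        key=lambda a: sum(p * lookahead_value(s2, depth - 1)
                          for s2, p in action_model(state, a)))
```

Because task knowledge lives entirely in `reward`, the same learned `action_model` can be reused for a different subtask (say, wall following) by swapping in a different evaluator, which is the source of the transfer the paper reports.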


01 Jul 1992

