A Counterexample to Theorems of Cox and Fine
Joseph Y. Halpern
AAAI 1996
In this paper, we propose a novel adaptive step-size approach for policy gradient reinforcement learning. A new metric for policy gradients is defined that measures the effect of parameter changes on the average reward. Because the metric directly measures effects on the average reward, the resulting policy gradient learning employs an adaptive step-size strategy that can avoid stagnation caused by the complex structure of the average reward function over the policy parameters. Two algorithms are derived with this metric as variants of the ordinary and natural policy gradients. Their properties are compared with previously proposed policy gradients through numerical experiments on simple but non-trivial 3-state Markov decision processes (MDPs). We also show performance improvements over previous methods in online learning with more challenging 20-state MDPs. © 2010 Takamitsu Matsubara, Tetsuro Morimura, and Jun Morimoto.
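The abstract describes the method only at a high level. As a rough illustration, and not the authors' algorithm, the following Python sketch applies an ordinary policy gradient to a hypothetical 3-state MDP and adapts the step size from a first-order model of its effect on the average reward: the update magnitude is chosen so that the predicted change in average reward per step equals a fixed target. The MDP, the finite-difference gradient, and the `target_gain` constant are all illustrative assumptions.

```python
import numpy as np

# A minimal sketch, assuming a hypothetical 3-state, 2-action MDP.
# P[a, s, s'] = transition probability; R[s, a] = expected immediate reward.
rng = np.random.default_rng(0)
n_states, n_actions = 3, 2
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
R = rng.normal(size=(n_states, n_actions))

def policy(theta):
    """Tabular softmax policy pi[s, a] parameterised by theta[s, a]."""
    z = np.exp(theta - theta.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)

def average_reward(theta):
    """Exact average reward of the Markov chain induced by the policy."""
    pi = policy(theta)
    # State transition matrix under the policy: sum_a pi(a|s) P(s'|s, a).
    Ppi = np.einsum('sa,ast->st', pi, P)
    # Stationary distribution d: solve d Ppi = d with sum(d) = 1.
    A = np.vstack([Ppi.T - np.eye(n_states), np.ones((1, n_states))])
    b = np.append(np.zeros(n_states), 1.0)
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return float(d @ (pi * R).sum(axis=1))

def finite_diff_grad(f, theta, h=1e-5):
    """Finite-difference gradient; a stand-in for an analytic policy gradient."""
    g = np.zeros_like(theta)
    for idx in np.ndindex(*theta.shape):
        e = np.zeros_like(theta)
        e[idx] = h
        g[idx] = (f(theta + e) - f(theta - e)) / (2.0 * h)
    return g

theta = np.zeros((n_states, n_actions))
target_gain = 1e-3  # assumed: desired first-order reward increase per update
for step in range(300):
    g = finite_diff_grad(average_reward, theta)
    # Adaptive step size: an update alpha * g changes the average reward by
    # roughly alpha * ||g||^2 to first order; pick alpha to hit target_gain.
    alpha = target_gain / max(np.sum(g * g), 1e-12)
    alpha = min(alpha, 10.0)  # cap so the linear model stays trustworthy
    theta += alpha * g
    if step % 100 == 0:
        print(f"step {step:3d}  average reward = {average_reward(theta):.4f}")
```

One design note: normalising by the squared gradient norm makes the step size shrink where the average reward is steep and grow where it is flat, which is one simple way to avoid the stagnant phases the abstract mentions.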
Bemali Wickramanayake, Zhipeng He, et al.
Knowledge-Based Systems
John R. Kender, Rick Kjeldsen
IEEE Transactions on Pattern Analysis and Machine Intelligence
Guojing Cong, David A. Bader
Journal of Parallel and Distributed Computing