Publication
NeurIPS 2019
Conference paper

A family of robust stochastic operators for reinforcement learning

Abstract

We consider a new family of stochastic operators for reinforcement learning that seeks to alleviate the negative effects of, and be more robust to, approximation or estimation errors. Theoretical results are established showing that our family of operators preserves optimality and increases the action gap in a stochastic sense. Empirical results illustrate the strong benefits of our robust stochastic operators, which significantly outperform the classical Bellman operator and recently proposed operators.
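
The abstract does not spell out the operator family itself. As a rough, hypothetical sketch of the underlying idea, the NumPy code below applies a Bellman-style backup that subtracts a randomly weighted penalty proportional to the action gap (the difference between a state's value and the Q-value of the chosen action). The operator form, the uniform draw of beta, and the toy MDP are illustrative assumptions, not the paper's exact construction.

    import numpy as np

    def robust_stochastic_backup(Q, R, P, gamma, rng):
        # One sweep of a Bellman-style backup with a randomized action-gap term.
        # Hypothetical operator (not necessarily the paper's definition):
        #   (T_beta Q)(x, a) = R[x, a] + gamma * sum_y P[x, a, y] * max_b Q[y, b]
        #                      - beta * (max_b Q[x, b] - Q[x, a]),
        # with beta drawn uniformly from [0, 1] for each state-action pair.
        n_states, n_actions = Q.shape
        V = Q.max(axis=1)                       # current state values
        Q_next = np.empty_like(Q)
        for x in range(n_states):
            for a in range(n_actions):
                beta = rng.uniform(0.0, 1.0)    # random devaluation coefficient
                backup = R[x, a] + gamma * P[x, a] @ V
                Q_next[x, a] = backup - beta * (V[x] - Q[x, a])
        return Q_next

    # Toy 2-state, 2-action MDP with random rewards and transition kernel.
    rng = np.random.default_rng(0)
    n_states, n_actions = 2, 2
    R = rng.uniform(size=(n_states, n_actions))
    P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[x, a] is a distribution over next states
    Q = np.zeros((n_states, n_actions))
    for _ in range(200):
        Q = robust_stochastic_backup(Q, R, P, gamma=0.9, rng=rng)
    print("greedy policy:", Q.argmax(axis=1))
    print("action gaps:", Q.max(axis=1) - Q.min(axis=1))

In this sketch the penalty V[x] - Q[x, a] is zero for the maximizing action and nonnegative otherwise, so the backup leaves the greedy choice untouched while pushing suboptimal Q-values down, which is one way an operator can enlarge the action gap without changing the optimal policy.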

Date

08 Dec 2019
