Publication
ICTAI 2022
Short paper
A game theoretic approach to curriculum reinforcement learning
Abstract
Current automated curriculum approaches in reinforcement learning enable continual learning by updating the environment. The update is often treated as an optimisation problem, with the teacher agent modifying the environment to optimise the student's learning. This work proposes an alternative framing of the problem using a game-theoretic formulation: learning is defined as a leader-follower cooperative game. This formulation provides an approach to multi-agent curriculum learning that improves agent learning and offers additional insights into the game's equilibria. We observe that, under this framework, the agents converge faster to the desired outcomes than the baseline reinforcement learning agent.
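To make the leader-follower framing concrete, the sketch below is a minimal, illustrative teacher-student curriculum loop, not the paper's algorithm: the teacher (leader) commits to a task difficulty and is rewarded by the student's learning progress, while the student (follower) best-responds by training on the chosen task. The corridor environment, the candidate difficulties, and all hyperparameters are assumptions made purely for illustration.

```python
import numpy as np

# Toy leader-follower curriculum loop (illustrative only; not the paper's algorithm).
# The teacher (leader) commits to a task difficulty each round; the student (follower)
# trains on that task and reports its return. The teacher treats difficulty selection
# as a bandit problem rewarded by the student's learning progress.

rng = np.random.default_rng(0)

N_STATES = 12                      # 1-D corridor; the goal sits at index `difficulty`
DIFFICULTIES = [3, 6, 9, 11]       # candidate curricula (goal distances) -- assumed values
ACTIONS = [-1, +1]                 # step left / right

def run_episode(q_table, goal, eps=0.1, alpha=0.5, gamma=0.95, max_steps=50):
    """One Q-learning episode on the corridor task with the given goal index."""
    s, total = 0, 0.0
    for _ in range(max_steps):
        a = rng.integers(2) if rng.random() < eps else int(np.argmax(q_table[s]))
        s_next = int(np.clip(s + ACTIONS[a], 0, N_STATES - 1))
        r = 1.0 if s_next == goal else -0.01
        q_table[s, a] += alpha * (r + gamma * q_table[s_next].max() - q_table[s, a])
        s, total = s_next, total + r
        if s == goal:
            break
    return total

q = np.zeros((N_STATES, len(ACTIONS)))   # student's value estimates, shared across tasks
progress = np.zeros(len(DIFFICULTIES))   # teacher's estimate of learning progress per task
last_return = np.zeros(len(DIFFICULTIES))

for rnd in range(200):
    # Leader move: pick the difficulty with the highest estimated learning progress
    # (epsilon-greedy so every curriculum keeps being explored).
    k = rng.integers(len(DIFFICULTIES)) if rng.random() < 0.2 else int(np.argmax(progress))
    goal = DIFFICULTIES[k]

    # Follower move: the student best-responds by training on the chosen task.
    ret = np.mean([run_episode(q, goal) for _ in range(5)])

    # Teacher reward = change in the student's return on that task (learning progress).
    progress[k] = 0.8 * progress[k] + 0.2 * abs(ret - last_return[k])
    last_return[k] = ret

print("final return on hardest task:",
      np.mean([run_episode(q, DIFFICULTIES[-1], eps=0.0) for _ in range(20)]))
```

The cooperative structure shows up in the reward coupling: the teacher is paid only through the student's improvement, so the leader's commitment (the curriculum) and the follower's best response (training) are aligned rather than adversarial.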