Publication
AAMAS 2019
Conference paper

Collaborative Reinforcement Learning Model for Sustainability of Cooperation in Sequential Social Dilemmas

Abstract

Learning how cooperation emerges in conflicting scenarios such as social dilemmas is a central research problem, and many reinforcement learning (RL) based models exist in the literature to address it. However, the notoriously slow learning of RL-based models, coupled with large state spaces, has significant negative effects, especially in repeated social dilemma settings such as the repeated Public Goods Game (PGG), making such models ineffective at explaining the sustainability of cooperation. In this paper, we address this challenge by augmenting RL-based models with a notion of collaboration among agents, motivated by the fact that humans learn not only from their own actions but also by following the actions of other agents who are themselves continuously learning about the environment. In particular, we propose a novel model, which we refer to as Collaborative Reinforcement Learning (CRL), in which collaboration is defined as the ability of an agent to fully follow another agent's actions/decisions; this is also termed social learning. The proposed CRL model significantly speeds up individual learning, which in turn has a large effect on collective behavior compared to RL-only models, and thereby effectively explains the sustainability of cooperation in repeated PGG settings. We also extend the CRL model to PGGs across generations, where agents die out and new agents are born following a birth-death process.
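
To make the core mechanism concrete, below is a minimal Python sketch of CRL-style agents in a repeated PGG. The paper provides no code, so everything here is an assumption for illustration: tabular (stateless) Q-learning, binary contribute/defect actions, and a fixed probability of fully copying another agent's most recent action as the collaboration (social learning) step. Names such as CRLAgent and imitation_prob are hypothetical, not from the paper.

```python
# Illustrative sketch only: tabular Q-learners in a repeated Public Goods
# Game, where each agent sometimes fully follows another agent's last
# action (the "collaboration" / social-learning step). All hyperparameters
# and names (CRLAgent, imitation_prob, ...) are assumptions.
import random

N_AGENTS = 5          # group size
MULTIPLIER = 3.0      # public-good multiplication factor r
ENDOWMENT = 1.0       # per-round endowment of each agent
ACTIONS = (0, 1)      # 0 = defect (keep endowment), 1 = contribute

class CRLAgent:
    def __init__(self, alpha=0.1, epsilon=0.1, imitation_prob=0.3):
        self.q = {a: 0.0 for a in ACTIONS}   # stateless Q-values
        self.alpha = alpha                   # learning rate
        self.epsilon = epsilon               # exploration rate
        self.imitation_prob = imitation_prob # chance of social learning
        self.last_action = random.choice(ACTIONS)

    def act(self, others_last_actions):
        # Collaboration step: with some probability, fully follow the
        # observed last action of a randomly chosen other agent.
        if others_last_actions and random.random() < self.imitation_prob:
            self.last_action = random.choice(others_last_actions)
        elif random.random() < self.epsilon:
            self.last_action = random.choice(ACTIONS)  # explore
        else:
            self.last_action = max(self.q, key=self.q.get)  # exploit
        return self.last_action

    def learn(self, action, reward):
        # One-step Q-value update (no next-state term: stateless setting).
        self.q[action] += self.alpha * (reward - self.q[action])

def play_round(agents):
    last = [a.last_action for a in agents]
    actions = [a.act(last[:i] + last[i+1:]) for i, a in enumerate(agents)]
    pot = MULTIPLIER * sum(actions) * ENDOWMENT
    share = pot / len(agents)
    for agent, act in zip(agents, actions):
        # Defectors keep their endowment in addition to the public share.
        payoff = share + (0.0 if act else ENDOWMENT)
        agent.learn(act, payoff)
    return sum(actions) / len(agents)  # fraction of cooperators this round

if __name__ == "__main__":
    agents = [CRLAgent() for _ in range(N_AGENTS)]
    for t in range(1, 2001):
        coop = play_round(agents)
        if t % 500 == 0:
            print(f"round {t}: cooperation level = {coop:.2f}")
```

The key design point in this sketch is that imitation bypasses the Q-values entirely, so a few agents' decisions can propagate through the group far faster than individual trial-and-error updates would allow; this is one plausible reading of how collaboration accelerates learning in the CRL model.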