Robert Farrell, Rajarshi Das, et al.
AAAI-SS 2010
Large language models (LLMs) are increasingly deployed in collaborative and decision-making settings, raising questions about their capacity for cooperation and trust. In this paper, we investigate LLM behavior through the lens of game theory, focusing on the iterated prisoner’s dilemma (IPD) and trust games. We conduct tournaments with both small and large open-source LLMs, comparing their strategies against classic baselines and human play. Our findings show that larger LLMs tend to be more strategic and adaptive in IPD, whereas smaller models display more exploratory patterns, particularly in trust games. Notably, models that maximize rewards in IPD are not necessarily the most cooperative, and correlations with human behavior vary across model families. These results suggest that cooperative tendencies in LLMs are context-dependent and may transfer across games in nuanced ways, offering insights into the design of AI systems intended for human collaboration.
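The tournament setup the abstract describes can be illustrated with a minimal iterated prisoner's dilemma simulation. This is a hypothetical sketch, not the paper's code: it uses the standard Axelrod payoffs (T=5, R=3, P=1, S=0) and two classic baseline strategies of the kind LLM players would be compared against; the function and strategy names are illustrative.

```python
# Payoffs for (my_move, their_move); "C" = cooperate, "D" = defect.
# Standard values: temptation 5, mutual cooperation 3, mutual defection 1, sucker 0.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    """Unconditional defection baseline."""
    return "D"

def play_ipd(strat_a, strat_b, rounds=10):
    """Run one iterated match and return the two cumulative scores."""
    hist_a, hist_b = [], []  # moves each player has observed from the opponent
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_a)
        b = strat_b(hist_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b
```

Over 10 rounds, tit-for-tat loses only the opening round to an unconditional defector and then both settle into mutual defection, which is the kind of reward-versus-cooperation gap the abstract notes: the higher-scoring strategy here is not the more cooperative one. An LLM player slots into the same interface by mapping the opponent history into a prompt and parsing the model's "C"/"D" reply.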
Chen-chia Chang, Wan-hsuan Lin, et al.
ICML 2025
Daniel Karl I. Weidele, Hendrik Strobelt, et al.
SysML 2019
Gang Liu, Michael Sun, et al.
ICLR 2025