Publication
ICASSP 2024
Conference paper

Leveraging Visual Handicaps for Text-Based Reinforcement Learning


Abstract

We introduce VisualHandicaps, a novel benchmark environment for the systematic analysis of interactive text-based reinforcement learning (TBRL) agents through the use of visual handicaps. Unlike previous TBRL environments, which focus on providing additional textual information to measure an agent's understanding of sequential natural language, VisualHandicaps seeks to improve the generalization ability of RL agents by varying the level of detail in maps and textual information, allowing for the study and demonstration of robust planning and self-localization. We provide automatically generated variations and difficulty levels in our environment and show that an agent using our systematic visual handicaps along with textual observations generally outperforms previous methods that use only textual handicaps, both in success rate and in the number of steps required to reach the goal. We also provide a detailed analysis of each handicap, which we believe yields important findings for driving future improvements in RL agents for text-based applications.
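To illustrate the kind of interface the abstract describes, the sketch below shows a hypothetical gym-style environment in which each textual observation is paired with a partially revealed map whose detail depends on a difficulty level. This is a minimal illustration under stated assumptions; all class, method, and field names (ToyVisualHandicapEnv, reveal, "text"/"map" keys) are invented for clarity and are not the actual VisualHandicaps API.

# Illustrative sketch only: hypothetical names, not the actual benchmark API.
import random
import numpy as np


class ToyVisualHandicapEnv:
    """Toy text-based environment pairing each textual observation with a
    partially revealed map whose detail depends on the difficulty level."""

    def __init__(self, difficulty: str = "easy", map_size: int = 8):
        self.map_size = map_size
        # Fraction of map cells revealed to the agent (more detail = easier).
        self.reveal = {"easy": 1.0, "medium": 0.5, "hard": 0.1}[difficulty]

    def reset(self, seed: int = 0):
        random.seed(seed)
        np.random.seed(seed)
        self.agent_pos = [0, 0]
        self.goal_pos = [self.map_size - 1, self.map_size - 1]
        return self._observation()

    def step(self, action: str):
        moves = {"north": (-1, 0), "south": (1, 0), "west": (0, -1), "east": (0, 1)}
        dr, dc = moves.get(action, (0, 0))
        self.agent_pos[0] = min(max(self.agent_pos[0] + dr, 0), self.map_size - 1)
        self.agent_pos[1] = min(max(self.agent_pos[1] + dc, 0), self.map_size - 1)
        done = self.agent_pos == self.goal_pos
        reward = 1.0 if done else 0.0
        return self._observation(), reward, done

    def _observation(self):
        # Textual channel: a natural-language description of the current state.
        text = f"You are in a room at {tuple(self.agent_pos)}. Exits lead north, south, east and west."
        # Visual channel (the "handicap"): a map with cells hidden according to difficulty.
        grid = np.zeros((self.map_size, self.map_size), dtype=np.int8)
        grid[tuple(self.agent_pos)] = 1
        grid[tuple(self.goal_pos)] = 2
        mask = np.random.rand(self.map_size, self.map_size) < self.reveal
        visual_handicap = np.where(mask, grid, -1)  # -1 marks hidden cells
        return {"text": text, "map": visual_handicap}


# Random-policy rollout showing the combined text + map observation structure.
env = ToyVisualHandicapEnv(difficulty="medium")
obs = env.reset(seed=42)
for _ in range(20):
    action = random.choice(["north", "south", "east", "west"])
    obs, reward, done = env.step(action)
    if done:
        break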