IJCAI 2023
Workshop paper

Explainable Neuro-Symbolic Reinforcement Learning


The transparency of trained machine learning models is an important concern, and neuro-symbolic methods have recently been proposed to improve it. Although these approaches can display the rules learned by neuro-symbolic neural networks, it remains difficult to understand the reasons behind the AI's decisions, and a human operator cannot edit the trained knowledge because existing systems lack a graphical visualizer and an easy-to-use interface. We propose essential components for an explainability demonstration, inspired by psychological studies, and develop a graphical interface for neuro-symbolic reinforcement learning. We conduct network-editing experiments to assess the advantages of the proposed interface, together with a questionnaire survey on the developed system.