Publication
IJCAI 2022
Workshop paper
Boolean Decision Rules for Reinforcement Learning Policy Summarisation
Abstract
Explainability of Reinforcement Learning (RL) policies remains a challenging research problem, particularly when considering RL in a safety context. Understanding the decisions and intentions of an RL policy offers avenues to incorporate safety into the policy by limiting undesirable actions. We propose the use of a Boolean Decision Rules model to create a post-hoc rule-based summary of an agent’s policy. We evaluate our proposed approach using a DQN agent trained on an implementation of a lava gridworld and show that, given a hand-crafted feature representation of this gridworld, simple generalised rules can be created, giving a post-hoc explainable summary of the agent’s policy. We discuss possible avenues to introduce safety into an RL agent’s policy by using rules generated by this rule-based model as constraints imposed on the agent’s policy, as well as how creating simple rule summaries of an agent’s policy may help in debugging RL agents.
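The pipeline the abstract describes — collecting (state-feature, action) pairs from a trained agent and fitting a rule model to them — can be sketched with a toy stand-in. This is a minimal illustration, not the paper's implementation: the paper uses a DQN and the Boolean Decision Rules (column-generation) model, whereas here a scripted policy stands in for the trained agent, the lava gridworld is reduced to a one-dimensional corridor, and a brute-force search for the shortest exact conjunction of feature literals plays the role of the rule learner. All names (`lava_ahead`, `past_lava`, `learn_rule`) are hypothetical.

```python
from itertools import combinations, product

# Toy 1-D "lava corridor": positions 0..5, lava at position 3.
# Assumption: a scripted policy stands in for the paper's trained DQN
# so that the example is self-contained.
LAVA = 3
STATES = [p for p in range(6) if p != LAVA]  # the agent never stands in lava

def features(p):
    """Hand-crafted boolean features of a state (hypothetical names)."""
    return {"lava_ahead": p + 1 == LAVA, "past_lava": p > LAVA}

def policy(p):
    """Stand-in for the trained agent: jump over lava, otherwise go right."""
    return "jump" if p + 1 == LAVA else "right"

def learn_rule(data, action):
    """Find the shortest conjunction of feature literals that exactly
    separates `action` states from the rest -- a crude stand-in for the
    Boolean Decision Rules model's learned DNF clauses."""
    names = list(data[0][0])
    pos = [f for f, a in data if a == action]
    neg = [f for f, a in data if a != action]
    for k in range(len(names) + 1):          # shortest conjunctions first
        for combo in combinations(names, k):
            for signs in product([True, False], repeat=k):
                literals = list(zip(combo, signs))
                match = lambda f: all(f[n] == s for n, s in literals)
                if all(map(match, pos)) and not any(map(match, neg)):
                    return literals
    return None

# Summarise the policy as one rule per action.
data = [(features(p), policy(p)) for p in STATES]
for action in ("jump", "right"):
    rule = learn_rule(data, action)
    clause = " AND ".join(n if s else f"NOT {n}" for n, s in rule) or "TRUE"
    print(f"IF {clause} THEN {action}")
```

On this toy corridor the search recovers the two generalised rules one would hope for — `IF lava_ahead THEN jump` and `IF NOT lava_ahead THEN right` — illustrating how such rules could double as readable policy summaries or as constraints on undesirable actions.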