Machine Learning applications are emerging as useful tools for decision making. However, interpreting and explaining their decisions remains difficult, and such explanations are a needed feature for human users. In this paper, we consider the problem of interpretability of decisions made in sequential decision-making problems, which are frequently addressed by Markov Decision Process (MDP) or Reinforcement Learning (RL) approaches. We distinguish between two types of interpretability: (i) dimensionality reduction, which targets technical experts such as optimization experts and data scientists, and (ii) interpretability for business users (e.g., customers). In this work, we utilise a neuro-symbolic framework called the Logical Neural Network (LNN), which integrates data-driven neural learning with symbolic representation. For a multi-echelon supply chain use case, we show how the LNN helps a technical expert decide which state variables should remain in the problem's state description, which is then solved by a classical MDP dual linear programming (LP) approach. We then show how a data set, generated by applying the MDP policy (in a gym environment), is used by the LNN to generate rules that are more tractable than a classical MDP policy.
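The dual-LP formulation mentioned above optimizes over occupation measures x(s, a) rather than values. A minimal sketch on a hypothetical two-state, two-action MDP (the transition and reward numbers are illustrative, not from the supply chain use case) using `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy MDP (illustrative only): 2 states, 2 actions.
nS, nA, gamma = 2, 2, 0.9
# P[s, a, s'] = transition probability; R[s, a] = immediate reward.
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.3, 0.7]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
alpha = np.full(nS, 1.0 / nS)  # initial state distribution

# Dual LP over occupation measures x(s, a):
#   max  sum_{s,a} x(s,a) R(s,a)
#   s.t. sum_a x(s',a) - gamma * sum_{s,a} P(s'|s,a) x(s,a) = alpha(s'),  x >= 0
c = -R.flatten()  # linprog minimizes, so negate the objective
A_eq = np.zeros((nS, nS * nA))
for sp in range(nS):
    for s in range(nS):
        for a in range(nA):
            A_eq[sp, s * nA + a] = (1.0 if s == sp else 0.0) - gamma * P[s, a, sp]
res = linprog(c, A_eq=A_eq, b_eq=alpha, bounds=[(0, None)] * (nS * nA))

x = res.x.reshape(nS, nA)
policy = x.argmax(axis=1)  # deterministic policy: act where occupation mass sits
print(policy)
```

The optimal deterministic policy is read off from where the occupation measure places its mass in each state; total mass always sums to 1/(1 - gamma), a useful sanity check on the solution.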