Publication
AAAI 2025
Poster
Neural Reasoning Networks: Efficient interpretable neural networks with automatic textual explanations
Abstract
In this paper, we present a novel neuro-symbolic AI architecture, Neural Reasoning Networks (NRN), that is scalable and generates interpretable logical reasoning at both the global and the sample level. NRNs use connected layers of logical neurons that implement a form of Łukasiewicz logic. A combined gradient-descent and bandit-based training procedure jointly optimizes both the structure and the weights of the network, and is implemented as an extension to PyTorch that takes full advantage of GPU scaling and batched training. Evaluation on a diverse set of open-source datasets for tabular learning demonstrates performance that exceeds traditional deep learning (DL) and is on par with state-of-the-art classical machine learning (ML) tree-based approaches, while training faster than other recent methods. Furthermore, NRN is the only method to meet all three challenges for interpretable algorithms introduced by Rudin (2019), namely 1) logical conditions, 2) linear modeling, and 3) case-based reasoning. Our approach thus provides a strong solution to the interpretability-performance trade-off.
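To make the notion of a logical neuron concrete, below is a minimal PyTorch sketch of a weighted Łukasiewicz conjunction, in the spirit of the real-valued logic the abstract describes. The class name LukasiewiczAnd and the specific activation max(0, min(1, beta - sum_i w_i * (1 - x_i))) are illustrative assumptions in the style of weighted real-valued logics, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class LukasiewiczAnd(nn.Module):
    """Illustrative weighted Lukasiewicz conjunction over truth values in [0, 1].

    Computes max(0, min(1, beta - sum_i w_i * (1 - x_i))), a common
    real-valued relaxation of logical AND. This is a sketch, not the
    NRN paper's implementation.
    """

    def __init__(self, in_features: int):
        super().__init__()
        # Learnable importance weights and bias; weights are clamped
        # non-negative in forward() so the neuron stays conjunction-like.
        self.weight = nn.Parameter(torch.ones(in_features))
        self.beta = nn.Parameter(torch.ones(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_features) with entries in [0, 1].
        w = self.weight.clamp(min=0.0)
        activation = self.beta - (w * (1.0 - x)).sum(dim=-1)
        return activation.clamp(0.0, 1.0)


if __name__ == "__main__":
    neuron = LukasiewiczAnd(in_features=3)
    x = torch.tensor([[1.0, 0.9, 0.8], [1.0, 0.2, 0.9]])
    # High truth value only when all inputs are near 1.
    print(neuron(x))
```

A disjunction neuron follows by De Morgan duality (min(1, beta + sum_i w_i * x_i), roughly), and stacking such layers yields a network whose structure and weights can then be optimized jointly, as the abstract outlines.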