- Songtao Lu
- Naweed Khan
- et al.
- 2021
- ICASSP 2021
Generalizable and transferable reasoning is an important capability on the path toward artificial general intelligence (AGI). The aim of this project is to research and develop logical neural networks (LNNs), a neuro-symbolic AI framework, and to use the findings to demonstrate the potential benefits of incorporating knowledge and reasoning into neural network learning.
In an LNN, every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable, disentangled representation. Inference is omnidirectional, rather than focused on predefined target variables, and corresponds to logical reasoning, including classical first-order logic theorem proving as a special case. The model is end-to-end differentiable, and learning minimizes a novel loss function capturing logical contradiction, yielding resilience to inconsistent knowledge. It also enables the open-world assumption by maintaining bounds on truth values, which can have probabilistic semantics, yielding resilience to incomplete knowledge.
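To make these ideas concrete, below is a minimal sketch (not the authors' implementation or the LNN library's API) of one such neuron: a weighted conjunction over truth-value *bounds*, assuming a Łukasiewicz-style clamped-linear activation. The names `WeightedAnd` and `contradiction_loss`, and the specific form `clamp(beta - sum_i w_i * (1 - x_i))`, are illustrative assumptions chosen to show how bounded truth values, differentiability, and a contradiction penalty can fit together.

```python
# Minimal sketch of an LNN-style weighted conjunction neuron over
# truth-value bounds. Illustrative only; names and activation form
# are assumptions, not the official LNN library API.
import torch
import torch.nn as nn

class WeightedAnd(nn.Module):
    """Conjunction neuron: truth = clamp(beta - sum_i w_i * (1 - x_i), 0, 1)."""
    def __init__(self, arity: int):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(arity))  # per-operand importance
        self.beta = nn.Parameter(torch.tensor(1.0))     # bias / threshold

    def forward(self, lower: torch.Tensor, upper: torch.Tensor):
        # With non-negative weights the activation is monotone in each input,
        # so bounds propagate directly: evaluating on the operands' lower
        # bounds yields the output lower bound, and likewise for the uppers.
        w = torch.relu(self.weights)  # keep weights non-negative
        lo = torch.clamp(self.beta - (w * (1.0 - lower)).sum(-1), 0.0, 1.0)
        hi = torch.clamp(self.beta - (w * (1.0 - upper)).sum(-1), 0.0, 1.0)
        return lo, hi

def contradiction_loss(lower: torch.Tensor, upper: torch.Tensor) -> torch.Tensor:
    """Penalize crossed bounds (lower > upper), i.e. logical contradiction."""
    return torch.relu(lower - upper).sum()

# Usage: two propositions under the open-world assumption. The second
# operand's truth is unknown, so its bounds stay maximally wide at [0, 1].
lower = torch.tensor([[0.9, 0.0]])
upper = torch.tensor([[1.0, 1.0]])
conj = WeightedAnd(arity=2)
lo, hi = conj(lower, upper)
loss = contradiction_loss(lo, hi)
loss.backward()  # gradients reach weights and beta: end-to-end differentiable
print(f"bounds on the conjunction: [{lo.item():.2f}, {hi.item():.2f}]")
```

The bounds make the open-world behavior visible: an unknown operand widens the conjunction's bounds rather than forcing a truth value, while the contradiction loss drives learning only when inconsistent knowledge pushes a lower bound above an upper bound.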