- 2022
- NeurIPS 2022

# Imprecise probabilistic logic

## Overview

### Neuro-symbolic AI

Neuro-symbolic AI or NSAI aims to bridge the gap between two of the most studied disciplines in AI -- the principled, deductive reasoning of formal logic systems and data-driven neural network architectures -- in order to provide a robust AI capable of reasoning, learning and cognitive modelling. Both paradigms come with their own strengths and weaknesses. On one hand, formal logic is interpretable, verifiable and, in principle, able to generalise to novel tasks. However, it is computationally intensive if not intractable, requires extensive domain knowledge, and is quite brittle in the face of even minor inconsistencies. Neural networks, on the other hand, perform well with noisy data, require little human input, and are much more efficient at runtime. However, they require enormous amounts of training data, are vulnerable to adversarial attacks and are in general very hard to interpret. Merging the two disciplines may exploit each paradigm's strengths while mitigating its weaknesses.

Our research efforts have been concentrated on developing novel neural architectures to facilitate learning and efficient logical reasoning (i.e., Logical Neural Networks), as well as developing a novel probabilistic logic framework that can represent and reason with imperfect or incomplete knowledge (i.e., Logical Credal Networks).

### Logical Credal Networks

Logical Credal Networks or LCNs are a recent probabilistic logic framework specifically designed for effectively aggregating and reasoning over multiple sources of imprecise knowledge. An LCN specifies probability bounds on propositional and first-order logic formulas with few restrictions, together with a Markov condition, similar to the one in Bayesian and Markov networks, that captures certain independence relations. Exact inference in LCNs requires solving a non-linear, non-convex constraint program defined over an exponentially large number of non-negative real-valued variables.
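
As a toy illustration of where the constraint program comes from: for a purely propositional LCN with the Markov condition omitted (its independence constraints are what make the program non-linear), the marginal bounds of a query reduce to a linear program over world probabilities. The sketch below is our own simplification, not the LCN reference implementation; the sentence encoding and all names are illustrative.

```python
from itertools import product
from scipy.optimize import linprog

ATOMS = ["A", "B"]
WORLDS = [dict(zip(ATOMS, bits)) for bits in product([False, True], repeat=len(ATOMS))]

# Each sentence bounds the probability of a formula: lo <= P(phi) <= hi.
# Formulas are written as plain Python predicates over a world (a simplification).
SENTENCES = [
    (lambda w: w["A"], 0.6, 0.9),                  # 0.6 <= P(A) <= 0.9
    (lambda w: (not w["A"]) or w["B"], 0.7, 1.0),  # 0.7 <= P(A -> B) <= 1.0
]

def marginal_bounds(query):
    """Lower/upper bounds on P(query) via a linear program over world probabilities."""
    n = len(WORLDS)
    A_eq, b_eq = [[1.0] * n], [1.0]  # world probabilities sum to one
    A_ub, b_ub = [], []              # inequality constraints, A_ub @ p <= b_ub
    for phi, lo, hi in SENTENCES:
        row = [1.0 if phi(w) else 0.0 for w in WORLDS]
        A_ub.append([-x for x in row]); b_ub.append(-lo)  # P(phi) >= lo
        A_ub.append(row);               b_ub.append(hi)   # P(phi) <= hi
    c = [1.0 if query(w) else 0.0 for w in WORLDS]
    low = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds=[(0.0, 1.0)] * n)
    high = linprog([-x for x in c], A_ub, b_ub, A_eq, b_eq, bounds=[(0.0, 1.0)] * n)
    return low.fun, -high.fun

low, high = marginal_bounds(lambda w: w["B"])
print(f"{low:.2f} <= P(B) <= {high:.2f}")  # 0.30 <= P(B) <= 1.00
```

Adding independence constraints turns products of these variables into the objective and constraints, which is where the non-convexity of exact LCN inference arises.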

Approximate inference in LCNs can be done using a novel iterative message-passing algorithm called ARIEL. This approach is inspired by the classical belief propagation scheme for graphical models and propagates messages in an iterative manner between the nodes of a factor graph associated with the LCN. The key novelty of our scheme is that the messages contain both lower and upper bounds on the marginal probabilities of the LCN's variables, and these bounds are tightened iteratively. Importantly, computing these messages involves solving considerably smaller local non-linear constraint programs than those involved in exact inference.
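
The bound-tightening idea can be sketched as follows: each update improves a node's [lower, upper] interval using only a small local constraint, iterating until a fixed point. This is a toy in the spirit of the scheme, not the published ARIEL messages; the single local rule used here (a Fréchet-style inequality) and all names are illustrative assumptions.

```python
# Toy iterative bound tightening (illustrative, not the actual ARIEL algorithm).
# Knowledge: 0.6 <= P(A) <= 0.9 and P(A -> B) >= 0.7; we tighten bounds on P(B).
P_IMPLIES_LO = 0.7  # lower bound on P(A -> B)

def propagate(bounds, max_iters=10):
    """Iteratively tighten [lower, upper] marginal bounds until a fixed point."""
    for _ in range(max_iters):
        changed = False
        a_lo, _ = bounds["A"]
        # Local "message" from the factor for A -> B to node B, using the
        # Frechet-style bound P(B) >= P(A and B) >= P(A) + P(A -> B) - 1.
        new_lo = max(bounds["B"][0], a_lo + P_IMPLIES_LO - 1.0)
        if new_lo > bounds["B"][0]:
            bounds["B"][0] = new_lo
            changed = True
        if not changed:  # no local update changed any bound: fixed point reached
            break
    return bounds

print(propagate({"A": [0.6, 0.9], "B": [0.0, 1.0]}))  # B's lower bound rises to ~0.3
```

In ARIEL itself each such update is a small non-linear constraint program rather than a closed-form rule, but the shape of the iteration is the same: intervals only shrink, so the process converges.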

Our results are quite promising and show that ARIEL produces high-quality solutions compared with the exact inference approach. Moreover, we show that ARIEL scales to much larger problems than previously considered while maintaining solution quality. This is important because it allows us to tackle practical problems, especially first-order LCNs with large domains whose groundings can translate to many hundreds of variables.

Potential future directions include extending to temporal models, further algorithmic innovations for learning LCNs from data and experiments on a wider array of applications.

### Probabilistic Logical Neural Networks

Logical Neural Networks or LNNs are simultaneously capable of both neural network-style learning and classical AI-style reasoning. The LNN is a new neural network architecture with a 1-to-1 correspondence to a system of logical formulae, in which neurons model a rigorously defined notion of weighted real-valued or classical first-order logic. LNNs allow training while preserving the classical or real-valued nature of their logical gates by enforcing certain logical constraints on the neural weights during training. They perform bidirectional inference, propagating truth values from each formula's atoms to its root and vice versa, thus modelling classical inference rules such as modus ponens. By maintaining both lower and upper bounds on the truth values at each of its neurons, an LNN accommodates the open-world assumption that some logical statements may be true even if their truth value is not known or provable. LNNs can naturally be integrated with classical neural network architectures, thus facilitating the integration of expert domain knowledge.
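
To make the bidirectional, bounds-based inference concrete, here is a minimal sketch for a single implication A -> B under a Łukasiewicz real-valued semantics. The function names and the two rules shown are our own simplification (unweighted, single formula), not the LNN library API.

```python
def upward(a, b):
    """Truth bounds for A -> B from bounds on A and B.

    Lukasiewicz implication: truth(A -> B) = min(1, 1 - truth(A) + truth(B)).
    The lower bound takes the worst case (A high, B low), the upper the best.
    """
    return [min(1.0, 1.0 - a[1] + b[0]), min(1.0, 1.0 - a[0] + b[1])]

def downward_modus_ponens(a, impl, b):
    """Tighten B's lower bound: truth(B) >= truth(A) + truth(A -> B) - 1."""
    b = list(b)
    b[0] = max(b[0], a[0] + impl[0] - 1.0)
    return b

A = [1.0, 1.0]     # A is known true
IMPL = [0.8, 1.0]  # the formula A -> B holds with truth at least 0.8
B = [0.0, 1.0]     # B starts fully unknown: the open-world assumption
B = downward_modus_ponens(A, IMPL, B)
print(B)  # B's lower bound rises to ~0.8 while its upper bound stays 1.0
```

Upward passes compute formula bounds from atom bounds; downward passes such as the modus ponens rule above push information back to the atoms, and repeating both until no interval changes yields the tightest bounds the rules can derive.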

Probabilistic Logical Neural Networks or PLNNs are a novel probabilistic extension of LNNs that aims to support probabilistic reasoning tasks such as computing the conditional probabilities of certain neurons given observations on other neurons or parts of the network. In addition, our research efforts are centred around developing novel lifted inference algorithms for both LNNs and PLNNs, thus allowing them to scale to large first-order logic knowledge bases.