13 Nov 2020

DualTKB: A Dual Learning Bridge between Text and Knowledge Base

Capturing and structuring common knowledge from the real world to make it available to computer systems is one of the foundational principles of IBM Research. Real-world information is often naturally organized as graphs (e.g., the world wide web, social networks), where knowledge is represented not only by the data content of each node, but also by the manner in which these nodes connect to each other. For example, the information in the previous sentence could be represented as the following graph:

Real-world information is often naturally organized as graphs (e.g., the world wide web, social networks), where knowledge is represented not only by the data content of each node, but also by the manner in which these nodes connect to each other.

Graph representations of knowledge are a powerful tool for capturing the information around us. They enable the creation of Knowledge Bases (KBs) that encompass information as Knowledge Graphs (KGs) from which computer systems can efficiently learn.

Being able to process information embedded within knowledge graphs natively is part of IBM Research's effort to create the foundations of Trustworthy AI, where we build and enable AI solutions people can trust.

IBM Research is particularly interested in the task of transferring knowledge from a Knowledge Graph to a more accessible, human-readable modality such as text. Text is a natural medium for humans to acquire knowledge by learning new facts, concepts, and ideas.

IBM Research Introduces DualTKB at EMNLP 2020

In new work presented at EMNLP 2020, a team of IBM researchers explored how to create a learning bridge between text and knowledge bases by enabling a fluid transfer of information from one modality to the other.

The approach is bi-directional, allowing for a dual learning framework that can generate text from a KB and a KB from text.


Approach to DualTKB

DualTKB is a novel approach that defines a dual learning bridge between text and a Knowledge Base: it can ingest sentences and generate corresponding paths in a KB, and vice versa. To tackle this challenging task, we first describe the case of generating one path from/to one sentence.
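
For a sequence model to consume or produce KB paths at all, the triples must first be serialized into token sequences. The snippet below is a minimal sketch of one plausible serialization scheme; the <H>/<R>/<T> marker tokens are our illustrative assumption, not the paper's exact format.

```python
def linearize_path(triples):
    """Turn a KB path [(head, relation, tail), ...] into one flat token string.
    The <H>/<R>/<T> markers are hypothetical special tokens."""
    return " ".join(f"<H> {h} <R> {r} <T> {t}" for h, r, t in triples)

print(linearize_path([("musician", "CapableOf", "play musical instrument")]))
# -> <H> musician <R> CapableOf <T> play musical instrument
```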

Since we designed DualTKB to be bi-directional, our approach can natively translate (or transfer) to and from both modalities. Translation cycles can therefore be defined to enforce consistency in generation. For instance, DualTKB can transfer a sentence to the KB domain, take the generated path, and translate it back to the text domain. This is key to the dual learning process, where a translation cycle must return to the original domain with semantic consistency (in our example, we should get back either the original sentence or a sentence semantically very close to it).

These consistency translation cycles were originally motivated by the lack of parallel data for cross-domain generation tasks. By relying on these transfer cycles, our approach handles unsupervised settings natively. In the diagram below, we give examples of all the translation cycles that DualTKB can handle. For instance, from the text domain, we can transfer to the KB domain using T_AB and then translate back to text using T_ABA. We can also do a same-domain transfer using T_AA.

From the text domain, we can transfer to the KB domain using T_AB, and then translate back to text using T_ABA. We can also do a same-domain transfer using T_AA.
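
To make the cycle notation concrete, here is a toy sketch of the text-side cycles, where encode, decode_A, and decode_B are hypothetical stand-ins for the model components described in the next section:

```python
def text_cycles(x_A, encode, decode_A, decode_B):
    """Illustrates the cycle notation; a consistent model should
    produce x_ABA semantically close to the original x_A."""
    x_AB = decode_B(encode(x_A))    # T_AB: text -> KB path
    x_ABA = decode_A(encode(x_AB))  # T_ABA: back-translate path -> text
    x_AA = decode_A(encode(x_A))    # T_AA: same-domain reconstruction
    return x_AB, x_ABA, x_AA
```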

IBM Research’s Model

The proposed model follows an encoder-decoder architecture, where the encoder projects text x_A or a graph path x_B into a common high-dimensional representation. This representation is then passed through a specialized decoder (Decoder A or Decoder B), which can either reconstruct the same modality (auto-encoding, producing x_AA or x_BB) or transfer to the other modality (producing x_AB or x_BA).

The proposed model follows an encoder-decoder architecture, where the encoder projects text x_A or a graph path x_B into a common high-dimensional representation.
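
As a rough sketch of this architecture in PyTorch: the best-performing variant reported below uses GRU encoders and decoders, but the shared vocabulary, layer sizes, and other details here are our simplifying assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class DualEncoderDecoder(nn.Module):
    """A minimal sketch: one shared GRU encoder, plus a text decoder (A)
    and a KB-path decoder (B) over a shared vocabulary (an assumption)."""

    def __init__(self, vocab_size, emb_dim=256, hid_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder_A = nn.GRU(emb_dim, hid_dim, batch_first=True)  # text
        self.decoder_B = nn.GRU(emb_dim, hid_dim, batch_first=True)  # KB path
        self.out_A = nn.Linear(hid_dim, vocab_size)
        self.out_B = nn.Linear(hid_dim, vocab_size)

    def forward(self, src_tokens, tgt_tokens, target_domain):
        # Encode the source sequence into the common representation.
        _, h = self.encoder(self.embed(src_tokens))
        # Route through the decoder of the requested output modality.
        decoder = self.decoder_A if target_domain == "A" else self.decoder_B
        proj = self.out_A if target_domain == "A" else self.out_B
        dec_out, _ = decoder(self.embed(tgt_tokens), h)
        return proj(dec_out)  # logits over the vocabulary
```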

Traditionally, such systems are trained through supervised seq2seq (sequence-to-sequence) learning, where paired text-path data is used to enable same- and cross-modality generation. Unfortunately, many real-world KB datasets do not have corresponding pairs of text and path, requiring unsupervised training techniques.

For this purpose, we propose to augment the traditional supervised training, when parallel data is present, with unsupervised training when no paired data is available. The overall training process is shown in this diagram.

The training process can be decomposed into a translation stage (L_REC, L_SUP) followed by a back-translation stage (L_BT). Inherently, the model can natively deal with both supervised and unsupervised data. A more detailed description of the training method can be found in the paper.
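
In code, the combined objective might look like the following sketch, building on the DualEncoderDecoder above. Teacher-forcing/token-shifting details are elided, and x_AB (a model-generated path from any decoding routine) is a hypothetical argument for the unsupervised case:

```python
import torch.nn.functional as F

def seq_nll(logits, targets):
    # Token-level cross-entropy; logits: (B, T, V), targets: (B, T).
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           targets.reshape(-1))

def training_loss(model, x_A, x_B=None, x_AB=None):
    # L_REC: reconstruct the input sentence in its own domain (x_AA).
    loss = seq_nll(model(x_A, x_A, target_domain="A"), x_A)
    if x_B is not None:
        # L_SUP: supervised cross-domain transfer when a paired path exists.
        loss = loss + seq_nll(model(x_A, x_B, target_domain="B"), x_B)
    elif x_AB is not None:
        # L_BT: back-translate a path the model generated itself; the
        # cycle must land back on the original sentence x_A.
        loss = loss + seq_nll(model(x_AB, x_A, target_domain="A"), x_A)
    return loss
```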

IBM Research’s Dataset

Given the lack of a supervised dataset for the task of KB-text cross-domain translation, one of our contributions is the creation of a dataset based on the widely used ConceptNet KB and the Open Mind Common Sense (OMCS) list of commonsense fact sentences. Since ConceptNet was derived from OMCS, we employed fuzzy matching techniques to create a weakly supervised dataset by mapping ConceptNet to OMCS. This resulted in a parallel dataset of 100K edges for 250K sentences. We plan to release details and describe the heuristics involved in creating this dataset by fuzzy matching. In the meantime, some information can be found in our repository at https://github.com/IBM/dualtkb and in our paper.
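
To give a flavor of what fuzzy matching can look like, here is a minimal sketch using Python's standard difflib; the actual heuristics behind our dataset are more involved and will be described separately.

```python
import difflib

def match_sentence_to_triple(sentence, triples, cutoff=0.6):
    """Weak-supervision sketch: align a sentence to the triple whose
    linearized form is the closest fuzzy string match, if any."""
    candidates = [" ".join(t).lower() for t in triples]
    best = difflib.get_close_matches(sentence.lower(), candidates,
                                     n=1, cutoff=cutoff)
    return triples[candidates.index(best[0])] if best else None

triples = [("musician", "CapableOf", "play instrument"),
           ("knife", "UsedFor", "cutting")]
print(match_sentence_to_triple("a knife is used for cutting", triples))
# -> ('knife', 'UsedFor', 'cutting')
```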

Experiments

We compared the performance of DualTKB to published prior-work baselines on the task of link prediction. The table below reports MRR and Hits@k, both well-established metrics for evaluating the quality of link completion. DualTKB compares favorably with the competitors, enabling accurate link completion.

Model               MRR     HITS@1   HITS@3   HITS@10
DISTMULT [1]        8.97    4.51     9.76     17.44
COMPLEX [2]         11.40   7.42     12.45    19.01
CONVE [3]           20.88   13.97    22.91    34.02
CONVTRANSE [4]      18.68   7.87     23.87    38.95
S+G+B+C [5]         51.11   39.42    59.58    73.59
DualTKB (GRU-GRU)   63.10   55.38    69.75    74.58
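
Both metrics are straightforward to compute from the rank of each gold entity among the scored candidates. A minimal sketch (standard definitions, not our evaluation code):

```python
def mrr_and_hits(ranks, ks=(1, 3, 10)):
    """MRR and Hits@k from the 1-based rank of each gold entity.
    MRR averages reciprocal ranks; Hits@k is the fraction ranked <= k."""
    n = len(ranks)
    mrr = sum(1.0 / r for r in ranks) / n
    hits = {k: sum(r <= k for r in ranks) / n for k in ks}
    return mrr, hits

mrr, hits = mrr_and_hits([1, 2, 5, 12])
print(f"MRR={mrr:.3f}", hits)  # MRR=0.446 {1: 0.25, 3: 0.5, 10: 0.75}
```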

For a more qualitative evaluation of cross-domain generation, we provide examples:

Transfer from multiple sentences to paths (composing a graph) on the left, and on the right the reverse operation of sentence generation from a list of given paths.

Future work

Our current research opens the door to multiple exciting future directions:

  1. A natural continuation of our work, which currently deals with transfer between a single sentence and a single path, is its extension to the generation of large multi-path graph structures from short paragraphs, or the reverse problem of converting a KB into a coherent textual description.
  2. Another direction of investigation is the development of class-conditional generation of long-form text conditioned on facts from a Knowledge Base.
  3. In the field of Trusted AI, we are also interested in extending this work toward trusted generation of text and factual checking of text against a Knowledge Base.

A version of our EMNLP’20 paper can be found online at https://arxiv.org/abs/2010.14660

IBM Researchers involved with this work are Pierre Dognin, Igor Melnyk, Inkit Padhi, Cicero Nogueira dos Santos (now at AWS AI), and Payel Das.

References

  1. Yang, B., Yih, W., He, X., Gao, J. & Deng, L. Embedding Entities and Relations for Learning and Inference in Knowledge Bases. In Proceedings of the International Conference on Learning Representations (ICLR, 2015).

  2. Trouillon, T., Welbl, J., Riedel, S., Gaussier, E. & Bouchard, G. Complex Embeddings for Simple Link Prediction. In International Conference on Machine Learning, 2071–2080 (PMLR, 2016).

  3. Dettmers, T., Minervini, P., Stenetorp, P. & Riedel, S. Convolutional 2D Knowledge Graph Embeddings. AAAI 32 (2018).

  4. Shang, C. et al. End-to-End Structure-Aware Convolutional Networks for Knowledge Base Completion. AAAI 33, 3060–3067 (2019).

  5. Malaviya, C., Bhagavatula, C., Bosselut, A. & Choi, Y. Commonsense Knowledge Base Completion with Structural and Semantic Context. AAAI 34, 2925–2933 (2020).