
PrimeQA

An efficient toolkit for state-of-the-art models on multilingual open-domain question answering.

Overview

Question Answering (QA) systems represent a step beyond Information Retrieval (IR) systems: while an IR system returns a document, adding a Machine Reading Comprehension (MRC) component enables the system to return an exact answer span. IR involves, given a question (or a keyword query) and a large document collection, retrieving the relevant passages, snippets of text, or even whole documents that potentially answer the question. MRC involves “extracting” the answer span, typically a very short, precise piece of text, from the passages relevant to the asked question. In the current literature, this entire pipeline is called “Open Retrieval QA” (ORQA), with the IR stage referred to as the “retriever” and the MRC stage as the “reader”. Here is an example of a multilingual ORQA setting: the system is asked a question in Japanese, retrieves and reads the relevant articles from an English Wikipedia collection, and answers back.

[Figure: a multilingual Open Retrieval QA example in which a question asked in Japanese is answered by retrieving and reading articles from an English Wikipedia collection]
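To make the two stages concrete, here is a toy retriever-and-reader pipeline built from generic off-the-shelf pieces (rank_bm25 for retrieval and a Hugging Face extractive reader). This is an illustrative sketch, not PrimeQA's own API, and the checkpoint choice is just an example:

```python
from rank_bm25 import BM25Okapi
from transformers import pipeline

# A tiny "document collection" standing in for, say, Wikipedia.
corpus = [
    "Canberra is the capital city of Australia.",
    "The Amazon rainforest spans nine countries in South America.",
    "Mount Fuji is the tallest mountain in Japan.",
]

question = "Which city is the capital of Australia?"

# Retriever: score every passage against the query and keep the best one.
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])
top_passage = bm25.get_top_n(question.lower().split(), corpus, n=1)[0]

# Reader: extract the exact answer span from the retrieved passage.
reader = pipeline("question-answering", model="deepset/roberta-base-squad2")
answer = reader(question=question, context=top_passage)
print(answer["answer"])  # expected: "Canberra"
```

PrimeQA packages far stronger neural retrievers (for example, ColBERT- and DPR-style models) and readers behind a similarly small surface area.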

Open Retrieval QA is moving fast, with state-of-the-art (SOTA) retrievers, readers, and multi-modal QA models launching regularly. However, two key hurdles slow the community's adoption of these SOTA models: (1) reproducibility and (2) ease of use.

PrimeQA addresses both issues. It is a one-stop, comprehensive retriever and reader toolkit that offers:

  1. Simple Python code to train and perform inference for QA problems.

  2. Top performance on multiple leaderboards (TyDI, XOR-TyDI) and multi-modal benchmarks such as HybridQA.

  3. Trained models on the Hugging Face model hub.

  4. Services on Docker Hub that make it easy to build your own QA search engine.

  5. A large contributor base spanning Stanford, UIUC, OSU, CMU, Sea AI Lab, and others.

Let’s walk through a few examples to show just how easy PrimeQA is to use.

You can use the SOTA reader that won the TyDI leaderboard with only a few lines of code, as shown below:

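A minimal sketch of that usage, assuming the ExtractiveReader component described in PrimeQA's documentation; treat the import path, the load/predict calls, and the checkpoint id as assumptions that vary across releases:

```python
import json

# Assumed import path: recent PrimeQA releases expose readers under
# primeqa.components; older releases organize them differently.
from primeqa.components.reader.extractive import ExtractiveReader

# Hypothetical checkpoint id; substitute an extractive reader from the
# PrimeQA model hub.
reader = ExtractiveReader(model="PrimeQA/tydiqa-primary-task-xlm-roberta-large")
reader.load()  # some releases require an explicit load before predicting

questions = ["Which country is Canberra located in?"]
contexts = [[
    "Canberra is the capital city of Australia. Founded following the "
    "federation of the colonies of Australia, it is Australia's largest "
    "inland city."
]]

answers = reader.predict(questions, contexts)
print(json.dumps(answers, indent=4))
```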

This yields the extracted answer span together with a confidence score for each question.

Within PrimeQA we also provide mechanisms to train and fine-tune these individual components; read the full documentation here.

PrimeQA also supports multi-modal QA; for example, you can use PrimeQA for table question answering, as shown below:

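Below is a stand-in sketch using a TAPAS table reader through the Hugging Face pipeline API; PrimeQA wraps comparable table readers behind its own components, so the calls here are illustrative rather than PrimeQA's exact interface:

```python
import pandas as pd
from transformers import pipeline

# A small table; TAPAS expects every cell as a string.
table = pd.DataFrame({
    "City": ["Paris", "London", "Lyon"],
    "Population": ["2161000", "8982000", "513000"],
})

# Illustrative checkpoint choice, not prescribed by PrimeQA.
table_qa = pipeline("table-question-answering",
                    model="google/tapas-base-finetuned-wtq")

result = table_qa(table=table, query="Which city has the largest population?")
print(result["answer"])
```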

This produces the answer selected from the table (for the sketch above, the expected answer is “London”).

PrimeQA also supports domain adaptation by generating “synthetic” questions from the target domain’s documents. This question generation capability works over tables, text, and hybrid contexts, and again takes only a few lines of code; a rough sketch of the idea follows below. Check out the PrimeQA notebooks for easy usage examples across the different capabilities.
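As a stand-in for that capability, here is a sketch that prompts a generic instruction-tuned seq2seq model to generate synthetic questions from a target-domain passage; it uses plain Hugging Face transformers, not PrimeQA's own question generation API:

```python
from transformers import pipeline

# Stand-in sketch: generate synthetic questions for a target-domain passage
# with a generic instruction-tuned seq2seq model (not PrimeQA's QG API).
generator = pipeline("text2text-generation", model="google/flan-t5-base")

passage = (
    "PrimeQA is an open-source toolkit for multilingual open-domain "
    "question answering, with retriever, reader, and question generation "
    "components."
)

prompt = f"Generate a question that is answered by the following passage:\n{passage}"

# Sample a few candidate questions for the passage.
candidates = generator(prompt, max_new_tokens=40, do_sample=True,
                       num_return_sequences=3)
for c in candidates:
    print(c["generated_text"])
```

Synthetic question-answer pairs generated this way can then be used to fine-tune a reader on the new domain.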

PrimeQA has many other capabilities available and coming soon; check out the recent updates and full details here.

Technical Resources

Main GitHub

Services GitHub

Model hub