AAAI 2021
Conference paper

Multilingual Transfer Learning for QA using Translation as Data Augmentation

Prior work on multilingual question answering has mostly focused on using large multilingual pre-trained language models (LM) to perform zero-shot language-wise learning: train a QA model on English and test on other languages. In this work, we explore strategies that improve cross-lingual transfer by bringing the multilingual embeddings closer in the semantic space. Our first strategy augments the original English training data with machine translation-generated data. This results in a corpus of multilingual silver-labeled QA pairs that is 14 times larger than the original training set. In addition, we propose two novel strategies, language adversarial training and language arbitration framework, which significantly improve the (zero-resource) cross-lingual transfer performance and result in LM embeddings that are less language-variant. Empirically, we show that the proposed models outperform the previous zero-shot baseline on the recently introduced multilingual MLQA (Lewis et al. 2020) and TyDiQA (Clark et al. 2020) datasets.
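The augmentation strategy described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `translate` stands in for whatever machine-translation system produces the silver-labeled data, and here it is a mock that merely tags text with the target language code.

```python
def translate(text, target_lang):
    # Hypothetical MT call; a real system would return an actual translation.
    # This mock just tags the text so the augmentation flow is visible.
    return f"[{target_lang}] {text}"

def augment_with_translations(qa_pairs, target_langs):
    """Expand English (question, answer) pairs into a multilingual
    silver-labeled set: the originals plus one translated copy per
    target language."""
    augmented = list(qa_pairs)  # keep the original English data
    for question, answer in qa_pairs:
        for lang in target_langs:
            augmented.append(
                (translate(question, lang), translate(answer, lang))
            )
    return augmented

pairs = [("Where is the Eiffel Tower?", "Paris")]
silver = augment_with_translations(pairs, ["de", "es"])
# 1 original pair + 2 translated copies = 3 pairs
```

With the paper's 13 additional target languages, this scheme yields the reported corpus roughly 14 times the size of the original English training set (originals plus 13 translated copies).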


02 Feb 2021