Ewan Nurse, Benjamin Scott Mashford, et al.
CF 2016
The creation of large-scale open-domain reading comprehension data sets in recent years has enabled the development of end-to-end neural comprehension models with promising results. To use these models for domains with limited training data, one of the most effective approaches is to first pre-train them on large out-of-domain source data and then fine-tune them with the limited target data. The caveat is that, after fine-tuning, the comprehension models tend to perform poorly on the source domain, a phenomenon known as catastrophic forgetting. In this paper, we explore methods that reduce catastrophic forgetting during fine-tuning without assuming access to data from the source domain. We introduce new auxiliary penalty terms and observe the best performance when a combination of auxiliary penalty terms is used to regularise the fine-tuning process for adapting comprehension models. To test our methods, we develop and release six narrow-domain data sets that can potentially be used as reading comprehension benchmarks.
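The abstract does not spell out the specific penalty terms, so the sketch below is only a rough illustration of the general idea: an L2-style auxiliary penalty that pulls the fine-tuned parameters back toward the pre-trained (source-domain) weights during target-domain fine-tuning. The function name auxiliary_penalty and the strength parameter are illustrative assumptions, not taken from the paper.

# Minimal sketch (PyTorch), assuming an L2 penalty toward the pre-trained
# weights as the auxiliary regulariser; this is not the paper's exact method.
import torch

def auxiliary_penalty(model, source_params, strength=0.01):
    """Sum of squared distances between current and pre-trained parameters."""
    penalty = 0.0
    for name, param in model.named_parameters():
        penalty = penalty + ((param - source_params[name]) ** 2).sum()
    return strength * penalty

# Usage during fine-tuning on the target domain:
#   source_params = {n: p.detach().clone() for n, p in model.named_parameters()}
#   loss = task_loss + auxiliary_penalty(model, source_params)
#   loss.backward(); optimizer.step()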
Guang Xu, Xu Zhong
Remote Sensing Letters
Victor Henrique Alves Ribeiro, Pedro Henrique Domingues, et al.
IJCNN 2020
Jianbin Tang, Benjamin Scott Mashford, et al.
IEEE GRSL