Publication
ACL-IJCNLP 2021
Workshop paper

IBM MNLP IE at CASE 2021 Task 2: NLI Reranking for Zero-Shot Text Classification

Abstract

Supervised models can achieve very high accuracy for fine-grained text classification. In practice, however, training data may be abundant for some types but scarce or even non-existent for others. We propose a hybrid architecture that uses as much labeled data as available for fine-tuning classification models, while also allowing for types with little (few-shot) or no (zero-shot) labeled data. In particular, we pair a supervised text classification model with a Natural Language Inference (NLI) reranking model. The NLI reranker uses a textual representation of target types that allows it to score the strength with which a type is implied by a text, without requiring training data for the types. Experiments show that the NLI model is very sensitive to the choice of textual representation, but can be effective for classifying unseen types. It can also improve classification accuracy for the known types of an already highly accurate supervised model.
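The hybrid architecture described above can be illustrated with a small sketch. This is not the authors' code: the score-combination rule, weight, and type names are illustrative assumptions. The idea is that known types receive a blend of supervised and NLI entailment scores, while unseen (zero-shot) types fall back to the NLI score alone.

```python
# Hypothetical sketch of NLI reranking for hybrid zero-shot classification.
# The combination rule (linear interpolation with weight `alpha`) and the
# example type names are assumptions, not the paper's exact method.

def nli_rerank(sup_scores, nli_scores, alpha=0.5):
    """Return the highest-scoring type after blending scores.

    sup_scores: supervised classifier probabilities for known types.
    nli_scores: NLI entailment scores for textual representations of
                all candidate types (known and unseen).
    Types absent from sup_scores are scored by NLI alone (zero-shot).
    """
    combined = {}
    for type_name, nli in nli_scores.items():
        if type_name in sup_scores:
            combined[type_name] = alpha * sup_scores[type_name] + (1 - alpha) * nli
        else:
            combined[type_name] = nli  # zero-shot: no supervised signal
    return max(combined, key=combined.get)

# Toy example: "protest" is a known type, "riot" has no training data.
sup = {"protest": 0.9}
nli = {"protest": 0.7, "riot": 0.6}
print(nli_rerank(sup, nli))  # -> protest
```

In a full system, the NLI scores would come from an entailment model scoring a hypothesis built from each type's textual representation against the input text; as the abstract notes, the choice of that representation matters considerably.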

Date

01 Aug 2021
