Publication
ACL 2023
Paper

MISMATCH: Fine-grained Evaluation of Machine-generated Text with Mismatch Error Types

Abstract

With the growing interest in large language models, the need to evaluate the quality of machine-generated text against reference (typically human-generated) text has become a focal point of attention. Most recent works focus either on task-specific evaluation metrics or on studying the properties of machine-generated text captured by existing metrics. In this work, we propose a new evaluation scheme to model human judgments in 7 NLP tasks, based on the fine-grained mismatches between a pair of texts. Inspired by recent efforts in several NLP tasks toward fine-grained evaluation, we introduce a set of 13 mismatch error types, such as spatial/geographic errors and entity errors, to guide the model toward better prediction of human judgments. We propose a neural framework for evaluating machine texts that uses these mismatch error types as auxiliary tasks and re-purposes existing single-number evaluation metrics as additional scalar features, alongside textual features extracted from the machine and reference texts. Our experiments reveal key insights about the existing metrics via the mismatch errors. We show that the mismatch errors between the sentence pairs on held-out datasets from 7 NLP tasks align well with human evaluation.
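
To make the described framework concrete, below is a minimal sketch (not the authors' released code) of the kind of multi-task evaluator the abstract outlines: a text-pair encoder whose pooled representation is combined with precomputed scalar metric scores, with one head regressing human judgments and an auxiliary multi-label head predicting the 13 mismatch error types. The encoder name, choice of scalar metrics, and head sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

NUM_ERROR_TYPES = 13  # e.g., spatial/geographic errors, entity errors, ...

class MismatchEvaluator(nn.Module):
    def __init__(self, encoder_name="roberta-base", num_scalar_metrics=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # Human-judgment regression head: textual features + scalar metric features
        self.judgment_head = nn.Linear(hidden + num_scalar_metrics, 1)
        # Auxiliary multi-label head over the mismatch error types
        self.error_head = nn.Linear(hidden, NUM_ERROR_TYPES)

    def forward(self, input_ids, attention_mask, metric_scores):
        # Encode the (machine text, reference text) pair jointly; pool the [CLS] token
        pooled = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state[:, 0]
        judgment = self.judgment_head(torch.cat([pooled, metric_scores], dim=-1))
        error_logits = self.error_head(pooled)
        return judgment.squeeze(-1), error_logits

# Usage: tokenize the text pair and pass precomputed single-number metric scores.
tok = AutoTokenizer.from_pretrained("roberta-base")
batch = tok(["machine text"], ["reference text"], return_tensors="pt", padding=True)
scores = torch.tensor([[0.41, 0.37, 0.88]])  # hypothetical BLEU / ROUGE / BERTScore values
model = MismatchEvaluator()
judgment, error_logits = model(batch["input_ids"], batch["attention_mask"], scores)
# Training would combine a regression loss on `judgment` with a
# binary cross-entropy loss on `error_logits` as the auxiliary task.
```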