Compression, Transduction, and Creation: A Unified Framework for Evaluating Natural Language Generation

Mingkai Deng, Bowen Tan, Zhengzhong Liu, Eric Xing, Zhiting Hu


Abstract
Natural language generation (NLG) spans a broad range of tasks, each of which serves specific objectives and demands different properties of generated text. This complexity makes automatic evaluation of NLG particularly challenging. Previous work has typically focused on a single task and developed individual evaluation metrics based on specific intuitions. In this paper, we propose a unifying perspective based on the nature of information change in NLG tasks, including compression (e.g., summarization), transduction (e.g., text rewriting), and creation (e.g., dialog). _Information alignment_ between input, context, and output text plays a common central role in characterizing the generation. With automatic alignment prediction models, we develop a family of interpretable metrics that are suitable for evaluating key aspects of different NLG tasks, often without the need for gold reference data. Experiments show that the uniformly designed metrics achieve stronger or comparable correlations with human judgement compared to state-of-the-art metrics across diverse tasks, including text summarization, style transfer, and knowledge-grounded dialog.
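To make the abstract's notion of an alignment-based metric concrete, here is a minimal illustrative sketch (not the authors' implementation): for a compression task such as summarization, a consistency score can be computed as the mean alignment of each output token to the input document. A toy lexical-overlap alignment stands in for the learned alignment prediction model described in the paper; the function names and scoring rule are assumptions for illustration only.

```python
def align(token: str, source_tokens: list[str]) -> float:
    """Toy alignment: 1.0 if the token appears in the source, else 0.0.
    The paper instead predicts soft alignment scores with a trained model."""
    return 1.0 if token.lower() in {t.lower() for t in source_tokens} else 0.0


def consistency(input_text: str, output_text: str) -> float:
    """Mean token-level alignment of the output to the input document."""
    src = input_text.split()
    out = output_text.split()
    if not out:
        return 0.0
    return sum(align(tok, src) for tok in out) / len(out)


doc = "The cat sat on the mat in the sunny kitchen"
print(consistency(doc, "The cat sat on the mat"))  # → 1.0 (fully supported)
print(consistency(doc, "The dog ran in the park"))  # → 0.5 (half unsupported)
```

In the paper's framework, swapping in different aggregation directions (output-to-input vs. input-to-output) and different source texts (input document, dialog context, knowledge) yields the metric family for compression, transduction, and creation tasks.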
Anthology ID:
2021.emnlp-main.599
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
7580–7605
URL:
https://aclanthology.org/2021.emnlp-main.599
DOI:
10.18653/v1/2021.emnlp-main.599
Cite (ACL):
Mingkai Deng, Bowen Tan, Zhengzhong Liu, Eric Xing, and Zhiting Hu. 2021. Compression, Transduction, and Creation: A Unified Framework for Evaluating Natural Language Generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7580–7605, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Compression, Transduction, and Creation: A Unified Framework for Evaluating Natural Language Generation (Deng et al., EMNLP 2021)
PDF:
https://aclanthology.org/2021.emnlp-main.599.pdf
Software:
 2021.emnlp-main.599.Software.zip
Video:
 https://aclanthology.org/2021.emnlp-main.599.mp4
Code
 tanyuqian/ctc-gen-eval
Data
CNN/Daily Mail
SummEval