Small but Mighty: New Benchmarks for Split and Rephrase

Li Zhang, Huaiyu Zhu, Siddhartha Brahma, Yunyao Li


Abstract
Split and Rephrase is a text simplification task of rewriting a complex sentence into simpler ones. As a relatively new task, it is paramount to ensure the soundness of its evaluation benchmark and metric. We find that the widely used benchmark dataset universally contains easily exploitable syntactic cues caused by its automatic generation process. Taking advantage of such cues, we show that even a simple rule-based model can perform on par with the state-of-the-art model. To remedy such limitations, we collect and release two crowdsourced benchmark datasets. We not only make sure that they contain significantly more diverse syntax, but also carefully control for their quality according to a well-defined set of criteria. While no satisfactory automatic metric exists, we apply fine-grained manual evaluation based on these criteria using crowdsourcing, showing that our datasets better represent the task and are significantly more challenging for the models.
Anthology ID:
2020.emnlp-main.91
Volume:
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Month:
November
Year:
2020
Address:
Online
Editors:
Bonnie Webber, Trevor Cohn, Yulan He, Yang Liu
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
1198–1205
URL:
https://aclanthology.org/2020.emnlp-main.91
DOI:
10.18653/v1/2020.emnlp-main.91
Cite (ACL):
Li Zhang, Huaiyu Zhu, Siddhartha Brahma, and Yunyao Li. 2020. Small but Mighty: New Benchmarks for Split and Rephrase. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1198–1205, Online. Association for Computational Linguistics.
Cite (Informal):
Small but Mighty: New Benchmarks for Split and Rephrase (Zhang et al., EMNLP 2020)
PDF:
https://aclanthology.org/2020.emnlp-main.91.pdf
Video:
https://slideslive.com/38938665
Data
WikiSplit