How Good (really) are Grammatical Error Correction Systems?

Alla Rozovskaya, Dan Roth


Abstract
Standard evaluations of Grammatical Error Correction (GEC) systems make use of a fixed reference text generated relative to the original text; they show, even when using multiple references, that we have a long way to go. This analysis paper studies the performance of GEC systems relative to closest-gold – a gold reference text created relative to the output of a system. Surprisingly, we find that the real performance is 20-40 points better than standard evaluations suggest. Moreover, the performance remains high even when considering any of the top-10 hypotheses produced by a system. Importantly, the type of mistakes corrected by lower-ranked hypotheses differs in interesting ways from the top one, providing an opportunity to focus on a range of errors – local spelling and grammar edits vs. more complex lexical improvements. Our study shows these results in English and Russian, and thus provides a preliminary proposal for a more realistic evaluation of GEC systems.
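
The paper's closest-gold is a gold reference produced relative to the system's output rather than to the original text. As a hedged illustration only (not the authors' procedure), the sketch below approximates the idea with a rough proxy: score each hypothesis against whichever of several available references it diverges from least, using token-level edits extracted with Python's difflib and an edit-based F0.5. All function names and the scoring details here are hypothetical.

```python
# Hypothetical sketch: score a GEC hypothesis against the reference that is
# "closest" to it, as a rough proxy for the closest-gold idea in the abstract.
# This is NOT the paper's method; edits are extracted naively with difflib.
import difflib
from typing import List, Sequence, Set, Tuple

Edit = Tuple[int, int, Tuple[str, ...]]  # (source start, source end, replacement tokens)


def token_edits(src: Sequence[str], tgt: Sequence[str]) -> Set[Edit]:
    """Return the set of non-equal edit spans that turn src into tgt."""
    sm = difflib.SequenceMatcher(a=src, b=tgt)
    return {(i1, i2, tuple(tgt[j1:j2]))
            for tag, i1, i2, j1, j2 in sm.get_opcodes() if tag != "equal"}


def f_beta(tp: int, fp: int, fn: int, beta: float = 0.5) -> float:
    """Edit-level F_beta; beta=0.5 weighs precision twice as much as recall."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 0.0 if p == 0.0 and r == 0.0 else (1 + beta**2) * p * r / (beta**2 * p + r)


def score_against_closest_reference(source: List[str], hypothesis: List[str],
                                    references: List[List[str]]) -> float:
    """F0.5 of the hypothesis against whichever reference it matches best."""
    hyp_edits = token_edits(source, hypothesis)
    best = 0.0
    for ref in references:
        ref_edits = token_edits(source, ref)
        tp = len(hyp_edits & ref_edits)
        fp = len(hyp_edits - ref_edits)
        fn = len(ref_edits - hyp_edits)
        best = max(best, f_beta(tp, fp, fn))
    return best


if __name__ == "__main__":
    src = "He go to school yesterday".split()
    hyp = "He went to school yesterday".split()
    refs = ["He went to school yesterday".split(),
            "He goes to school every day".split()]
    # 1.0: the hypothesis exactly matches the edits of the closest reference.
    print(score_against_closest_reference(src, hyp, refs))
```

In a fixed-reference evaluation, the same hypothesis can be penalized for valid corrections that simply match a different reference; scoring against the closest available gold removes that artifact, which is the gap the paper quantifies at 20-40 points.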
Anthology ID:
2021.eacl-main.231
Volume:
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Month:
April
Year:
2021
Address:
Online
Editors:
Paola Merlo, Jörg Tiedemann, Reut Tsarfaty
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
2686–2698
URL:
https://aclanthology.org/2021.eacl-main.231
DOI:
10.18653/v1/2021.eacl-main.231
Cite (ACL):
Alla Rozovskaya and Dan Roth. 2021. How Good (really) are Grammatical Error Correction Systems?. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2686–2698, Online. Association for Computational Linguistics.
Cite (Informal):
How Good (really) are Grammatical Error Correction Systems? (Rozovskaya & Roth, EACL 2021)
PDF:
https://aclanthology.org/2021.eacl-main.231.pdf