Grammatical Error Correction with Neural Reinforcement Learning

Keisuke Sakaguchi, Matt Post, Benjamin Van Durme


Abstract
We propose a neural encoder-decoder model with reinforcement learning (NRL) for grammatical error correction (GEC). Unlike conventional maximum likelihood estimation (MLE), the model directly optimizes an objective based on a sentence-level, task-specific evaluation metric, avoiding the exposure bias issue in MLE. We demonstrate that NRL outperforms MLE in both human and automated evaluation metrics, achieving state-of-the-art results on a fluency-oriented GEC corpus.
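To illustrate the kind of training objective the abstract describes, here is a minimal sketch (not the authors' code) of a REINFORCE-style policy-gradient update: sample a correction from the decoder, score the whole sentence with a task metric, and scale the sample's log-likelihood by that reward. The toy GRU decoder, the constant baseline, and the unigram-overlap `sentence_reward` stub are illustrative placeholders only; the paper itself optimizes a sentence-level GEC metric (GLEU) with its own encoder-decoder architecture.

```python
# Sketch of policy-gradient (REINFORCE) training with a sentence-level reward.
# Assumes PyTorch; the decoder and reward below are toy stand-ins.
import torch
import torch.nn as nn

VOCAB, HIDDEN, MAX_LEN = 100, 64, 20

class ToyDecoder(nn.Module):
    """A tiny GRU decoder used only to make the sketch runnable."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, HIDDEN)
        self.rnn = nn.GRUCell(HIDDEN, HIDDEN)
        self.out = nn.Linear(HIDDEN, VOCAB)

    def forward(self, tok, h):
        h = self.rnn(self.emb(tok), h)
        return self.out(h), h

def sentence_reward(sample, reference):
    # Placeholder reward: unigram overlap with the reference.
    # The paper uses a sentence-level GEC metric (GLEU) instead.
    ref = set(reference)
    return sum(t in ref for t in sample) / max(len(sample), 1)

def reinforce_loss(decoder, reference, baseline=0.5):
    """Sample one output; return -(reward - baseline) * log p(sample)."""
    h = torch.zeros(1, HIDDEN)
    tok = torch.zeros(1, dtype=torch.long)   # assume token 0 is BOS
    log_prob, sample = 0.0, []
    for _ in range(MAX_LEN):
        logits, h = decoder(tok, h)
        dist = torch.distributions.Categorical(logits=logits)
        tok = dist.sample()
        log_prob = log_prob + dist.log_prob(tok).sum()
        sample.append(tok.item())
    reward = sentence_reward(sample, reference)
    return -(reward - baseline) * log_prob

decoder = ToyDecoder()
opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
loss = reinforce_loss(decoder, reference=[5, 7, 9, 11])
opt.zero_grad()
loss.backward()
opt.step()
```

Because the reward is computed on the complete sampled sentence, the gradient signal reflects the task metric directly rather than per-token likelihood, which is how this style of training sidesteps the exposure bias of MLE.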
Anthology ID: I17-2062
Volume: Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
Month: November
Year: 2017
Address: Taipei, Taiwan
Editors: Greg Kondrak, Taro Watanabe
Venue: IJCNLP
Publisher: Asian Federation of Natural Language Processing
Pages: 366–372
URL: https://aclanthology.org/I17-2062
Cite (ACL): Keisuke Sakaguchi, Matt Post, and Benjamin Van Durme. 2017. Grammatical Error Correction with Neural Reinforcement Learning. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 366–372, Taipei, Taiwan. Asian Federation of Natural Language Processing.
Cite (Informal): Grammatical Error Correction with Neural Reinforcement Learning (Sakaguchi et al., IJCNLP 2017)
PDF: https://aclanthology.org/I17-2062.pdf
Data: FCE, JFLEG