Auxiliary Objectives for Neural Error Detection Models

Marek Rei, Helen Yannakoudakis

Abstract
We investigate the utility of different auxiliary objectives and training strategies within a neural sequence labeling approach to error detection in learner writing. Auxiliary costs provide the model with additional linguistic information, allowing it to learn general-purpose compositional features that can then be exploited for other objectives. Our experiments show that a joint learning approach trained with parallel labels on in-domain data improves performance over the previous best error detection system. While the resulting model has the same number of parameters, the additional objectives allow it to be optimised more efficiently and achieve better performance.
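As a rough illustration of the joint learning setup the abstract describes, the sketch below pairs a main error-detection head with an auxiliary labelling head over a shared encoder, so both objectives backpropagate through the same compositional features. This is a hypothetical reconstruction, not the authors' implementation: the PyTorch framework, the BiLSTM encoder, the auxiliary label set size, and the aux_weight mixing coefficient are all illustrative assumptions.

    # Minimal sketch (assumed PyTorch; not the paper's code) of sequence
    # labelling with an auxiliary objective on parallel label sequences.
    import torch
    import torch.nn as nn

    class MultiTaskTagger(nn.Module):
        def __init__(self, vocab_size, emb_dim, hidden_dim,
                     n_error_labels, n_aux_labels):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            # Shared bidirectional encoder: both objectives update it,
            # so auxiliary labels shape the shared representations.
            self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                                   bidirectional=True)
            self.error_head = nn.Linear(2 * hidden_dim, n_error_labels)
            self.aux_head = nn.Linear(2 * hidden_dim, n_aux_labels)

        def forward(self, token_ids):
            hidden, _ = self.encoder(self.embed(token_ids))
            return self.error_head(hidden), self.aux_head(hidden)

    model = MultiTaskTagger(vocab_size=10000, emb_dim=100, hidden_dim=128,
                            n_error_labels=2, n_aux_labels=17)
    loss_fn = nn.CrossEntropyLoss()
    aux_weight = 0.1  # hypothetical weight on the auxiliary cost

    # Toy batch: 4 sentences of length 10 with parallel label sequences.
    tokens = torch.randint(0, 10000, (4, 10))
    error_labels = torch.randint(0, 2, (4, 10))
    aux_labels = torch.randint(0, 17, (4, 10))

    error_logits, aux_logits = model(tokens)
    loss = (loss_fn(error_logits.reshape(-1, 2), error_labels.reshape(-1))
            + aux_weight * loss_fn(aux_logits.reshape(-1, 17),
                                   aux_labels.reshape(-1)))
    loss.backward()

Note that the two heads add only a small number of parameters relative to the shared encoder, which is consistent with the abstract's point that the gains come from better optimisation of the shared features rather than from added capacity.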
Anthology ID: W17-5004
Volume: Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications
Month: September
Year: 2017
Address: Copenhagen, Denmark
Editors: Joel Tetreault, Jill Burstein, Claudia Leacock, Helen Yannakoudakis
Venue: BEA
SIG: SIGEDU
Publisher: Association for Computational Linguistics
Pages: 33–43
URL: https://aclanthology.org/W17-5004
DOI: 10.18653/v1/W17-5004
Cite (ACL):
Marek Rei and Helen Yannakoudakis. 2017. Auxiliary Objectives for Neural Error Detection Models. In Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications, pages 33–43, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal):
Auxiliary Objectives for Neural Error Detection Models (Rei & Yannakoudakis, BEA 2017)
PDF: https://aclanthology.org/W17-5004.pdf
Data: CoNLL, CoNLL 2003, FCE