The Effect of Adding Authorship Knowledge in Automated Text Scoring

Meng Zhang, Xie Chen, Ronan Cummins, Øistein E. Andersen, Ted Briscoe


Abstract
Some language exams comprise multiple writing tasks. When a learner writes several texts in the same exam, the quality of those texts tends to be similar, yet existing automated text scoring (ATS) systems do not explicitly model this similarity. In this paper, we suggest that the other texts a learner wrote in the same exam can serve as extra references in an ATS system. We propose several approaches to fusing information from multiple tasks and feeding this authorship knowledge into our ATS model, evaluated on six different datasets. We show that this can improve model performance at a global level.
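
As a rough illustration of the idea (a minimal sketch, not the authors' implementation), one simple way to fuse authorship knowledge is to augment each script's feature vector with the mean feature vector of the other scripts the same learner wrote in the same exam, then train a standard regressor on the fused representation. All names below (fuse_with_siblings, the toy data) are hypothetical.

import numpy as np
from sklearn.linear_model import Ridge

def fuse_with_siblings(features, author_ids):
    """Concatenate each script's features with the mean features of the
    same author's other scripts (zeros if the learner wrote only one)."""
    features = np.asarray(features, dtype=float)
    fused = []
    for i, author in enumerate(author_ids):
        siblings = [features[j] for j, a in enumerate(author_ids)
                    if a == author and j != i]
        context = (np.mean(siblings, axis=0) if siblings
                   else np.zeros(features.shape[1]))
        fused.append(np.concatenate([features[i], context]))
    return np.stack(fused)

# Toy usage: four scripts by two learners, three features each.
X = np.array([[0.2, 1.0, 3.0],
              [0.3, 1.1, 2.8],
              [0.9, 0.2, 1.0],
              [0.8, 0.3, 1.2]])
authors = ["A", "A", "B", "B"]
y = np.array([4.0, 4.2, 2.1, 2.3])   # gold scores

model = Ridge(alpha=1.0).fit(fuse_with_siblings(X, authors), y)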
Anthology ID:
W18-0536
Volume:
Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications
Month:
June
Year:
2018
Address:
New Orleans, Louisiana
Editors:
Joel Tetreault, Jill Burstein, Ekaterina Kochmar, Claudia Leacock, Helen Yannakoudakis
Venue:
BEA
SIG:
SIGEDU
Publisher:
Association for Computational Linguistics
Pages:
305–314
URL:
https://aclanthology.org/W18-0536
DOI:
10.18653/v1/W18-0536
Cite (ACL):
Meng Zhang, Xie Chen, Ronan Cummins, Øistein E. Andersen, and Ted Briscoe. 2018. The Effect of Adding Authorship Knowledge in Automated Text Scoring. In Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 305–314, New Orleans, Louisiana. Association for Computational Linguistics.
Cite (Informal):
The Effect of Adding Authorship Knowledge in Automated Text Scoring (Zhang et al., BEA 2018)
PDF:
https://aclanthology.org/W18-0536.pdf
Data
FCE