Automated Scoring: Beyond Natural Language Processing

Nitin Madnani, Aoife Cahill


Abstract
In this position paper, we argue that building operational automated scoring systems is a task with disciplinary complexity above and beyond that of standard competitive shared tasks, which usually involve applying the latest machine learning techniques to publicly available data in order to obtain the best accuracy. Automated scoring systems warrant significant cross-disciplinary collaboration, of which natural language processing and machine learning are just two of many important components. Such systems have multiple stakeholders with different but valid perspectives that can often be at odds with one another. Our position is that it is essential for us as NLP researchers to understand and incorporate these perspectives in our research, and to work towards mutually satisfactory solutions, in order to build automated scoring systems that are accurate, fair, unbiased, and useful.
Anthology ID: C18-1094
Volume: Proceedings of the 27th International Conference on Computational Linguistics
Month: August
Year: 2018
Address: Santa Fe, New Mexico, USA
Editors: Emily M. Bender, Leon Derczynski, Pierre Isabelle
Venue: COLING
Publisher: Association for Computational Linguistics
Pages: 1099–1109
URL: https://aclanthology.org/C18-1094
Cite (ACL): Nitin Madnani and Aoife Cahill. 2018. Automated Scoring: Beyond Natural Language Processing. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1099–1109, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Cite (Informal): Automated Scoring: Beyond Natural Language Processing (Madnani & Cahill, COLING 2018)
PDF: https://aclanthology.org/C18-1094.pdf