Building Better Open-Source Tools to Support Fairness in Automated Scoring

Nitin Madnani, Anastassia Loukina, Alina von Davier, Jill Burstein, Aoife Cahill


Abstract
Automated scoring of written and spoken responses is an NLP application that can significantly impact lives, especially when deployed as part of high-stakes tests such as the GRE® and the TOEFL®. Ethical considerations require that automated scoring algorithms treat all test-takers fairly. The educational measurement community has conducted significant research on fairness in assessments, and automated scoring systems must incorporate its recommendations. The best way to do so is to provide NLP researchers with automated, non-proprietary tools that directly incorporate these recommendations and generate the analyses needed to identify and resolve biases in their scoring systems. In this paper, we attempt to provide such a solution.
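
The abstract refers to fairness analyses recommended by the educational measurement literature without naming them. As an illustration only, and not the paper's actual tool, the following minimal Python sketch computes one widely used subgroup statistic: a standardized mean difference (SMD) between machine and human scores, computed per demographic subgroup. The function name, column names, and the particular SMD variant used here (group mean of machine-minus-human score differences, divided by the overall standard deviation of the human scores) are all assumptions made for this example.

# A minimal sketch (not the paper's tool) of one subgroup fairness
# analysis from the educational-measurement literature: the standardized
# mean difference (SMD) between machine and human scores per subgroup.
# All column names and the SMD variant below are hypothetical choices.
import pandas as pd

def smd_by_subgroup(df: pd.DataFrame,
                    human_col: str = "human_score",
                    machine_col: str = "machine_score",
                    group_col: str = "subgroup") -> pd.Series:
    """Return the SMD of (machine - human) scores for each subgroup.

    SMD = mean(machine - human, within group) / sd(human scores, overall).
    """
    diff = df[machine_col] - df[human_col]
    overall_sd = df[human_col].std(ddof=1)
    return diff.groupby(df[group_col]).mean() / overall_sd

# Toy usage: subgroup "C" is under-scored by the machine on average.
scores = pd.DataFrame({
    "human_score":   [3, 4, 2, 5, 3, 4],
    "machine_score": [3, 4, 2, 5, 2, 3],
    "subgroup":      ["A", "A", "B", "B", "C", "C"],
})
print(smd_by_subgroup(scores))

A subgroup SMD far from zero flags a group the model may systematically over- or under-score relative to human raters, which is the kind of bias signal the abstract argues such tools should surface automatically.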
Anthology ID:
W17-1605
Volume:
Proceedings of the First ACL Workshop on Ethics in Natural Language Processing
Month:
April
Year:
2017
Address:
Valencia, Spain
Editors:
Dirk Hovy, Shannon Spruit, Margaret Mitchell, Emily M. Bender, Michael Strube, Hanna Wallach
Venue:
EthNLP
Publisher:
Association for Computational Linguistics
Pages:
41–52
URL:
https://aclanthology.org/W17-1605
DOI:
10.18653/v1/W17-1605
Cite (ACL):
Nitin Madnani, Anastassia Loukina, Alina von Davier, Jill Burstein, and Aoife Cahill. 2017. Building Better Open-Source Tools to Support Fairness in Automated Scoring. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 41–52, Valencia, Spain. Association for Computational Linguistics.
Cite (Informal):
Building Better Open-Source Tools to Support Fairness in Automated Scoring (Madnani et al., EthNLP 2017)
PDF:
https://aclanthology.org/W17-1605.pdf