%0 Conference Proceedings
%T Automated Essay Scoring in the Presence of Biased Ratings
%A Amorim, Evelin
%A Cançado, Marcia
%A Veloso, Adriano
%Y Walker, Marilyn
%Y Ji, Heng
%Y Stent, Amanda
%S Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)
%D 2018
%8 June
%I Association for Computational Linguistics
%C New Orleans, Louisiana
%F amorim-etal-2018-automated
%X Studies in Social Sciences have revealed that when people evaluate someone else, their evaluations often reflect their biases. As a result, rater bias may introduce highly subjective factors that make their evaluations inaccurate. This may affect automated essay scoring models in many ways, as these models are typically designed to model (potentially biased) essay raters. While there is sizeable literature on rater effects in general settings, it remains unknown how rater bias affects automated essay scoring. To this end, we present a new annotated corpus containing essays and their respective scores. Different from existing corpora, our corpus also contains comments provided by the raters in order to ground their scores. We present features to quantify rater bias based on their comments, and we found that rater bias plays an important role in automated essay scoring. We investigated the extent to which rater bias affects models based on hand-crafted features. Finally, we propose to rectify the training set by removing essays associated with potentially biased scores while learning the scoring model.
%R 10.18653/v1/N18-1021
%U https://aclanthology.org/N18-1021
%U https://doi.org/10.18653/v1/N18-1021
%P 229-237