The Great Misalignment Problem in Human Evaluation of NLP Methods

Mika Hämäläinen, Khalid Alnajjar


Abstract
We outline the Great Misalignment Problem in natural language processing research: the problem definition is not in line with the proposed method, and the human evaluation is in line with neither the definition nor the method. We study this misalignment problem by surveying 10 randomly sampled papers published in ACL 2020 that report results with human evaluation. Our results show that only one paper was fully aligned in terms of problem definition, method, and evaluation, and only two papers presented a human evaluation that was in line with what was modeled in the method. These results highlight that the Great Misalignment Problem is a major one and that it affects the validity and reproducibility of results obtained by human evaluation.
Anthology ID:
2021.humeval-1.8
Volume:
Proceedings of the Workshop on Human Evaluation of NLP Systems (HumEval)
Month:
April
Year:
2021
Address:
Online
Editors:
Anya Belz, Shubham Agarwal, Yvette Graham, Ehud Reiter, Anastasia Shimorina
Venue:
HumEval
Publisher:
Association for Computational Linguistics
Pages:
69–74
URL:
https://aclanthology.org/2021.humeval-1.8
Cite (ACL):
Mika Hämäläinen and Khalid Alnajjar. 2021. The Great Misalignment Problem in Human Evaluation of NLP Methods. In Proceedings of the Workshop on Human Evaluation of NLP Systems (HumEval), pages 69–74, Online. Association for Computational Linguistics.
Cite (Informal):
The Great Misalignment Problem in Human Evaluation of NLP Methods (Hämäläinen & Alnajjar, HumEval 2021)
PDF:
https://aclanthology.org/2021.humeval-1.8.pdf