%0 Conference Proceedings
%T HellaSwag: Can a Machine Really Finish Your Sentence?
%A Zellers, Rowan
%A Holtzman, Ari
%A Bisk, Yonatan
%A Farhadi, Ali
%A Choi, Yejin
%Y Korhonen, Anna
%Y Traum, David
%Y Màrquez, Lluís
%S Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
%D 2019
%8 July
%I Association for Computational Linguistics
%C Florence, Italy
%F zellers-etal-2019-hellaswag
%X Recent work by Zellers et al. (2018) introduced a new task of commonsense natural language inference: given an event description such as “A woman sits at a piano,” a machine must select the most likely followup: “She sets her fingers on the keys.” With the introduction of BERT, near human-level performance was reached. Does this mean that machines can perform human level commonsense inference? In this paper, we show that commonsense inference still proves difficult for even state-of-the-art models, by presenting HellaSwag, a new challenge dataset. Though its questions are trivial for humans (>95% accuracy), state-of-the-art models struggle (<48%). We achieve this via Adversarial Filtering (AF), a data collection paradigm wherein a series of discriminators iteratively select an adversarial set of machine-generated wrong answers. AF proves to be surprisingly robust. The key insight is to scale up the length and complexity of the dataset examples towards a critical ‘Goldilocks’ zone wherein generated text is ridiculous to humans, yet often misclassified by state-of-the-art models. Our construction of HellaSwag, and its resulting difficulty, sheds light on the inner workings of deep pretrained models. More broadly, it suggests a new path forward for NLP research, in which benchmarks co-evolve with the evolving state-of-the-art in an adversarial way, so as to present ever-harder challenges.
%R 10.18653/v1/P19-1472
%U https://aclanthology.org/P19-1472
%U https://doi.org/10.18653/v1/P19-1472
%P 4791-4800