Mxgra at SemEval-2020 Task 4: Common Sense Making with Next Token Prediction

Kris Collins, Max Grathwohl, Heba Ahmed


Abstract
In this paper, we explore solutions to a common sense making task in which a model must discern which of two sentences is against common sense. Our main approach used a pre-trained language model to calculate perplexity scores for input sentences, identifying the sentence that contained an unlikely sequence of tokens. We also tested word-vector distances, used to find semantic outliers within a sentence, and a Siamese network. By scoring the sequence of tokens in each input sentence with the pre-trained language model's perplexity, we achieved an accuracy of 75 percent.
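The perplexity-based selection rule described above can be sketched as follows. This is a minimal illustrative stand-in, not the paper's implementation: it uses a hand-coded toy bigram table (`BIGRAM_PROBS` and `FALLBACK` are invented for this sketch) in place of a real pre-trained language model, but the decision rule is the same: score both sentences and pick the one with higher perplexity as the one against common sense.

```python
import math

# Toy next-token probabilities standing in for a pre-trained LM.
# These values are illustrative only.
BIGRAM_PROBS = {
    ("he", "put"): 0.5, ("put", "a"): 0.6, ("a", "turkey"): 0.4,
    ("turkey", "into"): 0.5, ("into", "the"): 0.7,
    ("the", "fridge"): 0.4, ("the", "washer"): 0.01,
}
FALLBACK = 1e-4  # probability assigned to unseen bigrams

def perplexity(tokens):
    """Perplexity = exp of the average negative log-probability per token."""
    nll = 0.0
    for prev, cur in zip(tokens, tokens[1:]):
        nll -= math.log(BIGRAM_PROBS.get((prev, cur), FALLBACK))
    return math.exp(nll / (len(tokens) - 1))

def against_common_sense(sent_a, sent_b):
    """Return whichever sentence has higher perplexity (less plausible)."""
    a, b = sent_a.lower().split(), sent_b.lower().split()
    return sent_a if perplexity(a) > perplexity(b) else sent_b
```

With a real language model, the same rule applies: the sentence whose token sequence the model finds more surprising is flagged as the one against common sense.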
Anthology ID:
2020.semeval-1.71
Volume:
Proceedings of the Fourteenth Workshop on Semantic Evaluation
Month:
December
Year:
2020
Address:
Barcelona (online)
Editors:
Aurelie Herbelot, Xiaodan Zhu, Alexis Palmer, Nathan Schneider, Jonathan May, Ekaterina Shutova
Venue:
SemEval
SIG:
SIGLEX
Publisher:
International Committee for Computational Linguistics
Pages:
569–573
URL:
https://aclanthology.org/2020.semeval-1.71
DOI:
10.18653/v1/2020.semeval-1.71
Cite (ACL):
Kris Collins, Max Grathwohl, and Heba Ahmed. 2020. Mxgra at SemEval-2020 Task 4: Common Sense Making with Next Token Prediction. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 569–573, Barcelona (online). International Committee for Computational Linguistics.
Cite (Informal):
Mxgra at SemEval-2020 Task 4: Common Sense Making with Next Token Prediction (Collins et al., SemEval 2020)
PDF:
https://aclanthology.org/2020.semeval-1.71.pdf