oLMpics-On What Language Model Pre-training Captures

Alon Talmor, Yanai Elazar, Yoav Goldberg, Jonathan Berant


Abstract
Recent success of pre-trained language models (LMs) has spurred widespread interest in the language capabilities that they possess. However, efforts to understand whether LM representations are useful for symbolic reasoning tasks have been limited and scattered. In this work, we propose eight reasoning tasks, which conceptually require operations such as comparison, conjunction, and composition. A fundamental challenge is to understand whether the performance of an LM on a task should be attributed to the pre-trained representations or to the process of fine-tuning on the task data. To address this, we propose an evaluation protocol that includes both zero-shot evaluation (no fine-tuning) and comparison of the learning curve of a fine-tuned LM to the learning curves of multiple controls, which paints a rich picture of the LM's capabilities. Our main findings are that: (a) different LMs exhibit qualitatively different reasoning abilities, e.g., RoBERTa succeeds in reasoning tasks where BERT fails completely; (b) LMs do not reason in an abstract manner and are context-dependent, e.g., while RoBERTa can compare ages, it can do so only when the ages are in the typical range of human ages; (c) on half of our reasoning tasks all models fail completely. Our findings and infrastructure can help future work on designing new datasets, models, and objective functions for pre-training.
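To make the zero-shot part of the evaluation protocol concrete, below is a minimal sketch of multiple-choice probing with a masked LM, in the spirit of what the abstract describes (no fine-tuning; candidate answers scored at a mask position). The model name, the example age-comparison question, and the candidate answers are illustrative assumptions, not the paper's released data or code.

```python
# Minimal zero-shot multiple-choice probing sketch for a masked LM.
# Assumptions: any Hugging Face masked-LM checkpoint; each candidate answer
# corresponds to a single vocabulary token (as in a simple MC-MLM setup).
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "roberta-base"  # assumption: illustrative checkpoint choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

def zero_shot_choice(question: str, candidates: list[str]) -> str:
    """Score each single-token candidate at the mask position; return the highest-scoring one."""
    text = question.replace("[MASK]", tokenizer.mask_token)
    inputs = tokenizer(text, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]  # shape: (1, vocab_size)
    scores = {}
    for cand in candidates:
        # Leading space so BPE vocabularies (e.g., RoBERTa's) map the word to one token.
        cand_id = tokenizer.encode(" " + cand, add_special_tokens=False)[0]
        scores[cand] = logits[0, cand_id].item()
    return max(scores, key=scores.get)

# Illustrative age-comparison probe (the actual oLMpics examples differ).
print(zero_shot_choice(
    "A 41 year old person is [MASK] than a 24 year old person.",
    ["younger", "older"],
))
```

The fine-tuned and control learning curves discussed in the abstract would be obtained separately by training on increasing amounts of task data; the sketch above covers only the no-fine-tuning case.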
Anthology ID: 2020.tacl-1.48
Volume: Transactions of the Association for Computational Linguistics, Volume 8
Year: 2020
Address: Cambridge, MA
Editors: Mark Johnson, Brian Roark, Ani Nenkova
Venue: TACL
Publisher: MIT Press
Pages: 743–758
URL: https://aclanthology.org/2020.tacl-1.48
DOI: 10.1162/tacl_a_00342
Cite (ACL): Alon Talmor, Yanai Elazar, Yoav Goldberg, and Jonathan Berant. 2020. oLMpics-On What Language Model Pre-training Captures. Transactions of the Association for Computational Linguistics, 8:743–758.
Cite (Informal): oLMpics-On What Language Model Pre-training Captures (Talmor et al., TACL 2020)
PDF: https://aclanthology.org/2020.tacl-1.48.pdf