Pre-training Is (Almost) All You Need: An Application to Commonsense Reasoning

Alexandre Tamborrino, Nicola Pellicanò, Baptiste Pannier, Pascal Voitot, Louise Naudin


Abstract
Fine-tuning of pre-trained transformer models has become the standard approach for solving common NLP tasks. Most of the existing approaches rely on a randomly initialized classifier on top of such networks. We argue that this fine-tuning procedure is sub-optimal as the pre-trained model has no prior on the specific classifier labels, while it might have already learned an intrinsic textual representation of the task. In this paper, we introduce a new scoring method that casts a plausibility ranking task in a full-text format and leverages the masked language modeling head tuned during the pre-training phase. We study commonsense reasoning tasks where the model must rank a set of hypotheses given a premise, focusing on the COPA, Swag, HellaSwag and CommonsenseQA datasets. By exploiting our scoring method without fine-tuning, we are able to produce strong baselines (e.g. 80% test accuracy on COPA) that are comparable to supervised approaches. Moreover, when fine-tuning directly on the proposed scoring function, we show that our method provides a much more stable training phase across random restarts (e.g. a 10x reduction in standard deviation on COPA test accuracy) and requires less annotated data than the standard classifier approach to reach equivalent performance.
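The abstract describes ranking hypotheses with the pre-trained masked language modeling head rather than a new classifier. Below is a minimal illustrative sketch of that general idea (not the paper's exact scoring function): each candidate is scored by a masked-LM pseudo-log-likelihood of the full-text premise+hypothesis sequence, using the Hugging Face `transformers` library and a generic `roberta-large` checkpoint as assumed stand-ins.

```python
# Illustrative sketch only: rank COPA-style hypotheses by a masked-LM
# pseudo-log-likelihood over the concatenated premise + hypothesis text.
# This approximates the spirit of the abstract; it is not the authors' code.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForMaskedLM.from_pretrained("roberta-large")
model.eval()

def pseudo_log_likelihood(text: str) -> float:
    """Average log-probability of each token when it is masked in turn."""
    ids = tokenizer(text, return_tensors="pt")["input_ids"][0]
    total, count = 0.0, 0
    for i in range(1, len(ids) - 1):  # skip the special BOS/EOS tokens
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
        count += 1
    return total / max(count, 1)

# Hypothetical COPA-style example (cause question, "because" connector).
premise = "The man broke his toe."
hypotheses = ["He got a hole in his sock.", "He dropped a hammer on his foot."]
scores = [pseudo_log_likelihood(f"{premise} because {h}") for h in hypotheses]
print(hypotheses[max(range(len(scores)), key=scores.__getitem__)])
```

Because scoring uses only the pre-trained MLM head, this kind of ranking can be run with no task-specific fine-tuning, which is what enables the zero-shot baselines the abstract reports.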
Anthology ID:
2020.acl-main.357
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
3878–3887
URL:
https://aclanthology.org/2020.acl-main.357
DOI:
10.18653/v1/2020.acl-main.357
Cite (ACL):
Alexandre Tamborrino, Nicola Pellicanò, Baptiste Pannier, Pascal Voitot, and Louise Naudin. 2020. Pre-training Is (Almost) All You Need: An Application to Commonsense Reasoning. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3878–3887, Online. Association for Computational Linguistics.
Cite (Informal):
Pre-training Is (Almost) All You Need: An Application to Commonsense Reasoning (Tamborrino et al., ACL 2020)
PDF:
https://aclanthology.org/2020.acl-main.357.pdf
Video:
http://slideslive.com/38929054
Data:
COPA, CommonsenseQA, HellaSwag, SWAG, WSC