To Pretrain or Not to Pretrain: Examining the Benefits of Pretraining on Resource Rich Tasks

Sinong Wang, Madian Khabsa, Hao Ma


Abstract
Pretraining NLP models with variants of Masked Language Model (MLM) objectives has recently led to significant improvements on many tasks. This paper examines the benefits of pretrained models as a function of the number of training samples used in the downstream task. On several text classification tasks, we show that as the number of training examples grows into the millions, the accuracy gap between finetuning a BERT-based model and training a vanilla LSTM from scratch narrows to within 1%. Our findings indicate that MLM-based models might reach a point of diminishing returns as the supervised data size increases significantly.
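Below is a minimal sketch, in PyTorch with the Hugging Face transformers library, of the kind of comparison the abstract describes: finetune a pretrained BERT-style classifier and train a small LSTM from scratch on the same labelled data, then compare accuracy. This is not the authors' code; the model name (bert-base-uncased), hyperparameters, and toy dataset are illustrative assumptions, and the paper runs this comparison while growing the real training set into the millions.

```python
# Illustrative sketch only (not the authors' released code): compare a
# finetuned BERT-style classifier against a vanilla LSTM trained from scratch.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Toy labelled data standing in for a large text classification set.
texts = ["great product", "terrible service", "loved it", "would not buy again"] * 32
labels = torch.tensor([1, 0, 1, 0] * 32)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")


class LSTMClassifier(nn.Module):
    """Vanilla LSTM baseline trained from scratch on the same token ids."""

    def __init__(self, vocab_size, num_labels=2, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, num_labels)

    def forward(self, input_ids, **unused):
        hidden, _ = self.lstm(self.embed(input_ids))
        return self.head(hidden[:, -1])  # simplified: classify from the last position


def train_and_eval(model, lr, steps=20):
    """Full-batch training on the toy data, then accuracy on the same data."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(steps):
        opt.zero_grad()
        out = model(input_ids=enc["input_ids"], attention_mask=enc["attention_mask"])
        logits = out.logits if hasattr(out, "logits") else out
        loss_fn(logits, labels).backward()
        opt.step()
    model.eval()
    with torch.no_grad():
        out = model(input_ids=enc["input_ids"], attention_mask=enc["attention_mask"])
        logits = out.logits if hasattr(out, "logits") else out
    return (logits.argmax(dim=-1) == labels).float().mean().item()


bert = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
lstm = LSTMClassifier(vocab_size=tokenizer.vocab_size)
print("finetuned BERT accuracy:", train_and_eval(bert, lr=2e-5))
print("from-scratch LSTM accuracy:", train_and_eval(lstm, lr=1e-3))
```

In the paper's setting, the quantity of interest is how the gap between these two accuracies shrinks as the amount of real supervised training data grows.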
Anthology ID: 2020.acl-main.200
Volume: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month: July
Year: 2020
Address: Online
Editors: Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 2209–2213
URL: https://aclanthology.org/2020.acl-main.200
DOI: 10.18653/v1/2020.acl-main.200
Cite (ACL): Sinong Wang, Madian Khabsa, and Hao Ma. 2020. To Pretrain or Not to Pretrain: Examining the Benefits of Pretraining on Resource Rich Tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2209–2213, Online. Association for Computational Linguistics.
Cite (Informal): To Pretrain or Not to Pretrain: Examining the Benefits of Pretraining on Resource Rich Tasks (Wang et al., ACL 2020)
PDF: https://aclanthology.org/2020.acl-main.200.pdf
Video: http://slideslive.com/38929086