Extremely Small BERT Models from Mixed-Vocabulary Training

Sanqiang Zhao, Raghav Gupta, Yang Song, Denny Zhou


Abstract
Pretrained language models like BERT have achieved strong results on NLP tasks, but their memory footprint makes them impractical on resource-limited devices. A large fraction of this footprint comes from the input embeddings, owing to the large input vocabulary and embedding dimensions. Existing knowledge distillation methods for model compression cannot be applied directly to train student models with reduced vocabulary sizes. To address this, we propose a distillation method that aligns the teacher and student embeddings via mixed-vocabulary training. Our method compresses BERT-LARGE into a task-agnostic model with a smaller vocabulary and hidden dimensions that is an order of magnitude smaller than other distilled BERT models and offers a better size-accuracy trade-off on language understanding benchmarks as well as a practical dialogue task.
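The abstract only outlines the idea, so the snippet below is a minimal sketch of what mixing two WordPiece vocabularies at tokenization time could look like, not the authors' exact procedure. Per word, it flips a coin (the p_student rate, an illustrative assumption) to decide whether the teacher's or the student's tokenizer is used, and records which embedding table each piece should index during distillation. The teacher_tokenize and student_tokenize callables are placeholders for real WordPiece tokenizers.

import random

def mixed_vocab_tokenize(words, teacher_tokenize, student_tokenize,
                         p_student=0.5, seed=None):
    # Hypothetical sketch: for each word, pick either the teacher's or the
    # (smaller) student's wordpiece vocabulary, so both embedding tables see
    # the same contexts during training.
    rng = random.Random(seed)
    tokens, vocab_ids = [], []
    for word in words:
        use_student = rng.random() < p_student
        pieces = student_tokenize(word) if use_student else teacher_tokenize(word)
        tokens.extend(pieces)
        # Remember which embedding table each piece should be looked up in
        # (0 = teacher vocabulary, 1 = student vocabulary).
        vocab_ids.extend([1 if use_student else 0] * len(pieces))
    return tokens, vocab_ids

# Toy tokenizers standing in for real WordPiece models.
teacher = lambda w: [w]                                            # larger vocab: whole words
student = lambda w: [w[:3] + "@@", w[3:]] if len(w) > 3 else [w]   # smaller vocab: crude splits
print(mixed_vocab_tokenize("language models are impractical on devices".split(),
                           teacher, student, p_student=0.5, seed=0))

The per-word mixing rate and the vocabulary-id bookkeeping are assumptions chosen to make the example self-contained; the paper should be consulted for the actual alignment objective.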
Anthology ID:
2021.eacl-main.238
Volume:
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Month:
April
Year:
2021
Address:
Online
Editors:
Paola Merlo, Jörg Tiedemann, Reut Tsarfaty
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
2753–2759
URL:
https://aclanthology.org/2021.eacl-main.238
DOI:
10.18653/v1/2021.eacl-main.238
Cite (ACL):
Sanqiang Zhao, Raghav Gupta, Yang Song, and Denny Zhou. 2021. Extremely Small BERT Models from Mixed-Vocabulary Training. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2753–2759, Online. Association for Computational Linguistics.
Cite (Informal):
Extremely Small BERT Models from Mixed-Vocabulary Training (Zhao et al., EACL 2021)
PDF:
https://aclanthology.org/2021.eacl-main.238.pdf
Data
GLUE, MRPC, MultiNLI, SST, SST-2