What’s so special about BERT’s layers? A closer look at the NLP pipeline in monolingual and multilingual models

Wietse de Vries, Andreas van Cranenburgh, Malvina Nissim


Abstract
Peeking into the inner workings of BERT has shown that its layers resemble the classical NLP pipeline, with progressively more complex tasks concentrated in later layers. To investigate to what extent these results also hold for a language other than English, we probe a Dutch BERT-based model and the multilingual BERT model on Dutch NLP tasks. In addition, through a deeper analysis of part-of-speech tagging, we show that, even within a given task, information is spread over different parts of the network and the pipeline might not be as neat as it seems. Each layer has different specialisations, so it may be more useful to combine information from different layers rather than selecting a single one based on the best overall performance.
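As an illustration of the layer-wise probing setup the abstract describes, the sketch below extracts per-layer hidden states from BERTje with Hugging Face transformers. This is not the authors' code; the checkpoint name GroNLP/bert-base-dutch-cased is the publicly released BERTje model, and the example sentence is arbitrary. In a full probing experiment, a simple classifier (e.g. logistic regression over token vectors) would be trained on each layer's output to measure how well that layer encodes a task such as POS tagging; combining layers, as the abstract suggests, could be done with a learned weighted sum over these hidden states.

```python
# Minimal sketch (not the authors' code) of extracting per-layer
# representations for probing, using the public BERTje checkpoint.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "GroNLP/bert-base-dutch-cased"  # monolingual Dutch BERT (BERTje)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_hidden_states=True)
model.eval()

sentence = "De kat zit op de mat."  # arbitrary Dutch example sentence
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# hidden_states is a tuple: the embedding layer plus one tensor per
# transformer layer, each of shape (batch, sequence_length, hidden_size).
# A per-layer probing classifier would be trained on these vectors.
for layer_idx, layer in enumerate(outputs.hidden_states):
    print(f"layer {layer_idx}: {tuple(layer.shape)}")
```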
Anthology ID: 2020.findings-emnlp.389
Volume: Findings of the Association for Computational Linguistics: EMNLP 2020
Month: November
Year: 2020
Address: Online
Editors: Trevor Cohn, Yulan He, Yang Liu
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 4339–4350
URL: https://aclanthology.org/2020.findings-emnlp.389
DOI: 10.18653/v1/2020.findings-emnlp.389
Cite (ACL): Wietse de Vries, Andreas van Cranenburgh, and Malvina Nissim. 2020. What’s so special about BERT’s layers? A closer look at the NLP pipeline in monolingual and multilingual models. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4339–4350, Online. Association for Computational Linguistics.
Cite (Informal): What’s so special about BERT’s layers? A closer look at the NLP pipeline in monolingual and multilingual models (de Vries et al., Findings 2020)
PDF: https://aclanthology.org/2020.findings-emnlp.389.pdf
Code: wietsedv/bertje (plus additional community code)
Data: CoNLL 2002