What Makes My Model Perplexed? A Linguistic Investigation on Neural Language Models Perplexity

Alessio Miaschi, Dominique Brunato, Felice Dell’Orletta, Giulia Venturi


Abstract
This paper presents an investigation aimed at studying how the linguistic structure of a sentence affects the perplexity of two of the most popular Neural Language Models (NLMs), BERT and GPT-2. We first compare the sentence-level likelihood computed with BERT and GPT-2's perplexity, showing that the two metrics are correlated. In addition, we exploit linguistic features capturing a wide set of morpho-syntactic and syntactic phenomena, showing how they contribute to predicting the perplexity of the two NLMs.
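
As a rough illustration of the two metrics compared in the paper, the sketch below computes a GPT-2 sentence-level perplexity and a BERT pseudo-log-likelihood using the Hugging Face transformers library. The model checkpoints, the example sentence, and the token-by-token masking loop are illustrative assumptions, not the authors' exact experimental setup.

    # Minimal sketch (not the authors' code): GPT-2 perplexity and BERT
    # pseudo-log-likelihood for a single sentence, via Hugging Face transformers.
    import torch
    from transformers import (GPT2LMHeadModel, GPT2TokenizerFast,
                              BertForMaskedLM, BertTokenizerFast)

    def gpt2_perplexity(sentence, model, tokenizer):
        # Perplexity = exp(mean negative log-likelihood over the tokens).
        ids = tokenizer(sentence, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss  # mean cross-entropy
        return torch.exp(loss).item()

    def bert_pseudo_log_likelihood(sentence, model, tokenizer):
        # Mask each token in turn and sum the log-probability BERT assigns to it.
        ids = tokenizer(sentence, return_tensors="pt").input_ids[0]
        total = 0.0
        for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
            masked = ids.clone()
            masked[i] = tokenizer.mask_token_id
            with torch.no_grad():
                logits = model(masked.unsqueeze(0)).logits[0, i]
            total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
        return total

    gpt2 = GPT2LMHeadModel.from_pretrained("gpt2").eval()
    gpt2_tok = GPT2TokenizerFast.from_pretrained("gpt2")
    bert = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()
    bert_tok = BertTokenizerFast.from_pretrained("bert-base-uncased")

    sentence = "The cat sat on the mat."
    print("GPT-2 perplexity:", gpt2_perplexity(sentence, gpt2, gpt2_tok))
    print("BERT pseudo-log-likelihood:",
          bert_pseudo_log_likelihood(sentence, bert, bert_tok))

Note that GPT-2's perplexity is a normalized, length-insensitive quantity, whereas the BERT score above is an unnormalized sum of token log-probabilities; any correlation analysis between the two would need to account for sentence length.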
Anthology ID: 2021.deelio-1.5
Volume: Proceedings of Deep Learning Inside Out (DeeLIO): The 2nd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures
Month: June
Year: 2021
Address: Online
Editors: Eneko Agirre, Marianna Apidianaki, Ivan Vulić
Venue: DeeLIO
Publisher: Association for Computational Linguistics
Pages: 40–47
URL: https://aclanthology.org/2021.deelio-1.5
DOI: 10.18653/v1/2021.deelio-1.5
Cite (ACL): Alessio Miaschi, Dominique Brunato, Felice Dell’Orletta, and Giulia Venturi. 2021. What Makes My Model Perplexed? A Linguistic Investigation on Neural Language Models Perplexity. In Proceedings of Deep Learning Inside Out (DeeLIO): The 2nd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, pages 40–47, Online. Association for Computational Linguistics.
Cite (Informal): What Makes My Model Perplexed? A Linguistic Investigation on Neural Language Models Perplexity (Miaschi et al., DeeLIO 2021)
PDF: https://aclanthology.org/2021.deelio-1.5.pdf
Data: Universal Dependencies