Analysing Neural Language Models: Contextual Decomposition Reveals Default Reasoning in Number and Gender Assignment

Jaap Jumelet, Willem Zuidema, Dieuwke Hupkes


Abstract
Extensive research has recently shown that recurrent neural language models are able to process a wide range of grammatical phenomena. How these models are able to perform these remarkable feats so well, however, is still an open question. To gain more insight into what information LSTMs base their decisions on, we propose a generalisation of Contextual Decomposition (GCD). In particular, this setup enables us to accurately distil which part of a prediction stems from semantic heuristics, which part truly emanates from syntactic cues, and which part arises from the model biases themselves. We investigate this technique on tasks pertaining to syntactic agreement and co-reference resolution, and discover that the model strongly relies on a default reasoning effect to perform these tasks.
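The abstract's three-way split of a prediction (semantic cues, syntactic cues, model bias) can be illustrated with a toy linear example. This is a deliberate simplification, not the paper's actual GCD procedure: for a purely linear scoring layer the score decomposes exactly into the contribution of a chosen "in focus" phrase, the remaining context, and the bias term; GCD's contribution is extending this kind of additive attribution through the non-linear LSTM dynamics. All names and values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=6)          # hypothetical input feature vector
w = rng.normal(size=6)          # hypothetical learned weights of an output layer
b = 0.5                        # the layer's bias term ("model bias" contribution)

focus = [0, 1]                  # indices attributed to, e.g., the subject phrase
rest = [i for i in range(len(x)) if i not in focus]

beta = w[focus] @ x[focus]      # score mass stemming from the phrase in focus
gamma = w[rest] @ x[rest]       # score mass stemming from the remaining context
logit = w @ x + b               # the full, undecomposed model score

# For a linear layer the decomposition is exact:
# focus contribution + context contribution + bias recovers the full logit.
assert np.isclose(beta + gamma + b, logit)
```

The interesting (and hard) part, which GCD addresses, is preserving such an additive split through gates and non-linearities, so that the bias term's share of the score can be tracked separately rather than absorbed into the other parts.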
Anthology ID:
K19-1001
Volume:
Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)
Month:
November
Year:
2019
Address:
Hong Kong, China
Editors:
Mohit Bansal, Aline Villavicencio
Venue:
CoNLL
SIG:
SIGNLL
Publisher:
Association for Computational Linguistics
Pages:
1–11
URL:
https://aclanthology.org/K19-1001
DOI:
10.18653/v1/K19-1001
Cite (ACL):
Jaap Jumelet, Willem Zuidema, and Dieuwke Hupkes. 2019. Analysing Neural Language Models: Contextual Decomposition Reveals Default Reasoning in Number and Gender Assignment. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 1–11, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal):
Analysing Neural Language Models: Contextual Decomposition Reveals Default Reasoning in Number and Gender Assignment (Jumelet et al., CoNLL 2019)
PDF:
https://aclanthology.org/K19-1001.pdf
Attachment:
 K19-1001.Attachment.pdf
Code
 i-machine-think/diagnnose
Data
WinoBias