Using Priming to Uncover the Organization of Syntactic Representations in Neural Language Models

Grusha Prasad, Marten van Schijndel, Tal Linzen


Abstract
Neural language models (LMs) perform well on tasks that require sensitivity to syntactic structure. Drawing on the syntactic priming paradigm from psycholinguistics, we propose a novel technique to analyze the representations that enable such success. By establishing a gradient similarity metric between structures, this technique allows us to reconstruct the organization of the LMs’ syntactic representational space. We use this technique to demonstrate that LSTM LMs’ representations of different types of sentences with relative clauses are organized hierarchically in a linguistically interpretable manner, suggesting that the LMs track abstract properties of the sentence.
Anthology ID: K19-1007
Volume: Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)
Month: November
Year: 2019
Address: Hong Kong, China
Editors: Mohit Bansal, Aline Villavicencio
Venue: CoNLL
SIG: SIGNLL
Publisher: Association for Computational Linguistics
Pages: 66–76
URL: https://aclanthology.org/K19-1007
DOI: 10.18653/v1/K19-1007
Cite (ACL): Grusha Prasad, Marten van Schijndel, and Tal Linzen. 2019. Using Priming to Uncover the Organization of Syntactic Representations in Neural Language Models. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 66–76, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal): Using Priming to Uncover the Organization of Syntactic Representations in Neural Language Models (Prasad et al., CoNLL 2019)
PDF: https://aclanthology.org/K19-1007.pdf
Code: grushaprasad/RNN-Priming
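The repository above contains the authors' full pipeline. As a rough illustration of the priming-as-adaptation idea described in the abstract, the following is a minimal sketch, not the authors' implementation: it assumes a toy word-level LSTM language model in PyTorch, adapts a copy of the model on "prime" sentences, and treats the resulting surprisal reduction on "target" sentences as a gradient similarity between their structures. The model definition, hyperparameters, and toy inputs are all illustrative assumptions, not values from the paper.

import copy
import torch
import torch.nn as nn

class LSTMLM(nn.Module):
    """Toy word-level LSTM language model (placeholder for a trained LM)."""
    def __init__(self, vocab_size, emb_dim=64, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, tokens):
        hidden, _ = self.lstm(self.embed(tokens))
        return self.out(hidden)

def mean_surprisal(model, token_ids):
    """Average per-word surprisal (negative log2-probability) of sentences.

    token_ids: LongTensor of shape (batch, seq_len).
    """
    model.eval()
    with torch.no_grad():
        logits = model(token_ids[:, :-1])
        logp = torch.log_softmax(logits, dim=-1)
        targets = token_ids[:, 1:]
        nll = -logp.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    # Convert from nats to bits.
    return (nll.mean() / torch.log(torch.tensor(2.0))).item()

def adaptation_effect(model, prime_ids, target_ids, lr=1e-3, epochs=3):
    """Surprisal reduction on target sentences after adapting a copy of the
    model to prime sentences; a larger reduction is read as greater
    similarity between the prime and target structures."""
    before = mean_surprisal(model, target_ids)
    adapted = copy.deepcopy(model)  # leave the original LM untouched
    opt = torch.optim.SGD(adapted.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    adapted.train()
    for _ in range(epochs):
        opt.zero_grad()
        logits = adapted(prime_ids[:, :-1])
        loss = loss_fn(logits.reshape(-1, logits.size(-1)),
                       prime_ids[:, 1:].reshape(-1))
        loss.backward()
        opt.step()
    after = mean_surprisal(adapted, target_ids)
    return before - after

# Toy usage with random token ids standing in for tokenized sentences.
vocab_size = 1000
model = LSTMLM(vocab_size)
prime = torch.randint(0, vocab_size, (8, 12))   # "prime" sentences
target = torch.randint(0, vocab_size, (8, 12))  # "target" sentences
print(adaptation_effect(model, prime, target))

In the paper, this kind of adaptation effect is measured between pairs of sentence types (e.g., different relative clause constructions), and the resulting similarity structure is used to reconstruct the organization of the LM's syntactic representational space; the sketch above reduces that design to a single prime-target comparison.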