Extracting Syntactic Trees from Transformer Encoder Self-Attentions

David Mareček, Rudolf Rosa


Abstract
This is a work in progress on extracting sentence tree structures from the encoder's self-attention weights when translating into another language using the Transformer neural network architecture. We visualize the structures and discuss their characteristics with respect to existing syntactic theories and annotations.
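The abstract does not spell out the extraction procedure, so the following is only a rough sketch of the general idea, not the authors' exact algorithm: average the encoder's per-head self-attention matrices into a single weight matrix, then run a CKY-style dynamic program that picks the binary bracketing whose spans keep the most attention mass inside them. The span-scoring heuristic and the uniform averaging over heads are illustrative assumptions.

    import numpy as np

    def average_attention(attn_heads):
        """Average self-attention matrices over heads (and, if stacked, layers).
        attn_heads: array of shape (num_matrices, n, n), each row summing to 1."""
        return np.mean(attn_heads, axis=0)

    def span_score(attn, i, j):
        """Score span [i, j) by the attention mass that stays inside it,
        normalized by span length. (A hypothetical scoring choice, not
        necessarily the one used in the paper.)"""
        return attn[i:j, i:j].sum() / (j - i)

    def best_binary_tree(attn, i, j, memo=None):
        """CKY-style dynamic program: find the binary bracketing of [i, j)
        that maximizes the total score of all of its spans."""
        if memo is None:
            memo = {}
        if (i, j) in memo:
            return memo[(i, j)]
        if j - i == 1:
            result = (0.0, i)          # leaf: a single token index
        else:
            best = None
            for k in range(i + 1, j):  # try every split point
                l_score, l_tree = best_binary_tree(attn, i, k, memo)
                r_score, r_tree = best_binary_tree(attn, k, j, memo)
                total = l_score + r_score + span_score(attn, i, j)
                if best is None or total > best[0]:
                    best = (total, (l_tree, r_tree))
            result = best
        memo[(i, j)] = result
        return result

    # Toy usage: 8 attention heads over a 4-token sentence, random weights.
    rng = np.random.default_rng(0)
    attn = rng.random((8, 4, 4))
    attn /= attn.sum(axis=-1, keepdims=True)   # make each row a distribution
    score, tree = best_binary_tree(average_attention(attn), 0, 4)
    print(tree)   # nested tuples of token indices, e.g. ((0, 1), (2, 3))

The recursion itself is the standard O(n^3) CKY search; the only ingredient specific to this line of work is deriving span scores from attention, which the sketch replaces with a simple inside-mass heuristic.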
Anthology ID: W18-5444
Volume: Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
Month: November
Year: 2018
Address: Brussels, Belgium
Editors: Tal Linzen, Grzegorz Chrupała, Afra Alishahi
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 347–349
URL: https://aclanthology.org/W18-5444
DOI: 10.18653/v1/W18-5444
Cite (ACL): David Mareček and Rudolf Rosa. 2018. Extracting Syntactic Trees from Transformer Encoder Self-Attentions. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 347–349, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal): Extracting Syntactic Trees from Transformer Encoder Self-Attentions (Mareček & Rosa, EMNLP 2018)
PDF: https://aclanthology.org/W18-5444.pdf
Data: Universal Dependencies