Dissecting Contextual Word Embeddings: Architecture and Representation

Matthew E. Peters, Mark Neumann, Luke Zettlemoyer, Wen-tau Yih


Abstract
Contextual word representations derived from pre-trained bidirectional language models (biLMs) have recently been shown to provide significant improvements to the state of the art for a wide range of NLP tasks. However, many questions remain as to how and why these models are so effective. In this paper, we present a detailed empirical study of how the choice of neural architecture (e.g., LSTM, CNN, or self-attention) influences both end task accuracy and qualitative properties of the representations that are learned. We show there is a tradeoff between speed and accuracy, but all architectures learn high quality contextual representations that outperform word embeddings for four challenging NLP tasks. Additionally, all architectures learn representations that vary with network depth, from purely morphological at the word embedding layer, through local syntax in the lower contextual layers, to longer-range semantics such as coreference at the upper layers. Together, these results suggest that unsupervised biLMs, independent of architecture, are learning much more about the structure of language than previously appreciated.
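The layer-wise analysis summarized above is commonly carried out with linear probes: a separate classifier is trained on frozen activations from each biLM layer, so probe accuracy reflects what that layer encodes. The sketch below illustrates the idea only; the layer count, hidden dimension, tag set, and random activations are stand-in assumptions, not the paper's exact setup.

```python
# Minimal per-layer linear-probe sketch over frozen biLM activations.
# All sizes below are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn

num_layers = 3      # e.g. word embeddings + two contextual layers (assumption)
hidden_dim = 1024   # biLM representation size (assumption)
num_tags = 17       # e.g. a universal POS tag set (assumption)

# One linear probe per layer; only the probes are trained, the biLM stays frozen.
probes = nn.ModuleList([nn.Linear(hidden_dim, num_tags) for _ in range(num_layers)])
optimizer = torch.optim.Adam(probes.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Random tensors standing in for pre-extracted activations: (layer, tokens, dim).
activations = torch.randn(num_layers, 32, hidden_dim)
gold_tags = torch.randint(0, num_tags, (32,))

for layer, probe in enumerate(probes):
    optimizer.zero_grad()
    logits = probe(activations[layer])          # (tokens, num_tags)
    loss = loss_fn(logits, gold_tags)
    loss.backward()
    optimizer.step()
    acc = (logits.argmax(-1) == gold_tags).float().mean()
    print(f"layer {layer}: loss {loss.item():.3f}, acc {acc.item():.3f}")
```

Comparing probe accuracy across layers is what supports the claim that lower layers favor local syntax while upper layers capture longer-range semantics.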
Anthology ID: D18-1179
Volume: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month: October-November
Year: 2018
Address: Brussels, Belgium
Editors: Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue: EMNLP
SIG: SIGDAT
Publisher: Association for Computational Linguistics
Pages: 1499–1509
URL: https://aclanthology.org/D18-1179
DOI: 10.18653/v1/D18-1179
Cite (ACL): Matthew E. Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018. Dissecting Contextual Word Embeddings: Architecture and Representation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1499–1509, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal): Dissecting Contextual Word Embeddings: Architecture and Representation (Peters et al., EMNLP 2018)
PDF: https://aclanthology.org/D18-1179.pdf
Attachment: D18-1179.Attachment.pdf
Data: Billion Word Benchmark, CoNLL 2003, MultiNLI, OntoNotes 5.0, Penn Treebank