Portable, layer-wise task performance monitoring for NLP models

Tom Lippincott


Abstract
There is a long-standing interest in understanding the internal behavior of neural networks. Deep neural architectures for natural language processing (NLP) are often accompanied by explanations for their effectiveness, from general observations (e.g. RNNs can represent unbounded dependencies in a sequence) to specific arguments about linguistic phenomena (early layers encode lexical information, deeper layers syntactic). The recent ascendancy of DNNs is fueling efforts in the NLP community to explore these claims. Previous work has tended to focus on easily-accessible representations like word or sentence embeddings, with deeper structure requiring more ad hoc methods to extract and examine. In this work, we introduce Vivisect, a toolkit that aims to provide a general solution for broad and fine-grained monitoring across the major DNN frameworks, with minimal changes to existing research patterns.
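The core idea the abstract describes, attaching monitors to a model's internal layers without rewriting the model itself, can be sketched in plain NumPy. The sketch below is a hypothetical illustration, not Vivisect's actual API: the two-layer model, the `forward` function, and the `record` monitor are all invented here to show the hook-style pattern of capturing per-layer activations for later inspection.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny feed-forward "model" with random weights: a hypothetical
# stand-in for a real NLP model whose layers we want to observe.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 4))

def forward(x, monitors):
    """Run the model, passing each layer's output to every monitor.

    The monitors are the only change to the model's code path, mirroring
    the "minimal change to research patterns" goal described above.
    """
    h1 = np.tanh(x @ W1)
    for m in monitors:
        m("hidden1", h1)
    h2 = np.tanh(h1 @ W2)
    for m in monitors:
        m("hidden2", h2)
    return h2

# A monitor that records per-layer activation statistics; a real probe
# could instead train a classifier on each layer's activations.
captured = {}
def record(name, act):
    captured[name] = {"shape": act.shape,
                      "mean_abs": float(np.abs(act).mean())}

x = rng.normal(size=(32, 8))        # a batch of 32 inputs
forward(x, monitors=[record])
print(captured["hidden1"]["shape"])  # (32, 16)
```

In the actual toolkit, such monitoring is attached to models in the major DNN frameworks; this sketch only shows the general layer-wise capture pattern.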
Anthology ID:
W18-5445
Volume:
Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
Month:
November
Year:
2018
Address:
Brussels, Belgium
Editors:
Tal Linzen, Grzegorz Chrupała, Afra Alishahi
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
350–352
URL:
https://aclanthology.org/W18-5445
DOI:
10.18653/v1/W18-5445
Cite (ACL):
Tom Lippincott. 2018. Portable, layer-wise task performance monitoring for NLP models. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 350–352, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Portable, layer-wise task performance monitoring for NLP models (Lippincott, EMNLP 2018)
PDF:
https://aclanthology.org/W18-5445.pdf