Iterative Recursive Attention Model for Interpretable Sequence Classification

Martin Tutek, Jan Šnajder


Abstract
Natural language processing has greatly benefited from the introduction of the attention mechanism. However, standard attention models are of limited interpretability for tasks that involve a series of inference steps. We describe an iterative recursive attention model, which constructs incremental representations of input data by reusing the results of previously computed queries. We train our model on sentiment classification datasets and demonstrate its capacity to identify and combine different aspects of the input in an easily interpretable manner, while achieving performance close to the state of the art.
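The core idea in the abstract, building a representation over several attention steps where each step's query reuses the result of the previous step, can be illustrated with a short PyTorch sketch. This is a simplified, hypothetical illustration of a generic iterative attention loop, not the paper's exact architecture; the module name, the per-step query seeds, the GRU-cell update, and all dimensions are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IterativeAttention(nn.Module):
    """Hypothetical sketch: attend over encoder states for several steps,
    reusing the previous step's summary to form the next query."""

    def __init__(self, hidden_dim: int, num_steps: int = 3):
        super().__init__()
        self.num_steps = num_steps
        self.query_proj = nn.Linear(hidden_dim, hidden_dim)
        # One learned query seed per step (an assumption of this sketch).
        self.step_queries = nn.Parameter(torch.randn(num_steps, hidden_dim))
        # Recursive update of the running summary.
        self.combine = nn.GRUCell(hidden_dim, hidden_dim)

    def forward(self, states):
        # states: (batch, seq_len, hidden_dim) encoder outputs
        batch, _, hidden = states.shape
        summary = states.new_zeros(batch, hidden)
        step_weights = []  # kept for inspection: one distribution per step
        for t in range(self.num_steps):
            # The query mixes a step-specific seed with the previous summary,
            # so each step reuses the results of earlier queries.
            query = self.query_proj(summary) + self.step_queries[t]
            scores = torch.bmm(states, query.unsqueeze(2)).squeeze(2)
            weights = F.softmax(scores, dim=1)
            step_weights.append(weights)
            context = torch.bmm(weights.unsqueeze(1), states).squeeze(1)
            summary = self.combine(context, summary)
        return summary, step_weights

# Usage: a batch of 2 sequences of length 10 with 64-dim encoder states.
model = IterativeAttention(hidden_dim=64, num_steps=3)
summary, step_weights = model(torch.randn(2, 10, 64))
print(summary.shape, len(step_weights))  # torch.Size([2, 64]) 3
```

Returning the per-step attention weights alongside the final summary is what makes such a loop inspectable: each step's distribution over the input can be visualized separately, which is the interpretability property the abstract emphasizes.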
Anthology ID:
W18-5427
Volume:
Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
Month:
November
Year:
2018
Address:
Brussels, Belgium
Editors:
Tal Linzen, Grzegorz Chrupała, Afra Alishahi
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
249–257
URL:
https://aclanthology.org/W18-5427
DOI:
10.18653/v1/W18-5427
Cite (ACL):
Martin Tutek and Jan Šnajder. 2018. Iterative Recursive Attention Model for Interpretable Sequence Classification. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 249–257, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Iterative Recursive Attention Model for Interpretable Sequence Classification (Tutek & Šnajder, EMNLP 2018)
PDF:
https://aclanthology.org/W18-5427.pdf
Data
IMDb Movie Reviews
SST
SST-5