Learning Variational Word Masks to Improve the Interpretability of Neural Text Classifiers

Hanjie Chen, Yangfeng Ji


Abstract
To build an interpretable neural text classifier, most of the prior work has focused on designing inherently interpretable models or finding faithful explanations. A new line of work on improving model interpretability has just started, and many existing methods require either prior information or human annotations as additional inputs in training. To address this limitation, we propose the variational word mask (VMASK) method to automatically learn task-specific important words and reduce irrelevant information on classification, which ultimately improves the interpretability of model predictions. The proposed method is evaluated with three neural text classifiers (CNN, LSTM, and BERT) on seven benchmark text classification datasets. Experiments show the effectiveness of VMASK in improving both model prediction accuracy and interpretability.
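The abstract describes VMASK as a layer that learns per-word importance and filters out words irrelevant to the classification before the text reaches the classifier. Below is a minimal, illustrative sketch in PyTorch of such a learned word-mask layer; the module names, the Gumbel-softmax relaxation, and the entropy-style regularizer are assumptions made for illustration and do not reproduce the authors' released implementation in UVa-NLP/VMASK.

# Illustrative sketch of a learned word-mask layer (assumed design, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class WordMaskLayer(nn.Module):
    """Predicts a keep-probability for every token and scales its embedding by it."""

    def __init__(self, embed_dim: int, hidden_dim: int = 64, temperature: float = 0.5):
        super().__init__()
        # Small MLP mapping each word embedding to logits for (drop, keep).
        self.scorer = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2),
        )
        self.temperature = temperature

    def forward(self, embeddings: torch.Tensor):
        # embeddings: (batch, seq_len, embed_dim)
        logits = self.scorer(embeddings)                     # (batch, seq_len, 2)
        if self.training:
            # Differentiable (relaxed Bernoulli) sample via Gumbel-softmax.
            sample = F.gumbel_softmax(logits, tau=self.temperature, hard=False)
        else:
            sample = F.softmax(logits, dim=-1)
        keep_prob = sample[..., 1].unsqueeze(-1)             # (batch, seq_len, 1)
        masked = embeddings * keep_prob                      # down-weight irrelevant words
        return masked, keep_prob.squeeze(-1)

def mask_regularizer(keep_probs: torch.Tensor, beta: float = 1e-3) -> torch.Tensor:
    """Entropy-style penalty that discourages the trivial keep-everything mask."""
    p = keep_probs.clamp(1e-6, 1 - 1e-6)
    entropy = -(p * p.log() + (1 - p) * (1 - p).log())
    return -beta * entropy.mean()   # added to cross-entropy; rewards higher mask entropy

In this sketch, the downstream classifier (CNN, LSTM, or BERT encoder) would consume the masked embeddings, and the training objective would add mask_regularizer(keep_probs) to the usual cross-entropy loss, so the word masks are learned jointly with the classifier rather than fixed in advance.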
Anthology ID: 2020.emnlp-main.347
Original: 2020.emnlp-main.347v1
Version 2: 2020.emnlp-main.347v2
Volume: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Month: November
Year: 2020
Address: Online
Editors: Bonnie Webber, Trevor Cohn, Yulan He, Yang Liu
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 4236–4251
URL: https://aclanthology.org/2020.emnlp-main.347
DOI: 10.18653/v1/2020.emnlp-main.347
Cite (ACL): Hanjie Chen and Yangfeng Ji. 2020. Learning Variational Word Masks to Improve the Interpretability of Neural Text Classifiers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4236–4251, Online. Association for Computational Linguistics.
Cite (Informal): Learning Variational Word Masks to Improve the Interpretability of Neural Text Classifiers (Chen & Ji, EMNLP 2020)
PDF: https://aclanthology.org/2020.emnlp-main.347.pdf
Video: https://slideslive.com/38939149
Code: UVa-NLP/VMASK (plus additional community code)
Data: AG News, IMDb Movie Reviews, SST, SST-2