Incorporating Priors with Feature Attribution on Text Classification

Frederick Liu, Besim Avci


Abstract
Recently proposed feature attribution methods help users interpret the predictions of complex models. Our approach integrates feature attributions into the objective function to allow machine learning practitioners to incorporate priors in model building. To demonstrate the effectiveness of our technique, we apply it to two tasks: (1) mitigating unintended bias in text classifiers by neutralizing identity terms; (2) improving classifier performance in a scarce-data setting by forcing the model to focus on toxic terms. Our approach adds an L2 distance loss between feature attributions and task-specific prior values to the objective. Our experiments show that i) a classifier trained with our technique reduces undesired model biases without a tradeoff on the original task; ii) incorporating priors helps model performance in scarce-data settings.
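
The abstract describes adding an L2 penalty between feature attributions and task-specific prior values to the training objective. Below is a minimal sketch of how such a combined loss could look, using gradient-times-input as a stand-in attribution method (the paper may use a different attribution technique, such as path-integrated gradients); all function and variable names here are hypothetical illustrations, not the authors' implementation.

import torch
import torch.nn.functional as F

def attribution_prior_loss(model, inputs, labels, prior_values, prior_mask, lam=1.0):
    # Hypothetical sketch: cross-entropy plus an L2 penalty that pulls
    # per-token attributions toward task-specific prior values
    # (e.g. near zero for identity terms, high for toxic terms).
    inputs = inputs.clone().requires_grad_(True)   # embedded tokens: [batch, seq, dim]
    logits = model(inputs)                         # [batch, num_classes]
    ce = F.cross_entropy(logits, labels)

    # Attribution of each token toward the gold-class score,
    # approximated here by gradient * input.
    score = logits.gather(1, labels.unsqueeze(1)).sum()
    grads = torch.autograd.grad(score, inputs, create_graph=True)[0]
    attributions = (grads * inputs).sum(dim=-1)    # [batch, seq]

    # L2 distance to the prior, applied only where a prior is specified.
    attr_loss = ((attributions - prior_values) ** 2 * prior_mask).sum()

    return ce + lam * attr_loss

The lambda weight trades off fidelity to the prior against the original classification loss; in practice it would be tuned on validation data.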
Anthology ID:
P19-1631
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Editors:
Anna Korhonen, David Traum, Lluís Màrquez
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
6274–6283
URL:
https://aclanthology.org/P19-1631
DOI:
10.18653/v1/P19-1631
Cite (ACL):
Frederick Liu and Besim Avci. 2019. Incorporating Priors with Feature Attribution on Text Classification. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6274–6283, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Incorporating Priors with Feature Attribution on Text Classification (Liu & Avci, ACL 2019)
PDF:
https://aclanthology.org/P19-1631.pdf
Supplementary:
P19-1631.Supplementary.pdf