ExpBERT: Representation Engineering with Natural Language Explanations

Shikhar Murty, Pang Wei Koh, Percy Liang


Abstract
Suppose we want to specify the inductive bias that married couples typically go on honeymoons for the task of extracting pairs of spouses from text. In this paper, we allow model developers to specify these types of inductive biases as natural language explanations. We use BERT fine-tuned on MultiNLI to “interpret” these explanations with respect to the input sentence, producing explanation-guided representations of the input. Across three relation extraction tasks, our method, ExpBERT, matches a BERT baseline but with 3–20x less labeled data and improves on the baseline by 3–10 F1 points with the same amount of labeled data.
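The abstract describes a simple pipeline: feed each (input sentence, explanation) pair through an NLI-finetuned BERT and pool the resulting features into an explanation-guided representation that a downstream classifier consumes. Below is a minimal sketch of that idea using the HuggingFace transformers library; the checkpoint name, [CLS] pooling, and linear head are illustrative assumptions, not the paper's exact configuration.

    # Sketch of explanation-guided representations (assumptions: the MNLI
    # checkpoint, [CLS] pooling, and linear head are illustrative choices,
    # not necessarily the paper's exact setup).
    import torch
    from transformers import AutoModel, AutoTokenizer

    # An NLI-finetuned BERT serves as the explanation "interpreter";
    # "textattack/bert-base-uncased-MNLI" is an assumed public checkpoint.
    tokenizer = AutoTokenizer.from_pretrained("textattack/bert-base-uncased-MNLI")
    interpreter = AutoModel.from_pretrained("textattack/bert-base-uncased-MNLI")

    # Developer-written natural language explanations encoding inductive biases.
    explanations = [
        "married couples typically go on honeymoons",
        "spouses often live in the same house",
    ]

    def expbert_features(sentence: str) -> torch.Tensor:
        """Interpret each explanation against the input sentence and
        concatenate the pooled vectors into one representation."""
        feats = []
        with torch.no_grad():
            for exp in explanations:
                enc = tokenizer(sentence, exp, return_tensors="pt", truncation=True)
                out = interpreter(**enc)
                feats.append(out.last_hidden_state[:, 0])  # [CLS] vector
        return torch.cat(feats, dim=-1)  # (1, num_explanations * hidden_size)

    # A linear classifier over the concatenated features predicts the relation.
    classifier = torch.nn.Linear(
        len(explanations) * interpreter.config.hidden_size, 2
    )
    logits = classifier(expbert_features("John and Mary left for their honeymoon."))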
Anthology ID:
2020.acl-main.190
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
2106–2113
URL:
https://aclanthology.org/2020.acl-main.190
DOI:
10.18653/v1/2020.acl-main.190
Cite (ACL):
Shikhar Murty, Pang Wei Koh, and Percy Liang. 2020. ExpBERT: Representation Engineering with Natural Language Explanations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2106–2113, Online. Association for Computational Linguistics.
Cite (Informal):
ExpBERT: Representation Engineering with Natural Language Explanations (Murty et al., ACL 2020)
PDF:
https://aclanthology.org/2020.acl-main.190.pdf
Video:
http://slideslive.com/38928962
Code
MurtyShikhar/ExpBERT (+ additional community code)
Data
MultiNLI