Relation Extraction with Explanation

Hamed Shahbazi, Xiaoli Fern, Reza Ghaeini, Prasad Tadepalli


Abstract
Recent neural models for relation extraction with distant supervision alleviate the impact of irrelevant sentences in a bag by learning importance weights for the sentences. Efforts thus far have focused on improving extraction accuracy, but little is known about the explainability of these models. In this work we annotate a test set with ground-truth sentence-level explanations to evaluate the quality of explanations afforded by relation extraction models. We demonstrate that replacing the entity mentions in the sentences with their fine-grained entity types not only enhances extraction accuracy but also improves explanation quality. We also propose to automatically generate “distractor” sentences to augment the bags and train the model to ignore the distractors. Evaluations on the widely used FB-NYT dataset show that our methods achieve new state-of-the-art accuracy while improving model explainability.
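To make the entity-type substitution idea concrete, here is a minimal, hypothetical sketch (not the authors' code) of replacing head and tail entity mentions in a bag of sentences with fine-grained entity types; the sentences and type labels are invented for illustration only.

# Illustrative sketch only; sentences and type labels are hypothetical.
def mask_with_types(sentence, head, tail, head_type, tail_type):
    """Substitute the head/tail entity mentions with their fine-grained types."""
    return sentence.replace(head, head_type).replace(tail, tail_type)

bag = [
    "Barack Obama was born in Honolulu.",
    "Barack Obama visited Honolulu last year.",  # a less relevant sentence in the bag
]
masked_bag = [
    mask_with_types(s, "Barack Obama", "Honolulu", "/person/politician", "/location/city")
    for s in bag
]
print(masked_bag)
# ['/person/politician was born in /location/city.',
#  '/person/politician visited /location/city last year.']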
Anthology ID:
2020.acl-main.579
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
6488–6494
URL:
https://aclanthology.org/2020.acl-main.579
DOI:
10.18653/v1/2020.acl-main.579
Cite (ACL):
Hamed Shahbazi, Xiaoli Fern, Reza Ghaeini, and Prasad Tadepalli. 2020. Relation Extraction with Explanation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6488–6494, Online. Association for Computational Linguistics.
Cite (Informal):
Relation Extraction with Explanation (Shahbazi et al., ACL 2020)
PDF:
https://aclanthology.org/2020.acl-main.579.pdf
Video:
http://slideslive.com/38929354