Bridging Text and Knowledge with Multi-Prototype Embedding for Few-Shot Relational Triple Extraction

Haiyang Yu, Ningyu Zhang, Shumin Deng, Hongbin Ye, Wei Zhang, Huajun Chen


Abstract
Current supervised relational triple extraction approaches require huge amounts of labeled data and thus suffer from poor performance in few-shot settings. Humans, in contrast, can grasp new knowledge from only a few instances. To this end, we take a first step toward studying few-shot relational triple extraction, which has not been well understood. Unlike previous single-task few-shot problems, relational triple extraction is more challenging because entities and relations have implicit correlations. In this paper, we propose a novel multi-prototype embedding network that jointly extracts the components of relational triples, namely entity pairs and their corresponding relations. Specifically, we design a hybrid prototypical learning mechanism that bridges text and knowledge concerning both entities and relations, thereby injecting the implicit correlations between them. Additionally, we propose a prototype-aware regularization to learn more representative prototypes. Experimental results demonstrate that the proposed method improves the performance of few-shot triple extraction.
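The hybrid prototypical learning described in the abstract builds on the standard prototypical-network idea: represent each class (here, each relation type or entity role) by the mean of its support-set embeddings, then label queries by their nearest prototype. The sketch below illustrates only that generic step in PyTorch; the encoder, the text-knowledge bridging, and the prototype-aware regularization from the paper are not shown, and all tensor shapes and names are illustrative assumptions rather than the authors' implementation.

```python
import torch

def compute_prototypes(support_emb, support_labels, num_classes):
    """Average the support embeddings of each class to obtain one
    prototype vector per class (standard prototypical-network step)."""
    dim = support_emb.size(-1)
    prototypes = torch.zeros(num_classes, dim)
    for c in range(num_classes):
        prototypes[c] = support_emb[support_labels == c].mean(dim=0)
    return prototypes

def classify_by_prototype(query_emb, prototypes):
    """Assign each query to the class whose prototype is nearest
    in Euclidean distance."""
    dists = torch.cdist(query_emb, prototypes)  # [num_queries, num_classes]
    return dists.argmin(dim=-1)

# Toy usage: a 2-way 3-shot episode with 8-dimensional encoder outputs.
support_emb = torch.randn(6, 8)                  # 2 classes x 3 shots
support_labels = torch.tensor([0, 0, 0, 1, 1, 1])
query_emb = torch.randn(4, 8)
prototypes = compute_prototypes(support_emb, support_labels, num_classes=2)
print(classify_by_prototype(query_emb, prototypes))
```

In the paper's setting, separate prototypes would be maintained for entities and relations so that their interaction can be modeled jointly; the snippet above only demonstrates the shared distance-to-prototype classification principle.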
Anthology ID:
2020.coling-main.563
Volume:
Proceedings of the 28th International Conference on Computational Linguistics
Month:
December
Year:
2020
Address:
Barcelona, Spain (Online)
Editors:
Donia Scott, Nuria Bel, Chengqing Zong
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
6399–6410
URL:
https://aclanthology.org/2020.coling-main.563
DOI:
10.18653/v1/2020.coling-main.563
Cite (ACL):
Haiyang Yu, Ningyu Zhang, Shumin Deng, Hongbin Ye, Wei Zhang, and Huajun Chen. 2020. Bridging Text and Knowledge with Multi-Prototype Embedding for Few-Shot Relational Triple Extraction. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6399–6410, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Cite (Informal):
Bridging Text and Knowledge with Multi-Prototype Embedding for Few-Shot Relational Triple Extraction (Yu et al., COLING 2020)
PDF:
https://aclanthology.org/2020.coling-main.563.pdf
Data
FewRel