Zero-Shot Information Extraction as a Unified Text-to-Triple Translation

Chenguang Wang, Xiao Liu, Zui Chen, Haoyun Hong, Jie Tang, Dawn Song


Abstract
We cast a suite of information extraction tasks into a text-to-triple translation framework. Instead of solving each task by relying on task-specific datasets and models, we formalize the task as a translation between task-specific input text and output triples. By taking the task-specific input, we enable a task-agnostic translation by leveraging the latent knowledge that a pre-trained language model has about the task. We further demonstrate that a simple pre-training task of predicting which relational information corresponds to which input text is an effective way to produce task-specific outputs. This enables the zero-shot transfer of our framework to downstream tasks. We study the zero-shot performance of this framework on open information extraction (OIE2016, NYT, WEB, PENN), relation classification (FewRel and TACRED), and factual probe (Google-RE and T-REx). The model transfers non-trivially to most tasks and is often competitive with a fully supervised method without the need for any task-specific training. For instance, we significantly outperform the F1 score of supervised open information extraction without needing to use its training set.
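To make the unified output format concrete, the sketch below illustrates what a text-to-triple interface might look like for the three task families mentioned in the abstract. All names here (the Triple container, the example sentence, and the relation labels) are illustrative assumptions, not the authors' actual code or data; the released implementation is at cgraywang/deepex.

from typing import List, NamedTuple

class Triple(NamedTuple):
    # A (subject, relation, object) triple, the shared output format
    # across the tasks described in the abstract.
    subject: str
    relation: str
    object: str

# Open information extraction: input text -> extracted triples.
oie_input = "Born in Honolulu, Barack Obama served as the 44th U.S. president."
oie_output: List[Triple] = [
    Triple("Barack Obama", "born in", "Honolulu"),
    Triple("Barack Obama", "served as", "the 44th U.S. president"),
]

# Relation classification: the relation slot is constrained to a label set
# (e.g., the FewRel or TACRED inventory).
rc_output = Triple("Barack Obama", "place_of_birth", "Honolulu")

# Factual probe: the object slot is left for the language model to fill in.
probe_query = Triple("Barack Obama", "place_of_birth", "[MASK]")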
Anthology ID:
2021.emnlp-main.94
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
1225–1238
URL:
https://aclanthology.org/2021.emnlp-main.94
DOI:
10.18653/v1/2021.emnlp-main.94
Cite (ACL):
Chenguang Wang, Xiao Liu, Zui Chen, Haoyun Hong, Jie Tang, and Dawn Song. 2021. Zero-Shot Information Extraction as a Unified Text-to-Triple Translation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1225–1238, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Zero-Shot Information Extraction as a Unified Text-to-Triple Translation (Wang et al., EMNLP 2021)
PDF:
https://aclanthology.org/2021.emnlp-main.94.pdf
Video:
https://aclanthology.org/2021.emnlp-main.94.mp4
Code:
cgraywang/deepex
Data:
FewRel, New York Times Annotated Corpus, OIE2016, Penn Treebank, TACRED