Towards Accurate and Consistent Evaluation: A Dataset for Distantly-Supervised Relation Extraction

Tong Zhu, Haitao Wang, Junjie Yu, Xiabing Zhou, Wenliang Chen, Wei Zhang, Min Zhang


Abstract
In recent years, distantly-supervised relation extraction has achieved considerable success with deep neural networks. Distant Supervision (DS) automatically generates large-scale annotated data by aligning entity pairs from Knowledge Bases (KB) to sentences. However, DS-generated datasets inevitably contain wrong labels, which lead to incorrect evaluation scores at test time and may mislead researchers. To address this problem, we build a new dataset, NYT-H, in which we use DS-generated data for training and hire annotators to label the test data. Compared with previous datasets, NYT-H has a much larger test set, enabling more accurate and consistent evaluation. Finally, we report experimental results for several widely used systems on NYT-H. The results show that the comparison systems rank differently on the DS-labelled test data and the human-annotated test data, indicating that human-annotated data is necessary for evaluating distantly-supervised relation extraction.
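
For readers new to the setting, below is a minimal, illustrative sketch of how distant supervision produces the noisy labels the abstract describes: every sentence that mentions both entities of a knowledge-base triple is labelled with that triple's relation, whether or not the sentence actually expresses it. The knowledge base, relation names, and sentences are invented for illustration and do not reflect the actual NYT-H data or the Spico197/NYT-H code.

```python
# Minimal sketch of distant-supervision labelling (illustrative only).
# Any sentence containing both entities of a KB triple receives that
# triple's relation label; this is the source of the label noise that
# NYT-H's human-annotated test set is designed to expose.
from typing import Dict, List, Tuple

# Toy knowledge base: (head entity, tail entity) -> relation
KB: Dict[Tuple[str, str], str] = {
    ("Barack Obama", "Honolulu"): "/people/person/place_of_birth",
    ("Apple", "Cupertino"): "/business/company/headquarters",
}

def ds_label(sentences: List[str]) -> List[Tuple[str, str, str, str]]:
    """Return (sentence, head, tail, relation) for every KB pair found in a sentence."""
    labelled = []
    for sent in sentences:
        for (head, tail), relation in KB.items():
            if head in sent and tail in sent:
                labelled.append((sent, head, tail, relation))
    return labelled

if __name__ == "__main__":
    corpus = [
        "Barack Obama was born in Honolulu.",            # DS label is correct
        "Barack Obama gave a speech in Honolulu.",       # DS label is wrong (noise)
        "Apple opened a store far from Cupertino.",      # DS label is wrong (noise)
    ]
    for example in ds_label(corpus):
        print(example)
```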
Anthology ID: 2020.coling-main.566
Volume: Proceedings of the 28th International Conference on Computational Linguistics
Month: December
Year: 2020
Address: Barcelona, Spain (Online)
Editors: Donia Scott, Nuria Bel, Chengqing Zong
Venue: COLING
Publisher: International Committee on Computational Linguistics
Pages: 6436–6447
URL: https://aclanthology.org/2020.coling-main.566
DOI: 10.18653/v1/2020.coling-main.566
Cite (ACL): Tong Zhu, Haitao Wang, Junjie Yu, Xiabing Zhou, Wenliang Chen, Wei Zhang, and Min Zhang. 2020. Towards Accurate and Consistent Evaluation: A Dataset for Distantly-Supervised Relation Extraction. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6436–6447, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Cite (Informal): Towards Accurate and Consistent Evaluation: A Dataset for Distantly-Supervised Relation Extraction (Zhu et al., COLING 2020)
PDF: https://aclanthology.org/2020.coling-main.566.pdf
Code: Spico197/NYT-H
Data: NYT-H, SemEval-2010 Task-8, TACRED