A Case Study on Learning a Unified Encoder of Relations

Lisheng Fu, Bonan Min, Thien Huu Nguyen, Ralph Grishman


Abstract
Typical relation extraction models are trained on a single corpus annotated with a pre-defined relation schema. An individual corpus is often small, so such models can be biased toward, or overfit to, that corpus. We hypothesize that a better representation can be learned by combining multiple relation datasets. We use a shared encoder to learn a unified feature representation and augment it with regularization via adversarial training. The additional corpora fed into the encoder help it learn a better feature representation layer even though the relation schemas differ. We use the ACE05 and ERE datasets as a case study. The multi-task model obtains significant improvements on both datasets.
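The architecture the abstract describes — one encoder shared across corpora, a separate classification head per relation schema, and an adversarial discriminator as a regularizer — can be sketched as below. This is a minimal forward-pass illustration, not the paper's implementation: the layer sizes, relation-type counts, and the use of a single linear layer as the "encoder" are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

D_IN, D_HID = 32, 16          # hypothetical input-feature / hidden sizes
N_ACE, N_ERE = 7, 6           # hypothetical relation-type counts per schema

# Shared encoder parameters, updated by examples from BOTH corpora.
W_enc = rng.normal(scale=0.1, size=(D_IN, D_HID))

# Task-specific softmax heads: one per relation schema.
W_ace = rng.normal(scale=0.1, size=(D_HID, N_ACE))
W_ere = rng.normal(scale=0.1, size=(D_HID, N_ERE))

# Adversarial discriminator: predicts which corpus an example came from.
W_dis = rng.normal(scale=0.1, size=(D_HID, 2))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def encode(x):
    # Shared feature representation layer (a stand-in for the real encoder).
    return np.tanh(x @ W_enc)

def forward(x, task):
    h = encode(x)
    head = W_ace if task == "ace05" else W_ere
    return softmax(h @ head)

x = rng.normal(size=(4, D_IN))        # a mini-batch of mention-pair features
p_ace = forward(x, "ace05")           # predictions under the ACE05 schema
p_ere = forward(x, "ere")             # predictions under the ERE schema
p_src = softmax(encode(x) @ W_dis)    # discriminator's corpus guess

# During adversarial training, the discriminator loss is minimized w.r.t.
# W_dis, but its gradient is reversed before flowing into W_enc, pushing
# the shared encoder toward corpus-invariant features.
```

Only the encoder is shared; each schema keeps its own output space, which is how the model combines datasets whose relation inventories do not align.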
Anthology ID:
W18-6126
Volume:
Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text
Month:
November
Year:
2018
Address:
Brussels, Belgium
Editors:
Wei Xu, Alan Ritter, Tim Baldwin, Afshin Rahimi
Venue:
WNUT
Publisher:
Association for Computational Linguistics
Pages:
202–207
URL:
https://aclanthology.org/W18-6126
DOI:
10.18653/v1/W18-6126
Cite (ACL):
Lisheng Fu, Bonan Min, Thien Huu Nguyen, and Ralph Grishman. 2018. A Case Study on Learning a Unified Encoder of Relations. In Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text, pages 202–207, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
A Case Study on Learning a Unified Encoder of Relations (Fu et al., WNUT 2018)
PDF:
https://aclanthology.org/W18-6126.pdf