%0 Conference Proceedings
%T A Case Study on Learning a Unified Encoder of Relations
%A Fu, Lisheng
%A Min, Bonan
%A Nguyen, Thien Huu
%A Grishman, Ralph
%Y Xu, Wei
%Y Ritter, Alan
%Y Baldwin, Tim
%Y Rahimi, Afshin
%S Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text
%D 2018
%8 November
%I Association for Computational Linguistics
%C Brussels, Belgium
%F fu-etal-2018-case
%X Typical relation extraction models are trained on a single corpus annotated with a pre-defined relation schema. An individual corpus is often small, and the models may often be biased or overfitted to the corpus. We hypothesize that we can learn a better representation by combining multiple relation datasets. We attempt to use a shared encoder to learn the unified feature representation and to augment it with regularization by adversarial training. The additional corpora feeding the encoder can help to learn a better feature representation layer even though the relation schemas are different. We use ACE05 and ERE datasets as our case study for experiments. The multi-task model obtains significant improvement on both datasets.
%R 10.18653/v1/W18-6126
%U https://aclanthology.org/W18-6126
%U https://doi.org/10.18653/v1/W18-6126
%P 202-207