CLER: Cross-task Learning with Expert Representation to Generalize Reading and Understanding

Takumi Takahashi, Motoki Taniguchi, Tomoki Taniguchi, Tomoko Ohkuma


Abstract
This paper describes our model for the reading comprehension task of the MRQA shared task. We propose CLER, which stands for Cross-task Learning with Expert Representation for the generalization of reading and understanding. To generalize its capabilities, the proposed model builds on three key ideas: multi-task learning, mixture of experts, and ensemble. In-domain datasets are used to train and validate the model, and out-of-domain datasets are used to evaluate how well it generalizes. In the submission run, the proposed model achieved an average F1 score of 66.1% in the out-of-domain setting, which is a 4.3 percentage point improvement over the official BERT baseline model.
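To make the mixture-of-experts idea in the abstract concrete, below is a minimal sketch of an MoE layer applied to shared encoder representations. The class name MoELayer, the layer sizes, and the per-token softmax gating are illustrative assumptions, not the authors' exact architecture; the paper itself should be consulted for the actual design.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Hypothetical mixture-of-experts layer over shared (e.g., BERT) token
    representations; a sketch of the general technique, not CLER's exact model."""

    def __init__(self, hidden_size: int, num_experts: int):
        super().__init__()
        # Each expert is a simple feed-forward transform of the shared representation.
        self.experts = nn.ModuleList(
            [nn.Linear(hidden_size, hidden_size) for _ in range(num_experts)]
        )
        # The gate produces a softmax weight per expert for each token.
        self.gate = nn.Linear(hidden_size, num_experts)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, hidden_size)
        weights = F.softmax(self.gate(hidden), dim=-1)            # (B, T, E)
        expert_out = torch.stack(
            [expert(hidden) for expert in self.experts], dim=-2
        )                                                          # (B, T, E, H)
        # Combine expert outputs as a gate-weighted sum.
        return (weights.unsqueeze(-1) * expert_out).sum(dim=-2)   # (B, T, H)

# Usage: mix token representations coming out of a shared encoder.
layer = MoELayer(hidden_size=768, num_experts=4)
tokens = torch.randn(2, 128, 768)   # dummy batch of encoder outputs
mixed = layer(tokens)
print(mixed.shape)                  # torch.Size([2, 128, 768])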
Anthology ID:
D19-5824
Volume:
Proceedings of the 2nd Workshop on Machine Reading for Question Answering
Month:
November
Year:
2019
Address:
Hong Kong, China
Editors:
Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eunsol Choi, Danqi Chen
Venue:
WS
Publisher:
Association for Computational Linguistics
Pages:
183–190
URL:
https://aclanthology.org/D19-5824
DOI:
10.18653/v1/D19-5824
Cite (ACL):
Takumi Takahashi, Motoki Taniguchi, Tomoki Taniguchi, and Tomoko Ohkuma. 2019. CLER: Cross-task Learning with Expert Representation to Generalize Reading and Understanding. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pages 183–190, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal):
CLER: Cross-task Learning with Expert Representation to Generalize Reading and Understanding (Takahashi et al., 2019)
PDF:
https://aclanthology.org/D19-5824.pdf