Hypernymy Detection for Low-Resource Languages via Meta Learning

Changlong Yu, Jialong Han, Haisong Zhang, Wilfred Ng


Abstract
Hypernymy detection, a.k.a. lexical entailment, is a fundamental sub-task of many natural language understanding tasks. Previous explorations mostly focus on monolingual hypernymy detection in high-resource languages, e.g., English, but few investigate low-resource scenarios. This paper addresses the problem of low-resource hypernymy detection by leveraging high-resource languages. We extensively compare three joint training paradigms and, for the first time, propose applying meta learning to relieve the low-resource issue. Experiments demonstrate the superiority of our method among the three settings: it substantially improves performance on extremely low-resource languages by preventing over-fitting on small datasets.
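The abstract mentions applying meta learning so that a hypernymy classifier trained on high-resource languages adapts to an extremely low-resource one without over-fitting. Below is a minimal, hedged sketch of one common meta-learning recipe (a Reptile-style first-order loop over per-language tasks); it is an illustrative assumption of how such a setup could look, not the authors' released code. All names, dimensions, and the synthetic data generator are hypothetical.

```python
# Hedged sketch: Reptile-style meta learning for cross-lingual hypernymy
# detection. Each "task" is one high-resource language; the classifier
# scores (hyponym candidate, hypernym candidate) embedding pairs.
import copy
import torch
import torch.nn as nn

EMB_DIM = 300       # assumed word-embedding dimensionality
INNER_STEPS = 5     # SGD steps per sampled task
INNER_LR = 1e-2
META_LR = 0.1       # Reptile interpolation rate

class PairClassifier(nn.Module):
    """Scores whether word x is a hyponym of word y from their embeddings."""
    def __init__(self, dim=EMB_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, x_emb, y_emb):
        return self.net(torch.cat([x_emb, y_emb], dim=-1)).squeeze(-1)

def sample_task_batch(n=64):
    """Placeholder: random embedding pairs with binary labels.
    In practice this would draw labelled word pairs from one
    high-resource language's hypernymy dataset."""
    x = torch.randn(n, EMB_DIM)
    y = torch.randn(n, EMB_DIM)
    labels = torch.randint(0, 2, (n,)).float()
    return x, y, labels

meta_model = PairClassifier()
loss_fn = nn.BCEWithLogitsLoss()

for meta_step in range(100):
    # 1) Clone the meta-parameters and adapt them to one sampled language.
    task_model = copy.deepcopy(meta_model)
    opt = torch.optim.SGD(task_model.parameters(), lr=INNER_LR)
    for _ in range(INNER_STEPS):
        x, y, labels = sample_task_batch()
        opt.zero_grad()
        loss_fn(task_model(x, y), labels).backward()
        opt.step()
    # 2) Reptile update: move meta-parameters toward the adapted ones.
    with torch.no_grad():
        for p_meta, p_task in zip(meta_model.parameters(),
                                  task_model.parameters()):
            p_meta += META_LR * (p_task - p_meta)

# After meta-training, meta_model would be fine-tuned with only a few
# gradient steps on the small low-resource dataset.
```

The intended benefit, as the abstract argues, is that initializing from meta-learned parameters lets the low-resource fine-tuning stage converge in a few steps, which limits over-fitting on the tiny target-language dataset.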
Anthology ID:
2020.acl-main.336
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
3651–3656
URL:
https://aclanthology.org/2020.acl-main.336
DOI:
10.18653/v1/2020.acl-main.336
Cite (ACL):
Changlong Yu, Jialong Han, Haisong Zhang, and Wilfred Ng. 2020. Hypernymy Detection for Low-Resource Languages via Meta Learning. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3651–3656, Online. Association for Computational Linguistics.
Cite (Informal):
Hypernymy Detection for Low-Resource Languages via Meta Learning (Yu et al., ACL 2020)
PDF:
https://aclanthology.org/2020.acl-main.336.pdf
Video:
http://slideslive.com/38929011