Memory, Show the Way: Memory Based Few Shot Word Representation Learning

Jingyuan Sun, Shaonan Wang, Chengqing Zong


Abstract
Distributional semantic models (DSMs) generally require sufficient examples for a word to learn a high-quality representation. This stands in stark contrast with humans, who can guess the meaning of a word from only one or a few referents. In this paper, we propose Mem2Vec, a memory-based embedding learning method capable of acquiring high-quality word representations from fairly limited context. Our method directly adapts the representations produced by a DSM, using a long-term memory to guide its guess of a novel word. Based on a pre-trained embedding space, the proposed method delivers impressive performance on two challenging few-shot word similarity tasks. Embeddings learned with our method also lead to considerable improvements over strong baselines on NER and sentiment classification.
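The abstract only describes the approach at a high level. As a rough illustration of the general idea (a memory over a pre-trained embedding space guiding a few-shot guess for a novel word), the sketch below is a minimal assumption-based example, not the paper's actual Mem2Vec architecture: the function name, the attention-style memory readout, and the interpolation scheme are all hypothetical.

```python
import numpy as np

def few_shot_embedding(context_vectors, memory_keys, memory_values, alpha=0.5):
    """Guess an embedding for a novel word from a handful of contexts.

    context_vectors : (k, d) array, one vector per observed context of the
        novel word (e.g. averaged pre-trained embeddings of its context words).
    memory_keys     : (m, d) array of context summaries for known words.
    memory_values   : (m, d) array of the known words' pre-trained embeddings.
    alpha           : interpolation weight between the raw context average
        and the memory readout.

    NOTE: this is an illustrative sketch, not the paper's exact formulation.
    """
    query = context_vectors.mean(axis=0)            # summarize the few observed contexts
    scores = memory_keys @ query                    # similarity to each memory slot
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                        # softmax attention over memory slots
    readout = weights @ memory_values               # memory-guided estimate of the embedding
    return alpha * query + (1.0 - alpha) * readout  # blend context guess with memory readout

# Toy usage with random vectors.
rng = np.random.default_rng(0)
d, k, m = 50, 3, 1000
emb = few_shot_embedding(rng.normal(size=(k, d)),
                         rng.normal(size=(m, d)),
                         rng.normal(size=(m, d)))
print(emb.shape)  # (50,)
```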
Anthology ID: D18-1173
Volume: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month: October-November
Year: 2018
Address: Brussels, Belgium
Editors: Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue: EMNLP
SIG: SIGDAT
Publisher: Association for Computational Linguistics
Pages: 1435–1444
URL: https://aclanthology.org/D18-1173
DOI: 10.18653/v1/D18-1173
Cite (ACL): Jingyuan Sun, Shaonan Wang, and Chengqing Zong. 2018. Memory, Show the Way: Memory Based Few Shot Word Representation Learning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1435–1444, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal): Memory, Show the Way: Memory Based Few Shot Word Representation Learning (Sun et al., EMNLP 2018)
PDF: https://aclanthology.org/D18-1173.pdf