Generalizing Word Embeddings using Bag of Subwords

Jinman Zhao, Sidharth Mudgal, Yingyu Liang


Abstract
We approach the problem of generalizing pre-trained word embeddings beyond fixed-size vocabularies without using additional contextual information. We propose a subword-level word vector generation model that views words as bags of character n-grams. The model is simple, fast to train, and provides good vectors for rare or unseen words. Experiments show that our model achieves state-of-the-art performance on the English word similarity task and on joint prediction of part-of-speech tags and morphosyntactic attributes in 23 languages, suggesting the model’s ability to capture the relationship between words’ textual representations and their embeddings.
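The core bag-of-subwords idea can be sketched as follows: decompose a word into character n-grams and build its vector from the n-grams' vectors. This is a minimal illustration, not the paper's exact training procedure; the 3–6 n-gram range, the boundary markers, and plain averaging are assumptions following common subword-embedding practice, and `ngram_vectors` stands in for a learned n-gram embedding table.

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    """Extract character n-grams from a word wrapped in boundary markers."""
    w = "<" + word + ">"
    return [w[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

def bos_vector(word, ngram_vectors, dim=300, n_min=3, n_max=6):
    """Estimate a vector for a possibly unseen word by averaging the
    vectors of its character n-grams (a hypothetical lookup table).
    N-grams missing from the table are skipped; returns zeros if none
    of the word's n-grams are known."""
    grams = char_ngrams(word, n_min, n_max)
    vecs = [ngram_vectors[g] for g in grams if g in ngram_vectors]
    if not vecs:
        return np.zeros(dim)
    return np.mean(vecs, axis=0)
```

Because the vector is composed from shared subword units, a rare or out-of-vocabulary word such as a morphological variant can still receive a sensible embedding from the n-grams it shares with frequent words.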
Anthology ID:
D18-1059
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Editors:
Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
601–606
URL:
https://aclanthology.org/D18-1059
DOI:
10.18653/v1/D18-1059
Cite (ACL):
Jinman Zhao, Sidharth Mudgal, and Yingyu Liang. 2018. Generalizing Word Embeddings using Bag of Subwords. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 601–606, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Generalizing Word Embeddings using Bag of Subwords (Zhao et al., EMNLP 2018)
PDF:
https://aclanthology.org/D18-1059.pdf
Video:
https://aclanthology.org/D18-1059.mp4
Code:
jmzhao/bag-of-substring-embedder