Character-Word LSTM Language Models

Lyan Verwimp, Joris Pelemans, Hugo Van hamme, Patrick Wambacq


Abstract
We present a Character-Word Long Short-Term Memory Language Model that reduces both the perplexity and the number of parameters relative to a baseline word-level language model. Character information can reveal structural (dis)similarities between words and remains available even when a word is out-of-vocabulary, thus improving the modeling of infrequent and unknown words. By concatenating word and character embeddings, we achieve up to 2.77% relative improvement on English and 4.57% on Dutch compared to a baseline model with a similar number of parameters. Moreover, we also outperform baseline word-level models with a larger number of parameters.
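The central mechanism the abstract describes, concatenating word and character embeddings before the recurrent layer, can be illustrated with a short sketch. The PyTorch code below is a minimal illustration under assumed hyperparameters (embedding sizes, the number of characters per word, hidden size) and is not the authors' exact architecture.

```python
# Minimal sketch: each word embedding is concatenated with the embeddings
# of that word's first n characters before being fed to the LSTM.
# All names and sizes here are illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn

class CharWordLSTM(nn.Module):
    def __init__(self, word_vocab, char_vocab, word_dim=200,
                 char_dim=25, n_chars=5, hidden=512):
        super().__init__()
        self.n_chars = n_chars  # characters used per word
        self.word_emb = nn.Embedding(word_vocab, word_dim)
        self.char_emb = nn.Embedding(char_vocab, char_dim)
        # LSTM input = one word embedding + n_chars character embeddings
        self.lstm = nn.LSTM(word_dim + n_chars * char_dim,
                            hidden, batch_first=True)
        self.out = nn.Linear(hidden, word_vocab)

    def forward(self, words, chars, state=None):
        # words: (batch, seq) word ids
        # chars: (batch, seq, n_chars) ids of each word's first n characters
        w = self.word_emb(words)        # (B, T, word_dim)
        c = self.char_emb(chars)        # (B, T, n_chars, char_dim)
        c = c.flatten(start_dim=2)      # (B, T, n_chars * char_dim)
        x = torch.cat([w, c], dim=-1)   # concatenated word+char input
        h, state = self.lstm(x, state)
        return self.out(h), state       # next-word logits
```

Because part of each input vector now comes from shared character embeddings, the word embeddings can be made smaller, which is how such a model can match a word-level baseline while using fewer parameters.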
Anthology ID: E17-1040
Volume: Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
Month: April
Year: 2017
Address: Valencia, Spain
Editors: Mirella Lapata, Phil Blunsom, Alexander Koller
Venue: EACL
Publisher: Association for Computational Linguistics
Pages: 417–427
URL: https://aclanthology.org/E17-1040
Cite (ACL): Lyan Verwimp, Joris Pelemans, Hugo Van hamme, and Patrick Wambacq. 2017. Character-Word LSTM Language Models. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 417–427, Valencia, Spain. Association for Computational Linguistics.
Cite (Informal): Character-Word LSTM Language Models (Verwimp et al., EACL 2017)
PDF: https://aclanthology.org/E17-1040.pdf