How Many Languages Can a Language Model Model?

Robert Östling


Abstract
One of the purposes of the VarDial workshop series is to encourage research into NLP methods that treat human languages as a continuum, by designing models that exploit the similarities between languages and language varieties. In my work, I use a continuous vector representation of languages that allows the language continuum to be modeled and explored in a very direct way. The basic tool for this is a character-based recurrent neural network language model conditioned on language vectors whose values are learned during training. By feeding the model Bible translations in a thousand languages, the learned vector space not only captures language similarity; interpolating between the learned vectors also makes it possible to generate text in unattested intermediate forms between the training languages.
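To make the conditioning scheme concrete, below is a minimal sketch in PyTorch of a character-level RNN language model with jointly learned language vectors. The library choice, layer sizes, the use of LSTM cells, and the strategy of concatenating the language vector to every character embedding are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class LangVectorCharLM(nn.Module):
    """Character-level LSTM language model conditioned on a learned
    per-language vector (a sketch, not the paper's exact architecture)."""

    def __init__(self, n_chars, n_langs, char_dim=128, lang_dim=64,
                 hidden_dim=512):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        # One trainable vector per training language; these "language
        # vectors" are learned jointly with the rest of the model.
        self.lang_emb = nn.Embedding(n_langs, lang_dim)
        self.rnn = nn.LSTM(char_dim + lang_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, n_chars)

    def forward(self, chars, lang_vec):
        # chars: (batch, seq_len) character indices.
        # lang_vec: (batch, lang_dim); during training this would be
        # self.lang_emb(lang_ids), but at generation time it can be any
        # point in the space, e.g. a mix of two learned vectors.
        c = self.char_emb(chars)                          # (B, T, char_dim)
        l = lang_vec.unsqueeze(1).expand(-1, c.size(1), -1)
        h, _ = self.rnn(torch.cat([c, l], dim=-1))        # (B, T, hidden_dim)
        return self.out(h)                                # (B, T, n_chars)

# Interpolating between two learned language vectors gives a point that
# represents an unattested intermediate form; the indices and mixing
# weight below are arbitrary placeholders.
model = LangVectorCharLM(n_chars=200, n_langs=1000)
lang_a, lang_b, alpha = 0, 1, 0.5
with torch.no_grad():
    v = ((1 - alpha) * model.lang_emb.weight[lang_a]
         + alpha * model.lang_emb.weight[lang_b]).unsqueeze(0)
    logits = model(torch.zeros(1, 16, dtype=torch.long), v)  # next-char logits
```

Having forward take a raw vector rather than a language ID is what makes the interpolation trivial here; with ID-based conditioning, the mixing would have to happen inside the model.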
Anthology ID:
W16-4808
Volume:
Proceedings of the Third Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial3)
Month:
December
Year:
2016
Address:
Osaka, Japan
Editors:
Preslav Nakov, Marcos Zampieri, Liling Tan, Nikola Ljubešić, Jörg Tiedemann, Shervin Malmasi
Venue:
VarDial
Publisher:
The COLING 2016 Organizing Committee
Pages:
62
URL:
https://aclanthology.org/W16-4808
Cite (ACL):
Robert Östling. 2016. How Many Languages Can a Language Model Model? In Proceedings of the Third Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial3), page 62, Osaka, Japan. The COLING 2016 Organizing Committee.
Cite (Informal):
How Many Languages Can a Language Model Model? (Östling, VarDial 2016)
PDF:
https://aclanthology.org/W16-4808.pdf