Training Hybrid Language Models by Marginalizing over Segmentations

Edouard Grave, Sainbayar Sukhbaatar, Piotr Bojanowski, Armand Joulin

Abstract
In this paper, we study the problem of hybrid language modeling, that is, using models which can predict both characters and larger units such as character n-grams or words. With such models, multiple segmentations usually exist for a given string, for example one using words and one using characters only. Thus, the probability of a string is the sum of the probabilities of all its possible segmentations. Here, we show how to marginalize over the segmentations efficiently, in order to compute the true probability of a sequence. We apply our technique to three datasets, comprising seven languages, and show improvements over a strong character-level language model.
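The marginalization described in the abstract can be computed with a forward-style dynamic program over character positions: the probability of each prefix is the sum, over all possible final segments, of the probability of the shorter prefix times the probability of that segment. Below is a minimal sketch of this idea in Python. The `segment_log_prob(prefix, segment)` interface and the `max_len` cap on segment length are illustrative assumptions, not the paper's actual model or API; see the PDF for the exact formulation.

```python
import math

def marginal_log_prob(string, segment_log_prob, max_len=4):
    """Log-probability of `string`, summing over every segmentation
    whose segments are at most `max_len` characters long.

    `segment_log_prob(prefix, segment)` is a hypothetical stand-in
    for the hybrid model: it returns log P(segment | prefix).
    """
    n = len(string)
    # alpha[i] holds log P(string[:i]) marginalized over all
    # segmentations of the first i characters.
    alpha = [-math.inf] * (n + 1)
    alpha[0] = 0.0  # the empty prefix has probability 1
    for i in range(1, n + 1):
        # Every segmentation of string[:i] ends with some segment
        # string[j:i]; sum their probabilities in log space.
        scores = [
            alpha[j] + segment_log_prob(string[:j], string[j:i])
            for j in range(max(0, i - max_len), i)
        ]
        m = max(scores)
        alpha[i] = m + math.log(sum(math.exp(s - m) for s in scores))
    return alpha[n]

# Toy check: a (hypothetical) model giving every segment probability
# 0.1 regardless of context. For "abc" with max_len=2 the valid
# segmentations are a|b|c, ab|c, and a|bc, so the marginal is
# 0.001 + 0.01 + 0.01 = 0.021, i.e. log(0.021) ~ -3.863.
toy_model = lambda prefix, segment: math.log(0.1)
print(marginal_log_prob("abc", toy_model, max_len=2))
```

Working in log space with a log-sum-exp keeps the accumulation numerically stable, and with segments capped at length k the dynamic program costs O(kn) model evaluations for a string of length n.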
Anthology ID:
P19-1143
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Editors:
Anna Korhonen, David Traum, Lluís Màrquez
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1477–1482
URL:
https://aclanthology.org/P19-1143
DOI:
10.18653/v1/P19-1143
Cite (ACL):
Edouard Grave, Sainbayar Sukhbaatar, Piotr Bojanowski, and Armand Joulin. 2019. Training Hybrid Language Models by Marginalizing over Segmentations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1477–1482, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Training Hybrid Language Models by Marginalizing over Segmentations (Grave et al., ACL 2019)
PDF:
https://aclanthology.org/P19-1143.pdf
Data
WikiText-2