Sparse Non-negative Matrix Language Modeling

Joris Pelemans, Noam Shazeer, Ciprian Chelba


Abstract
We present Sparse Non-negative Matrix (SNM) estimation, a novel probability estimation technique for language modeling that can efficiently incorporate arbitrary features. We evaluate SNM language models on two corpora: the One Billion Word Benchmark and a subset of the LDC English Gigaword corpus. Results show that SNM language models trained with n-gram features are a close match for the well-established Kneser-Ney models. The addition of skip-gram features yields a model that is in the same league as the state-of-the-art recurrent neural network language models, as well as complementary: combining the two modeling techniques yields the best known result on the One Billion Word Benchmark. On the Gigaword corpus further improvements are observed using features that cross sentence boundaries. The computational advantages of SNM estimation over both maximum entropy and neural network estimation are probably its main strength, promising an approach that has large flexibility in combining arbitrary features and yet scales gracefully to large amounts of data.
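The sketch below is a minimal, illustrative rendering of the model family the abstract refers to: the conditional probability of a word is a normalized sum of sparse non-negative (feature, target-word) weights over the features that fire on the history. The feature templates, hand-set weights, and function names here are hypothetical; the paper's actual estimation procedure (relative frequencies combined with a learned adjustment function over metafeatures) is not reproduced.

```python
from collections import defaultdict

def history_features(history, max_order=3):
    """Extract suffix n-gram features of the history (a toy stand-in for the
    paper's n-gram/skip-gram feature templates)."""
    feats = ["<empty>"]  # empty-context (unigram) feature always fires
    for n in range(1, max_order):
        if len(history) >= n:
            feats.append("ngram:" + " ".join(history[-n:]))
    return feats

def snm_probability(word, history, M, vocab):
    """Score P(word | history) under a sparse non-negative matrix M.

    M maps (feature, target_word) -> non-negative weight; the conditional
    probability is the normalized sum of the weights of all features that
    fire on the history.
    """
    feats = history_features(history)
    unnorm = {w: sum(M.get((f, w), 0.0) for f in feats) for w in vocab}
    z = sum(unnorm.values())
    return unnorm[word] / z if z > 0 else 1.0 / len(vocab)

if __name__ == "__main__":
    # Hypothetical hand-set weights for illustration only; in the paper these
    # entries are estimated from corpus counts, not set by hand.
    vocab = ["cat", "dog", "mat"]
    M = defaultdict(float, {
        ("<empty>", "cat"): 1.0,
        ("<empty>", "dog"): 1.0,
        ("<empty>", "mat"): 1.0,
        ("ngram:the", "mat"): 2.0,
        ("ngram:on the", "mat"): 4.0,
    })
    print(snm_probability("mat", ["sat", "on", "the"], M, vocab))  # ~0.78
```

Because the matrix is sparse and the features are simple count-based templates, scoring and estimation reduce to lookups and sums over the features present in the history, which is the source of the computational advantage over maximum entropy and neural network estimation claimed in the abstract.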
Anthology ID:
Q16-1024
Volume:
Transactions of the Association for Computational Linguistics, Volume 4
Year:
2016
Address:
Cambridge, MA
Editors:
Lillian Lee, Mark Johnson, Kristina Toutanova
Venue:
TACL
Publisher:
MIT Press
Pages:
329–342
URL:
https://aclanthology.org/Q16-1024
DOI:
10.1162/tacl_a_00102
Cite (ACL):
Joris Pelemans, Noam Shazeer, and Ciprian Chelba. 2016. Sparse Non-negative Matrix Language Modeling. Transactions of the Association for Computational Linguistics, 4:329–342.
Cite (Informal):
Sparse Non-negative Matrix Language Modeling (Pelemans et al., TACL 2016)
PDF:
https://aclanthology.org/Q16-1024.pdf
Video:
https://aclanthology.org/Q16-1024.mp4
Data
One Billion Word Benchmark