Bayesian Compression for Natural Language Processing

Nadezhda Chirkova, Ekaterina Lobacheva, Dmitry Vetrov


Abstract
In natural language processing, many tasks are successfully solved with recurrent neural networks (RNNs), but such models have a huge number of parameters. The majority of these parameters are often concentrated in the embedding layer, whose size grows proportionally to the vocabulary size. We propose a Bayesian sparsification technique for RNNs that compresses the RNN dozens or hundreds of times without time-consuming hyperparameter tuning. We also generalize the model to vocabulary sparsification, filtering out unnecessary words and compressing the RNN even further. We show that the choice of retained words is interpretable.
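As a rough illustration of what Bayesian weight sparsification looks like in practice, the sketch below implements sparse variational dropout (Molchanov et al., 2017), the kind of technique the paper builds on, for a single linear layer in PyTorch. It is not the authors' implementation (their repository is linked in the Code section below); the class name, initialization constants, and pruning threshold are illustrative assumptions.

```python
# Minimal sketch of sparse variational dropout for one linear layer.
# NOT the authors' code; names, constants, and threshold are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseVDLinear(nn.Module):
    """Linear layer with a factorized Gaussian posterior N(theta, sigma^2)
    over each weight and a log-uniform prior. After training, weights with
    large log alpha = log(sigma^2 / theta^2) carry almost no signal and
    are pruned."""
    def __init__(self, in_features, out_features, threshold=3.0):
        super().__init__()
        self.theta = nn.Parameter(0.02 * torch.randn(out_features, in_features))
        self.log_sigma2 = nn.Parameter(torch.full((out_features, in_features), -10.0))
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.threshold = threshold  # prune weights with log alpha above this value

    def log_alpha(self):
        return self.log_sigma2 - torch.log(self.theta ** 2 + 1e-8)

    def forward(self, x):
        if self.training:
            # Local reparameterization: sample pre-activations instead of weights.
            mean = F.linear(x, self.theta, self.bias)
            var = F.linear(x ** 2, torch.exp(self.log_sigma2)) + 1e-8
            return mean + var.sqrt() * torch.randn_like(mean)
        # At test time, use posterior means with high-variance weights zeroed out.
        mask = (self.log_alpha() < self.threshold).float()
        return F.linear(x, self.theta * mask, self.bias)

    def kl(self):
        # Approximation of KL(posterior || log-uniform prior) from
        # Molchanov et al. (2017); added to the task loss during training.
        k1, k2, k3 = 0.63576, 1.87320, 1.48695
        la = self.log_alpha()
        neg_kl = k1 * torch.sigmoid(k2 + k3 * la) - 0.5 * F.softplus(-la) - k1
        return -neg_kl.sum()
```

During training, the objective is the usual task loss plus the sum of the kl() terms over all sparsified layers. In the paper's setting, the same construction is applied to the recurrent and embedding weights, and grouping the multiplicative noise per embedding row is what allows whole vocabulary entries to be dropped.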
Anthology ID:
D18-1319
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Editors:
Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
2910–2915
URL:
https://aclanthology.org/D18-1319
DOI:
10.18653/v1/D18-1319
Cite (ACL):
Nadezhda Chirkova, Ekaterina Lobacheva, and Dmitry Vetrov. 2018. Bayesian Compression for Natural Language Processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2910–2915, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Bayesian Compression for Natural Language Processing (Chirkova et al., EMNLP 2018)
PDF:
https://aclanthology.org/D18-1319.pdf
Attachment:
 D18-1319.Attachment.zip
Code
 tipt0p/SparseBayesianRNN (+ additional community code)
Data
IMDb Movie Reviews