Simple Recurrent Units for Highly Parallelizable Recurrence

Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi


Abstract
Common recurrent neural architectures scale poorly due to the intrinsic difficulty in parallelizing their state computations. In this work, we propose the Simple Recurrent Unit (SRU), a light recurrent unit that balances model capacity and scalability. SRU is designed to provide expressive recurrence and enable a highly parallelized implementation, and it comes with careful initialization to facilitate training of deep models. We demonstrate the effectiveness of SRU on multiple NLP tasks. SRU achieves 5-9x speed-up over cuDNN-optimized LSTM on classification and question answering datasets, and delivers stronger results than LSTM and convolutional models. We also obtain an average of 0.7 BLEU improvement over the Transformer model (Vaswani et al., 2017) on translation by incorporating SRU into the architecture.
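The speed-up described in the abstract comes from restructuring the recurrence so that every matrix multiplication depends only on the input and can therefore be batched across all time steps, leaving only cheap elementwise operations inside the sequential loop. The sketch below is a minimal NumPy reading of the SRU recurrence using the paper's notation (W, W_f, W_r, v_f, v_r, b_f, b_r); it assumes matching input and hidden dimensions and is illustrative only, not the authors' CUDA-optimized implementation in asappresearch/sru.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sru_forward(x, W, Wf, Wr, vf, vr, bf, br):
    """Single-layer, single-direction SRU forward pass (illustrative sketch).

    x          : (T, d) input sequence (assumes input dim == hidden dim)
    W, Wf, Wr  : (d, d) weight matrices
    vf, vr     : (d,) parameter vectors applied elementwise to the state
    bf, br     : (d,) gate biases
    Returns h  : (T, d) hidden states.
    """
    T, d = x.shape

    # The matrix products do not depend on the previous state, so they can be
    # computed for every time step in one batched multiplication.
    U  = x @ W    # candidate values
    Uf = x @ Wf   # forget-gate pre-activations
    Ur = x @ Wr   # reset-gate pre-activations

    c = np.zeros(d)
    h = np.zeros((T, d))
    for t in range(T):
        # Only elementwise work remains in the sequential part of the recurrence.
        f = sigmoid(Uf[t] + vf * c + bf)   # forget gate, sees previous state c_{t-1}
        r = sigmoid(Ur[t] + vr * c + br)   # reset gate, also sees c_{t-1}
        c = f * c + (1.0 - f) * U[t]       # internal state update
        h[t] = r * c + (1.0 - r) * x[t]    # highway connection to the input
    return h
```

Because U, Uf, and Ur are computed up front, the only strictly sequential computation is the elementwise loop over time, which the paper's optimized kernel parallelizes across hidden dimensions and batch elements.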
Anthology ID:
D18-1477
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Editors:
Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
4470–4481
URL:
https://aclanthology.org/D18-1477
DOI:
10.18653/v1/D18-1477
Cite (ACL):
Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, and Yoav Artzi. 2018. Simple Recurrent Units for Highly Parallelizable Recurrence. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4470–4481, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Simple Recurrent Units for Highly Parallelizable Recurrence (Lei et al., EMNLP 2018)
PDF:
https://aclanthology.org/D18-1477.pdf
Attachment:
D18-1477.Attachment.pdf
Code
asappresearch/sru (+ additional community code)
Data
MPQA Opinion Corpus, SQuAD, SST, WMT 2014