Residual Stacking of RNNs for Neural Machine Translation

Raphael Shu, Akiva Miura


Abstract
To enhance Neural Machine Translation (NMT) models, obvious approaches such as enlarging the hidden size of the recurrent layers and stacking multiple RNN layers can be considered. Surprisingly, we observe that naively stacked RNNs in the decoder slow down training and degrade performance. In this paper, we demonstrate that applying residual connections across the depth of stacked RNNs can help optimization; we refer to this as residual stacking. In empirical evaluation, residual stacking of decoder RNNs gives superior results compared to other methods of enhancing the model with a fixed parameter budget. Our submitted systems in WAT2016 are based on an ensemble of NMT models with residual stacking in the decoder. To further improve performance, we also attempt various methods of system combination in our experiments.
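
The core idea of residual stacking, as described in the abstract, is that each stacked RNN layer's output is summed with that layer's input before being fed to the next layer. The sketch below is only an illustration of this idea, not the authors' implementation: the use of PyTorch, LSTM cells, the class name ResidualStackedRNN, and all dimensions are assumptions chosen for clarity.

```python
# Minimal sketch of residual stacking of decoder RNN layers (illustrative only).
import torch
import torch.nn as nn


class ResidualStackedRNN(nn.Module):
    """Stack of single-layer LSTMs where each layer's input is added to its
    output, i.e. a residual connection across the stacking depth."""

    def __init__(self, hidden_size: int, num_layers: int = 3):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.LSTM(hidden_size, hidden_size, batch_first=True)
             for _ in range(num_layers)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, hidden_size)
        h = x
        for rnn in self.layers:
            out, _ = rnn(h)
            h = h + out  # residual connection over this layer
        return h


if __name__ == "__main__":
    model = ResidualStackedRNN(hidden_size=256, num_layers=3)
    dummy = torch.randn(8, 10, 256)   # (batch, time, hidden)
    print(model(dummy).shape)         # torch.Size([8, 10, 256])
```

The residual sums keep a short path from lower layers to the output, which is the property the paper credits with easing optimization of deeper decoder stacks.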
Anthology ID:
W16-4623
Volume:
Proceedings of the 3rd Workshop on Asian Translation (WAT2016)
Month:
December
Year:
2016
Address:
Osaka, Japan
Editors:
Toshiaki Nakazawa, Hideya Mino, Chenchen Ding, Isao Goto, Graham Neubig, Sadao Kurohashi, Ir. Hammam Riza, Pushpak Bhattacharyya
Venue:
WAT
Publisher:
The COLING 2016 Organizing Committee
Pages:
223–229
URL:
https://aclanthology.org/W16-4623
Cite (ACL):
Raphael Shu and Akiva Miura. 2016. Residual Stacking of RNNs for Neural Machine Translation. In Proceedings of the 3rd Workshop on Asian Translation (WAT2016), pages 223–229, Osaka, Japan. The COLING 2016 Organizing Committee.
Cite (Informal):
Residual Stacking of RNNs for Neural Machine Translation (Shu & Miura, WAT 2016)
PDF:
https://aclanthology.org/W16-4623.pdf
Data
ASPEC