Speeding Up Neural Machine Translation Decoding by Cube Pruning

Wen Zhang, Liang Huang, Yang Feng, Lei Shen, Qun Liu


Abstract
Although neural machine translation has achieved promising results, it suffers from slow translation speed. The direct consequence is that a trade-off has to be made between translation quality and speed, so its performance cannot come into full play. We apply cube pruning, a popular technique for speeding up dynamic programming, to neural machine translation to accelerate decoding. To construct the equivalence classes, similar target hidden states are combined, leading to fewer RNN expansion operations on the target side and fewer softmax operations over the large target vocabulary. Experiments show that, at the same or even better translation quality, our method translates 3.3x faster than naive beam search on GPUs and 3.5x faster on CPUs.
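To make the idea concrete, below is a minimal illustrative sketch (not the authors' implementation) of cube pruning applied to one beam-search decoding step. The names `cube_prune_step` and `score_next` are hypothetical: `score_next` stands in for a real decoder's RNN step plus top-k softmax, and the paper's equivalence classes over similar hidden states are not modeled here. The sketch only shows the core trick of cube pruning: exploring the (hypothesis, next-word) grid best-first with a priority queue so that most cells are never scored.

```python
# Illustrative sketch of cube pruning for one beam-search decoding step.
# NOT the authors' implementation; `score_next` is a hypothetical stand-in
# for an NMT decoder's RNN step + top-k softmax.
import heapq

def cube_prune_step(hypotheses, score_next, beam_size, cand_per_hyp):
    """One decoding step with cube pruning.

    hypotheses   : list of (cumulative_log_prob, tokens), sorted best-first
    score_next   : f(tokens) -> list of (word_log_prob, word_id), best-first
    beam_size    : number of new hypotheses to keep
    cand_per_hyp : next-word candidates considered per hypothesis
    """
    # Per-hypothesis candidate lists (grid rows) are computed lazily,
    # so the expensive RNN step + softmax only runs for rows we touch.
    rows = {}

    def row(i):
        if i not in rows:
            rows[i] = score_next(hypotheses[i][1])[:cand_per_hyp]
        return rows[i]

    def cell_score(i, j):
        return hypotheses[i][0] + row(i)[j][0]

    # Best-first exploration of the grid: start at the top-left corner and
    # push only the two neighbours of each popped cell.
    heap = [(-cell_score(0, 0), 0, 0)]
    visited = {(0, 0)}
    new_beam = []
    while heap and len(new_beam) < beam_size:
        neg_score, i, j = heapq.heappop(heap)
        word_id = row(i)[j][1]
        new_beam.append((-neg_score, hypotheses[i][1] + [word_id]))
        for ni, nj in ((i + 1, j), (i, j + 1)):
            if ni < len(hypotheses) and nj < cand_per_hyp and (ni, nj) not in visited:
                visited.add((ni, nj))
                heapq.heappush(heap, (-cell_score(ni, nj), ni, nj))
    return new_beam
```

Because both the hypotheses and the per-hypothesis candidates are sorted best-first and a cell's score is their sum, scores decrease along each row and column of the grid, so the priority queue recovers the top beam entries while scoring only a small fraction of the cells.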
Anthology ID:
D18-1460
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Editors:
Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
4284–4294
URL:
https://aclanthology.org/D18-1460
DOI:
10.18653/v1/D18-1460
Cite (ACL):
Wen Zhang, Liang Huang, Yang Feng, Lei Shen, and Qun Liu. 2018. Speeding Up Neural Machine Translation Decoding by Cube Pruning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4284–4294, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Speeding Up Neural Machine Translation Decoding by Cube Pruning (Zhang et al., EMNLP 2018)
PDF:
https://aclanthology.org/D18-1460.pdf
Video:
https://aclanthology.org/D18-1460.mp4