Boosting Neural Machine Translation

Dakun Zhang, Jungi Kim, Josep Crego, Jean Senellart


Abstract
Training efficiency is one of the main problems in Neural Machine Translation (NMT). Deep networks need very large amounts of data as well as many training iterations to achieve state-of-the-art performance. This results in very high computation costs, slowing down both research and industrialisation. In this paper, we propose to alleviate this problem with several training methods based on data boosting and bootstrapping, with no modifications to the neural network. These methods imitate the learning process of humans, who typically spend more time on “difficult” concepts than on easier ones. Experiments on an English-French translation task show accuracy improvements of up to 1.63 BLEU while saving 20% of training time.
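The abstract only sketches the idea, and the paper ships no code; as a rough, hypothetical illustration of data boosting, the Python sketch below reweights a parallel corpus between epochs by per-sentence perplexity under the current model, so that "difficult" pairs receive more updates. The model.score interface, the keep_ratio parameter, and the exact ranking scheme are assumptions for illustration, not the authors' procedure.

import math
import random

def sentence_perplexity(model, src, tgt):
    """Per-token perplexity of `tgt` given `src` under the current model.
    `model.score` is a hypothetical interface returning the summed
    log-probability of the target sentence (an assumption, not an API
    from the paper)."""
    log_prob = model.score(src, tgt)
    return math.exp(-log_prob / max(len(tgt), 1))

def boost_corpus(model, corpus, keep_ratio=0.8):
    """Build the next epoch's training set: rank sentence pairs from
    hardest (highest perplexity) to easiest and keep only the hardest
    `keep_ratio` fraction, so difficult examples get more updates."""
    ranked = sorted(corpus,
                    key=lambda pair: sentence_perplexity(model, *pair),
                    reverse=True)
    boosted = ranked[:max(1, int(len(ranked) * keep_ratio))]
    random.shuffle(boosted)  # restore random order for SGD
    return boosted

In this sketch a training loop would call boost_corpus at the end of each epoch and train the next epoch on the returned subset; dropping the easiest 20% of pairs also makes each epoch correspondingly cheaper, which is one plausible reading of the reported time savings.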
Anthology ID: I17-2046
Volume: Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
Month: November
Year: 2017
Address: Taipei, Taiwan
Editors: Greg Kondrak, Taro Watanabe
Venue: IJCNLP
Publisher: Asian Federation of Natural Language Processing
Pages: 271–276
URL: https://aclanthology.org/I17-2046
Cite (ACL): Dakun Zhang, Jungi Kim, Josep Crego, and Jean Senellart. 2017. Boosting Neural Machine Translation. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 271–276, Taipei, Taiwan. Asian Federation of Natural Language Processing.
Cite (Informal): Boosting Neural Machine Translation (Zhang et al., IJCNLP 2017)
PDF: https://aclanthology.org/I17-2046.pdf