Towards Robust Neural Machine Translation

Yong Cheng, Zhaopeng Tu, Fandong Meng, Junjie Zhai, Yang Liu


Abstract
Small perturbations in the input can severely distort intermediate representations and thus degrade the translation quality of neural machine translation (NMT) models. In this paper, we propose to improve the robustness of NMT models with adversarial stability training. The basic idea is to make both the encoder and decoder in NMT models robust against input perturbations by enabling them to behave similarly for the original input and its perturbed counterpart. Experimental results on Chinese-English, English-German and English-French translation tasks show that our approaches can not only achieve significant improvements over strong NMT systems but also improve the robustness of NMT models.
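To make the abstract's idea concrete, below is a minimal PyTorch-style sketch of a stability-regularized training loss: the usual translation objective plus penalties that push the encoder and decoder to behave similarly on an input and its perturbed counterpart. This is a simplified stand-in, not the authors' implementation; the paper trains its invariance term adversarially, whereas the plain L2 and KL penalties here, along with the `encoder`/`decoder` callables and the `alpha`/`beta` weights, are all illustrative assumptions.

```python
import torch.nn.functional as F


def stability_loss(encoder, decoder, src, src_perturbed, tgt,
                   alpha=1.0, beta=1.0):
    """Illustrative sketch: translation loss on the original input,
    plus invariance penalties tying the model's behavior on the
    original and perturbed inputs together (hypothetical names)."""
    # Encode the original source and its perturbed counterpart.
    h_orig = encoder(src)            # (batch, src_len, hidden)
    h_pert = encoder(src_perturbed)  # same shape

    # Standard NMT objective on the original input.
    logits_orig = decoder(h_orig, tgt)  # (batch, tgt_len, vocab)
    nll = F.cross_entropy(logits_orig.flatten(0, 1), tgt.flatten())

    # Encoder invariance: pull the two source representations together
    # (the paper uses an adversarial objective here instead).
    enc_inv = F.mse_loss(h_pert, h_orig.detach())

    # Decoder invariance: the decoder should produce similar output
    # distributions from the clean and perturbed encodings.
    logits_pert = decoder(h_pert, tgt)
    dec_inv = F.kl_div(F.log_softmax(logits_pert, dim=-1),
                       F.softmax(logits_orig.detach(), dim=-1),
                       reduction="batchmean")

    return nll + alpha * enc_inv + beta * dec_inv
```

Detaching the clean-input tensors treats them as fixed targets, so the invariance terms only push the perturbed branch toward the clean one rather than letting the clean representations drift; this is one common design choice, not necessarily the paper's.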
Anthology ID: P18-1163
Volume: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month: July
Year: 2018
Address: Melbourne, Australia
Editors: Iryna Gurevych, Yusuke Miyao
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 1756–1766
URL: https://aclanthology.org/P18-1163
DOI: 10.18653/v1/P18-1163
Cite (ACL): Yong Cheng, Zhaopeng Tu, Fandong Meng, Junjie Zhai, and Yang Liu. 2018. Towards Robust Neural Machine Translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1756–1766, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal): Towards Robust Neural Machine Translation (Cheng et al., ACL 2018)
PDF: https://aclanthology.org/P18-1163.pdf
Poster: P18-1163.Poster.pdf