Improving Non-autoregressive Neural Machine Translation with Monolingual Data

Jiawei Zhou, Phillip Keung


Abstract
Non-autoregressive (NAR) neural machine translation is usually done via knowledge distillation from an autoregressive (AR) model. Under this framework, we leverage large monolingual corpora to improve the NAR model’s performance, with the goal of transferring the AR model’s generalization ability while preventing overfitting. On top of a strong NAR baseline, our experimental results on the WMT14 En-De and WMT16 En-Ro news translation tasks confirm that monolingual data augmentation consistently improves the performance of the NAR model, bringing it close to the teacher AR model’s performance, yields results comparable to or better than the best non-iterative NAR methods in the literature, and helps reduce overfitting during training.
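
To make the augmentation setup concrete, below is a minimal sketch (not the authors' released code) of sequence-level distillation over monolingual source text: a trained AR teacher translates monolingual sentences, and the resulting (source, teacher output) pairs are added to the NAR student's training data. The function ar_teacher_translate is a hypothetical stand-in for any AR decoder (e.g. Transformer beam search), and the example sentences and batch size are purely illustrative.

from typing import Iterable, List, Tuple

def ar_teacher_translate(src_batch: List[str]) -> List[str]:
    # Placeholder for a trained autoregressive teacher's decoder
    # (e.g. beam search over a Transformer). Here it simply echoes
    # the input so the sketch runs end to end.
    return src_batch

def distill_parallel_data(src_sentences: Iterable[str],
                          batch_size: int = 32) -> List[Tuple[str, str]]:
    # Pair each monolingual source sentence with the teacher's translation,
    # producing (source, distilled target) examples for the NAR student.
    pairs, batch = [], []
    for sentence in src_sentences:
        batch.append(sentence)
        if len(batch) == batch_size:
            pairs.extend(zip(batch, ar_teacher_translate(batch)))
            batch = []
    if batch:
        pairs.extend(zip(batch, ar_teacher_translate(batch)))
    return pairs

if __name__ == "__main__":
    # Monolingual source-side text (e.g. from a news crawl) augments the
    # original bitext that was already distilled through the AR teacher.
    monolingual = ["Ein Beispielsatz .", "Noch ein Satz ."]
    augmented = distill_parallel_data(monolingual)
    print(augmented)

The distilled pairs would then be shuffled together with the distilled bitext and used to train the NAR model as usual; how the two sources are mixed and weighted is a design choice not fixed by this sketch.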
Anthology ID:
2020.acl-main.171
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1893–1898
URL:
https://aclanthology.org/2020.acl-main.171
DOI:
10.18653/v1/2020.acl-main.171
Cite (ACL):
Jiawei Zhou and Phillip Keung. 2020. Improving Non-autoregressive Neural Machine Translation with Monolingual Data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1893–1898, Online. Association for Computational Linguistics.
Cite (Informal):
Improving Non-autoregressive Neural Machine Translation with Monolingual Data (Zhou & Keung, ACL 2020)
PDF:
https://aclanthology.org/2020.acl-main.171.pdf
Video:
http://slideslive.com/38929283