Effective Adversarial Regularization for Neural Machine Translation

Motoki Sato, Jun Suzuki, Shun Kiyono


Abstract
A regularization technique based on adversarial perturbation, which was initially developed in the field of image processing, has been successfully applied to text classification tasks and has yielded attractive improvements. We aim to extend this promising methodology to more sophisticated and critical neural models in the natural language processing field, i.e., neural machine translation (NMT) models. However, it is not trivial to apply this methodology to such models. Thus, this paper investigates the effectiveness of several possible configurations of applying the adversarial perturbation and reveals that the adversarial regularization technique can significantly and consistently improve the performance of widely used NMT models, such as LSTM-based and Transformer-based models.
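The core idea behind this family of techniques can be illustrated with a minimal sketch. This is not the paper's implementation: it uses a toy logistic-regression head over mean-pooled embeddings as a stand-in for an NMT encoder, and applies an FGSM-style worst-case perturbation (in the sense of Goodfellow et al.) to the embedding matrix, with `eps` as an assumed perturbation-radius hyperparameter. The training objective would then combine the clean loss and the loss under the adversarial perturbation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad_wrt_emb(emb, w, y):
    """Cross-entropy loss of a linear classifier on the mean token
    embedding, plus its gradient with respect to the embedding matrix."""
    h = emb.mean(axis=0)                     # crude sentence representation
    p = sigmoid(h @ w)
    loss = -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
    # dL/dh = (p - y) * w; the mean pool spreads it evenly over tokens
    g = np.tile((p - y) * w / emb.shape[0], (emb.shape[0], 1))
    return loss, g

# Toy data: 5 tokens, 8-dim embeddings, one binary label (all hypothetical).
emb = rng.normal(size=(5, 8))
w = rng.normal(size=8)
y = 1.0

eps = 1.0  # assumed perturbation-radius hyperparameter
loss, g = loss_and_grad_wrt_emb(emb, w, y)
# Adversarial perturbation: step of norm eps in the loss-increasing direction.
r_adv = eps * g / (np.linalg.norm(g) + 1e-12)
adv_loss, _ = loss_and_grad_wrt_emb(emb + r_adv, w, y)

# Adversarial regularization minimizes the combined objective.
total = loss + adv_loss
print(float(loss), float(adv_loss))
```

In a real NMT model the gradient with respect to the embeddings would come from automatic differentiation rather than a closed form, but the shape of the method is the same: perturb the (sub)word embeddings in the direction that most increases the loss, then train against that perturbed input as a regularizer.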
Anthology ID: P19-1020
Volume: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month: July
Year: 2019
Address: Florence, Italy
Editors: Anna Korhonen, David Traum, Lluís Màrquez
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 204–210
URL: https://aclanthology.org/P19-1020
DOI: 10.18653/v1/P19-1020
Cite (ACL): Motoki Sato, Jun Suzuki, and Shun Kiyono. 2019. Effective Adversarial Regularization for Neural Machine Translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 204–210, Florence, Italy. Association for Computational Linguistics.
Cite (Informal): Effective Adversarial Regularization for Neural Machine Translation (Sato et al., ACL 2019)
PDF: https://aclanthology.org/P19-1020.pdf
Supplementary: P19-1020.Supplementary.pdf
Video: https://aclanthology.org/P19-1020.mp4