Revisiting the Weaknesses of Reinforcement Learning for Neural Machine Translation

Samuel Kiegeland, Julia Kreutzer


Abstract
Policy gradient algorithms have found wide adoption in NLP, but have recently drawn criticism questioning their suitability for NMT. Choshen et al. (2020) identify multiple weaknesses and suspect that the success of these algorithms is determined by the shape of output distributions rather than the reward. In this paper, we revisit these claims and study them under a wider range of configurations. Our experiments on in-domain and cross-domain adaptation reveal the importance of exploration and reward scaling, and provide empirical counter-evidence to these claims.
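The abstract highlights exploration and reward scaling in policy-gradient training. As a minimal illustrative sketch (not the paper's actual implementation, which is in the linked reinforce-joey repository), a REINFORCE update on a toy categorical "token" distribution might look like this, with sampling providing exploration and a running-average baseline standing in for reward scaling:

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def reinforce_step(logits, reward_fn, baseline, lr=0.1):
    """One REINFORCE update on a categorical distribution.

    Sampling (rather than taking the argmax) provides exploration;
    subtracting a baseline from the reward scales the update and
    reduces gradient variance.
    """
    probs = softmax(logits)
    action = np.random.choice(len(probs), p=probs)  # explore by sampling
    advantage = reward_fn(action) - baseline        # scaled reward
    # Gradient of log pi(action) w.r.t. the logits: one_hot(action) - probs
    grad = -probs
    grad[action] += 1.0
    return logits + lr * advantage * grad, action

# Toy task: a "vocabulary" of 4 tokens, where only token 2 earns reward.
reward_fn = lambda a: 1.0 if a == 2 else 0.0

np.random.seed(0)
logits = np.zeros(4)
baseline = 0.0
for _ in range(500):
    logits, action = reinforce_step(logits, reward_fn, baseline)
    # Running-average baseline over observed rewards
    baseline = 0.9 * baseline + 0.1 * reward_fn(action)
```

After training, the policy concentrates its probability mass on the rewarded token; in real NMT, the reward would be a sentence-level metric such as BLEU rather than this toy function.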
Anthology ID:
2021.naacl-main.133
Volume:
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
June
Year:
2021
Address:
Online
Editors:
Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, Yichao Zhou
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
1673–1681
URL:
https://aclanthology.org/2021.naacl-main.133
DOI:
10.18653/v1/2021.naacl-main.133
Cite (ACL):
Samuel Kiegeland and Julia Kreutzer. 2021. Revisiting the Weaknesses of Reinforcement Learning for Neural Machine Translation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1673–1681, Online. Association for Computational Linguistics.
Cite (Informal):
Revisiting the Weaknesses of Reinforcement Learning for Neural Machine Translation (Kiegeland & Kreutzer, NAACL 2021)
PDF:
https://aclanthology.org/2021.naacl-main.133.pdf
Video:
https://aclanthology.org/2021.naacl-main.133.mp4
Code:
samuki/reinforce-joey