Learning to Stop in Structured Prediction for Neural Machine Translation

Mingbo Ma, Renjie Zheng, Liang Huang


Abstract
Beam search optimization (Wiseman and Rush, 2016) resolves many issues in neural machine translation. However, this method lacks a principled stopping criterion and does not learn when to stop during training, and because it uses raw scores instead of probability-based scores, the model naturally prefers longer hypotheses at test time. We propose a novel ranking method which enables an optimal beam search stopping criterion. We further introduce a structured prediction loss function which penalizes suboptimal finished candidates produced by beam search during training. Neural machine translation experiments on both synthetic data and real language pairs (German→English and Chinese→English) demonstrate that our proposed methods lead to better translation lengths and BLEU scores.
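For context on the stopping problem the abstract refers to, the sketch below illustrates the standard "optimal stopping" criterion for beam search under probability-based (log-probability) scores, the setting in which it is provably safe because scores can only decrease as hypotheses grow. This is an illustrative sketch under that assumption, not the paper's ranking-based method for raw BSO scores; step_fn, bos, and eos are hypothetical placeholders for a real decoder interface.

    # Minimal sketch (assumed log-probability scoring, not the authors' exact algorithm).
    import heapq

    def beam_search_with_optimal_stop(step_fn, bos, eos, beam_size=5, max_len=100):
        """step_fn(prefix) -> list of (token, log_prob) continuations (hypothetical)."""
        beam = [(0.0, [bos])]                  # (cumulative log-prob, token sequence)
        best_finished = (float("-inf"), None)  # best hypothesis ending in eos so far

        for _ in range(max_len):
            # Expand every unfinished hypothesis by one token.
            candidates = []
            for score, seq in beam:
                for token, logp in step_fn(seq):
                    candidates.append((score + logp, seq + [token]))
            # Keep the top-k candidates, then separate finished from unfinished ones.
            candidates = heapq.nlargest(beam_size, candidates, key=lambda c: c[0])
            beam = []
            for score, seq in candidates:
                if seq[-1] == eos:
                    if score > best_finished[0]:
                        best_finished = (score, seq)
                else:
                    beam.append((score, seq))
            # Optimal stopping: with non-positive per-step log-probs, unfinished
            # hypotheses can only get worse, so stop once none can overtake the
            # best finished one.
            if not beam or best_finished[0] >= max(s for s, _ in beam):
                break

        return best_finished

With raw (unnormalized) scores, as in beam search optimization, this monotonicity argument no longer holds, which is the gap the paper's ranking method is designed to close.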
Anthology ID:
N19-1187
Volume:
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
Month:
June
Year:
2019
Address:
Minneapolis, Minnesota
Editors:
Jill Burstein, Christy Doran, Thamar Solorio
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
1884–1889
URL:
https://aclanthology.org/N19-1187
DOI:
10.18653/v1/N19-1187
Cite (ACL):
Mingbo Ma, Renjie Zheng, and Liang Huang. 2019. Learning to Stop in Structured Prediction for Neural Machine Translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1884–1889, Minneapolis, Minnesota. Association for Computational Linguistics.
Cite (Informal):
Learning to Stop in Structured Prediction for Neural Machine Translation (Ma et al., NAACL 2019)
PDF:
https://aclanthology.org/N19-1187.pdf