Inference Strategies for Machine Translation with Conditional Masking

Julia Kreutzer, George Foster, Colin Cherry


Abstract
Conditional masked language model (CMLM) training has proven successful for non-autoregressive and semi-autoregressive sequence generation tasks, such as machine translation. Given a trained CMLM, however, it is not clear what the best inference strategy is. We formulate masked inference as a factorization of conditional probabilities of partial sequences, show that this does not harm performance, and investigate a number of simple heuristics motivated by this perspective. We identify a thresholding strategy that has advantages over the standard “mask-predict” algorithm, and provide analyses of its behavior on machine translation tasks.
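The abstract contrasts the standard mask-predict schedule with a confidence-thresholding schedule for CMLM inference. The sketch below is a minimal illustration of that difference, not the paper's code: it assumes a hypothetical `model.predict(src, tokens=None, mask=...)` interface returning per-position tokens and probabilities. Mask-predict re-masks a linearly decaying number of lowest-confidence positions each iteration, while the thresholded variant re-masks any position whose probability falls below a fixed threshold and stops early once all positions are confident.

```python
import numpy as np

def mask_predict(model, src, tgt_len, iterations=10):
    """Standard mask-predict: re-mask a linearly decaying number of
    lowest-confidence positions each iteration and re-predict them.
    `model` is a hypothetical CMLM wrapper, not the paper's code."""
    tokens, probs = model.predict(src, mask=np.ones(tgt_len, dtype=bool))
    for t in range(1, iterations):
        n_mask = int(tgt_len * (iterations - t) / iterations)  # linear decay
        mask = np.zeros(tgt_len, dtype=bool)
        mask[np.argsort(probs)[:n_mask]] = True  # lowest-confidence positions
        new_tokens, new_probs = model.predict(src, tokens, mask=mask)
        tokens[mask], probs[mask] = new_tokens[mask], new_probs[mask]
    return tokens

def threshold_predict(model, src, tgt_len, threshold=0.5, max_iters=10):
    """Thresholded variant: re-mask every position whose probability is
    below `threshold`; stop once all positions are above it."""
    tokens, probs = model.predict(src, mask=np.ones(tgt_len, dtype=bool))
    for _ in range(1, max_iters):
        mask = probs < threshold
        if not mask.any():  # all positions confident; no further refinement
            break
        new_tokens, new_probs = model.predict(src, tokens, mask=mask)
        tokens[mask], probs[mask] = new_tokens[mask], new_probs[mask]
    return tokens
```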
Anthology ID:
2020.emnlp-main.465
Volume:
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Month:
November
Year:
2020
Address:
Online
Editors:
Bonnie Webber, Trevor Cohn, Yulan He, Yang Liu
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
5774–5782
URL:
https://aclanthology.org/2020.emnlp-main.465
DOI:
10.18653/v1/2020.emnlp-main.465
Cite (ACL):
Julia Kreutzer, George Foster, and Colin Cherry. 2020. Inference Strategies for Machine Translation with Conditional Masking. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5774–5782, Online. Association for Computational Linguistics.
Cite (Informal):
Inference Strategies for Machine Translation with Conditional Masking (Kreutzer et al., EMNLP 2020)
PDF:
https://aclanthology.org/2020.emnlp-main.465.pdf
Video:
https://slideslive.com/38939258