Document-Level Adaptation for Neural Machine Translation

Sachith Sri Ram Kothur, Rebecca Knowles, Philipp Koehn


Abstract
It is common practice to adapt machine translation systems to novel domains, but even a well-adapted system may be able to perform better on a particular document if it were to learn from a translator's corrections within the document itself. We focus on adaptation within a single document – appropriate for an interactive translation scenario where a model adapts to a human translator's input over the course of a document. We propose two methods: single-sentence adaptation (which performs online adaptation one sentence at a time) and dictionary adaptation (which specifically addresses the issue of translating novel words). Combining the two models results in improvements over both approaches individually, and over baseline systems, even on short documents. On WMT news test data, we observe an improvement of +1.8 BLEU points and +23.3% novel word translation accuracy, and on EMEA data (descriptions of medications) we observe an improvement of +2.7 BLEU points and +49.2% novel word translation accuracy.
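The dictionary-adaptation idea in the abstract can be illustrated with a toy sketch. Note this is a hypothetical illustration, not the paper's implementation: the actual method adapts a neural MT model, whereas this sketch only shows the bookkeeping of learning novel-word translations from a translator's confirmed sentences and reusing them later in the same document. The class name, token-level alignment input, and `base_translate` fallback are all assumptions made for the example.

```python
# Toy sketch of document-level dictionary adaptation (hypothetical, not the
# paper's implementation). As the translator confirms each sentence, record
# translations of source words the base system did not know, and prefer those
# learned translations for the rest of the document.

class DictionaryAdapter:
    def __init__(self, known_vocab):
        self.known = set(known_vocab)  # words the base system already covers
        self.learned = {}              # novel source word -> confirmed translation

    def observe(self, src_tokens, tgt_tokens, alignment):
        """Learn from a translator-confirmed sentence pair.

        `alignment` maps source token indices to target token indices
        (assumed given here; in practice it would be induced)."""
        for s, t in alignment.items():
            word = src_tokens[s]
            if word not in self.known:
                self.learned[word] = tgt_tokens[t]

    def translate_token(self, word, base_translate):
        """Use a learned translation for a novel word, else fall back."""
        return self.learned.get(word, base_translate(word))


# Usage: after one confirmed sentence, the novel word "paracetamol" is
# translated consistently in later sentences of the document.
adapter = DictionaryAdapter(known_vocab={"the", "dose"})
adapter.observe(
    src_tokens=["the", "paracetamol", "dose"],
    tgt_tokens=["die", "Paracetamol", "Dosis"],
    alignment={0: 0, 1: 1, 2: 2},
)
print(adapter.translate_token("paracetamol", base_translate=lambda w: "<unk>"))
```

The single-sentence adaptation method would instead fine-tune the model's parameters on each confirmed pair; the two can be combined, which the abstract reports works best.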
Anthology ID:
W18-2708
Volume:
Proceedings of the 2nd Workshop on Neural Machine Translation and Generation
Month:
July
Year:
2018
Address:
Melbourne, Australia
Editors:
Alexandra Birch, Andrew Finch, Thang Luong, Graham Neubig, Yusuke Oda
Venue:
NGT
Publisher:
Association for Computational Linguistics
Pages:
64–73
URL:
https://aclanthology.org/W18-2708
DOI:
10.18653/v1/W18-2708
Cite (ACL):
Sachith Sri Ram Kothur, Rebecca Knowles, and Philipp Koehn. 2018. Document-Level Adaptation for Neural Machine Translation. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 64–73, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal):
Document-Level Adaptation for Neural Machine Translation (Kothur et al., NGT 2018)
PDF:
https://aclanthology.org/W18-2708.pdf