Understanding Neural Machine Translation by Simplification: The Case of Encoder-free Models

Gongbo Tang, Rico Sennrich, Joakim Nivre


Abstract
In this paper, we try to understand neural machine translation (NMT) by simplifying NMT architectures and training encoder-free NMT models. In an encoder-free model, the source is represented by the sums of word embeddings and positional embeddings. The decoder is a standard Transformer or recurrent neural network that attends directly to these embeddings via attention mechanisms. Experimental results show (1) that the attention mechanism in encoder-free models acts as a strong feature extractor, (2) that the word embeddings in encoder-free models are competitive with those in conventional models, (3) that non-contextualized source representations lead to a substantial performance drop, and (4) that encoder-free models have different effects on alignment quality for German-English and Chinese-English.
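To make the architecture described in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of an encoder-free Transformer: the source side consists only of word embeddings plus sinusoidal positional embeddings (no encoder layers), and a standard Transformer decoder cross-attends to these non-contextualized source representations. This is not the authors' implementation; the class, hyperparameters, and toy usage are illustrative only.

```python
import math
import torch
import torch.nn as nn

class EncoderFreeTransformer(nn.Module):
    """Encoder-free NMT sketch: the source is represented only by the sum of
    word embeddings and positional embeddings; a standard Transformer decoder
    attends to these non-contextualized source representations."""

    def __init__(self, src_vocab, tgt_vocab, d_model=512, nhead=8,
                 num_layers=6, max_len=512):
        super().__init__()
        self.d_model = d_model
        self.src_embed = nn.Embedding(src_vocab, d_model)
        self.tgt_embed = nn.Embedding(tgt_vocab, d_model)
        self.register_buffer("pos", self._sinusoidal(max_len, d_model))
        layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        self.out = nn.Linear(d_model, tgt_vocab)

    @staticmethod
    def _sinusoidal(max_len, d_model):
        # Standard sinusoidal positional embeddings (Vaswani et al., 2017).
        pos = torch.arange(max_len, dtype=torch.float).unsqueeze(1)
        div = torch.exp(torch.arange(0, d_model, 2, dtype=torch.float)
                        * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        return pe

    def forward(self, src_ids, tgt_ids):
        # Source side: no encoder layers, only embedding sums.
        src = self.src_embed(src_ids) * math.sqrt(self.d_model)
        src = src + self.pos[: src_ids.size(1)]

        # Target side: causal self-attention plus cross-attention over
        # the non-contextualized source embeddings.
        tgt = self.tgt_embed(tgt_ids) * math.sqrt(self.d_model)
        tgt = tgt + self.pos[: tgt_ids.size(1)]
        causal = torch.triu(torch.full((tgt_ids.size(1), tgt_ids.size(1)),
                                       float("-inf"), device=tgt.device),
                            diagonal=1)
        hidden = self.decoder(tgt, memory=src, tgt_mask=causal)
        return self.out(hidden)  # (batch, tgt_len, tgt_vocab) logits

# Toy usage with random token ids (vocabulary sizes are arbitrary).
model = EncoderFreeTransformer(src_vocab=1000, tgt_vocab=1000)
logits = model(torch.randint(0, 1000, (2, 7)), torch.randint(0, 1000, (2, 5)))
print(logits.shape)  # torch.Size([2, 5, 1000])
```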
Anthology ID:
R19-1136
Volume:
Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019)
Month:
September
Year:
2019
Address:
Varna, Bulgaria
Editors:
Ruslan Mitkov, Galia Angelova
Venue:
RANLP
Publisher:
INCOMA Ltd.
Pages:
1186–1193
URL:
https://aclanthology.org/R19-1136
DOI:
10.26615/978-954-452-056-4_136
Bibkey:
Cite (ACL):
Gongbo Tang, Rico Sennrich, and Joakim Nivre. 2019. Understanding Neural Machine Translation by Simplification: The Case of Encoder-free Models. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019), pages 1186–1193, Varna, Bulgaria. INCOMA Ltd.
Cite (Informal):
Understanding Neural Machine Translation by Simplification: The Case of Encoder-free Models (Tang et al., RANLP 2019)
PDF:
https://aclanthology.org/R19-1136.pdf