In Neural Machine Translation, What Does Transfer Learning Transfer?

Alham Fikri Aji, Nikolay Bogoychev, Kenneth Heafield, Rico Sennrich


Abstract
Transfer learning improves quality for low-resource machine translation, but it is unclear what exactly it transfers. We perform several ablation studies that limit information transfer, then measure the quality impact across three language pairs to gain a black-box understanding of transfer learning. Word embeddings play an important role in transfer learning, particularly if they are properly aligned. Although transfer learning can be performed without embeddings, results are sub-optimal. In contrast, transferring only the embeddings but nothing else yields catastrophic results. We then investigate diagonal alignments with auto-encoders over real languages and randomly generated sequences, finding that even randomly generated sequences as parents yield noticeable but smaller gains. Finally, transfer learning can eliminate the need for a warm-up phase when training Transformer models in high-resource language pairs.
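As a minimal sketch (not the authors' code) of the embedding ablations described in the abstract, the snippet below initialises a child model from a parent model while either transferring all parameters, everything except the embeddings, or only the embeddings. The parameter names and the "embed" naming convention are illustrative assumptions; plain numbers stand in for weight tensors.

```python
def transfer_parameters(parent_params, child_params, mode="all"):
    """Copy parent parameters into the child's initialisation.

    mode = "all"             -> standard transfer learning
    mode = "no_embeddings"   -> transfer everything except embeddings
    mode = "embeddings_only" -> transfer only the embeddings
    """
    def is_embedding(name):
        return "embed" in name  # naming convention is an assumption

    transferred = dict(child_params)  # start from the child's random init
    for name, value in parent_params.items():
        if name not in child_params:
            continue  # vocabularies/shapes may differ; skip mismatches
        if mode == "no_embeddings" and is_embedding(name):
            continue
        if mode == "embeddings_only" and not is_embedding(name):
            continue
        transferred[name] = value
    return transferred


# Toy usage with scalars standing in for weight tensors.
parent = {"encoder.embed": 1.0, "encoder.layer0.attn": 2.0}
child = {"encoder.embed": 0.1, "encoder.layer0.attn": 0.2}
print(transfer_parameters(parent, child, mode="no_embeddings"))
# {'encoder.embed': 0.1, 'encoder.layer0.attn': 2.0}
```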
Anthology ID: 2020.acl-main.688
Volume: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month: July
Year: 2020
Address: Online
Editors: Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 7701–7710
URL: https://aclanthology.org/2020.acl-main.688
DOI: 10.18653/v1/2020.acl-main.688
Cite (ACL): Alham Fikri Aji, Nikolay Bogoychev, Kenneth Heafield, and Rico Sennrich. 2020. In Neural Machine Translation, What Does Transfer Learning Transfer?. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7701–7710, Online. Association for Computational Linguistics.
Cite (Informal): In Neural Machine Translation, What Does Transfer Learning Transfer? (Aji et al., ACL 2020)
PDF: https://aclanthology.org/2020.acl-main.688.pdf
Video: http://slideslive.com/38929410