Extremely low-resource machine translation for closely related languages

Maali Tars, Andre Tättar, Mark Fišel


Abstract
An effective method for improving extremely low-resource neural machine translation is multilingual training, which can be further improved by leveraging monolingual data to create synthetic bilingual corpora via back-translation. This work focuses on closely related languages from the Uralic language family, spoken in the Estonian and Finnish geographical regions. We find that multilingual learning and synthetic corpora increase translation quality in every language pair for which we have data. We show that transfer learning and fine-tuning are very effective for low-resource machine translation and achieve the best results. We collected new parallel data for Võro, North Saami and South Saami, and present the first neural machine translation results for these languages.
Anthology ID:
2021.nodalida-main.5
Volume:
Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)
Month:
31 May – 2 June
Year:
2021
Address:
Reykjavik, Iceland (Online)
Editors:
Simon Dobnik, Lilja Øvrelid
Venue:
NoDaLiDa
Publisher:
Linköping University Electronic Press, Sweden
Pages:
41–52
URL:
https://aclanthology.org/2021.nodalida-main.5
Cite (ACL):
Maali Tars, Andre Tättar, and Mark Fišel. 2021. Extremely low-resource machine translation for closely related languages. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa), pages 41–52, Reykjavik, Iceland (Online). Linköping University Electronic Press, Sweden.
Cite (Informal):
Extremely low-resource machine translation for closely related languages (Tars et al., NoDaLiDa 2021)
PDF:
https://aclanthology.org/2021.nodalida-main.5.pdf