Non-Adversarial Unsupervised Word Translation

Yedid Hoshen, Lior Wolf


Abstract
Unsupervised word translation from non-parallel inter-lingual corpora has attracted much research interest. Very recently, neural network methods trained with adversarial loss functions achieved high accuracy on this task. Despite the impressive success of the recent techniques, they suffer from the typical drawbacks of generative adversarial models: sensitivity to hyper-parameters, long training time and lack of interpretability. In this paper, we make the observation that two sufficiently similar distributions can be aligned correctly with iterative matching methods. We present a novel method that first aligns the second moment of the word distributions of the two languages and then iteratively refines the alignment. Extensive experiments on word translation of European and non-European languages show that our method achieves better performance than recent state-of-the-art deep adversarial approaches and is competitive with the supervised baseline. It is also efficient, easy to parallelize on CPU, and interpretable.
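The two-stage idea in the abstract (second-moment alignment followed by iterative refinement) can be illustrated with a short sketch. The code below is a hypothetical simplification, not the authors' released implementation: it approximates the second-moment alignment by projecting each language's embeddings onto its own top principal components, then alternates between nearest-neighbour matching and an orthogonal Procrustes solve. Function names, dimensions, and the toy data are assumptions for illustration only.

```python
# Hypothetical sketch of non-adversarial word-embedding alignment:
# (1) roughly align the second moment of both embedding distributions via PCA,
# (2) iteratively refine with nearest-neighbour matching + Procrustes updates.
import numpy as np

def second_moment_align(X, Y, dim=50):
    """Project source X and target Y embeddings (n_words, d) onto their
    own top principal components so both distributions share roughly the
    same second moment."""
    def pca_project(Z):
        Zc = Z - Z.mean(axis=0)
        _, _, Vt = np.linalg.svd(Zc, full_matrices=False)
        return Zc @ Vt[:dim].T
    return pca_project(X), pca_project(Y)

def procrustes(X_matched, Y_matched):
    """Orthogonal W minimizing ||X_matched @ W - Y_matched||_F."""
    U, _, Vt = np.linalg.svd(X_matched.T @ Y_matched)
    return U @ Vt

def iterative_refine(X, Y, n_iters=10):
    """Alternate nearest-neighbour matching and Procrustes refinement."""
    W = np.eye(X.shape[1])
    for _ in range(n_iters):
        sims = (X @ W) @ Y.T            # similarity of mapped source to target
        matches = sims.argmax(axis=1)   # nearest target word for each source word
        W = procrustes(X, Y[matches])   # re-fit the orthogonal map on the matches
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 300))                        # toy "source" embeddings
    Y = X @ np.linalg.qr(rng.normal(size=(300, 300)))[0]    # orthogonally rotated copy
    Xp, Yp = second_moment_align(X, Y, dim=50)
    W = iterative_refine(Xp, Yp)
    print("mean alignment error:", np.linalg.norm(Xp @ W - Yp, axis=1).mean())
```

The design rationale, as the abstract frames it, is that the second-moment alignment gives a rough initial correspondence, so the nearest-neighbour matching starts from a sensible initialization and the refinement loop can converge without any adversarial training.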
Anthology ID:
D18-1043
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Editors:
Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
469–478
URL:
https://aclanthology.org/D18-1043
DOI:
10.18653/v1/D18-1043
Cite (ACL):
Yedid Hoshen and Lior Wolf. 2018. Non-Adversarial Unsupervised Word Translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 469–478, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Non-Adversarial Unsupervised Word Translation (Hoshen & Wolf, EMNLP 2018)
PDF:
https://aclanthology.org/D18-1043.pdf
Code:
facebookresearch/MUSE (+ additional community code)