Learning Multilingual Topics from Incomparable Corpora

Shudong Hao, Michael J. Paul


Abstract
Multilingual topic models enable cross-lingual tasks by extracting consistent topics from multilingual corpora. Most models require parallel or comparable training corpora, which limits their ability to generalize. In this paper, we first demystify the knowledge transfer mechanism behind multilingual topic models by defining an alternative but equivalent formulation. Based on this analysis, we then relax the training-data assumptions made by most existing models, creating a model that only requires a dictionary for training. Experiments show that our new method effectively learns coherent multilingual topics from partially and fully incomparable corpora with limited amounts of dictionary resources.
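The abstract's key practical claim is that a bilingual dictionary alone can link topics across otherwise unrelated corpora. The following is a minimal, hypothetical Python sketch of that general idea, not the paper's actual model: a collapsed Gibbs sampler for LDA in which dictionary-linked translation pairs are mapped to shared concept IDs, so topic-word counts transfer across languages at the vocabulary level. All corpora, dictionary entries, and hyperparameters below are illustrative.

    # Toy sketch of dictionary-based topic linking (illustrative, not the paper's model).
    import random
    from collections import defaultdict

    K, ALPHA, BETA, ITERS = 2, 0.1, 0.01, 200

    # Toy corpora: English and Spanish documents with no parallel structure.
    docs = [
        ["dog", "cat", "dog", "pet"],
        ["bank", "money", "loan", "bank"],
        ["perro", "gato", "mascota"],
        ["banco", "dinero", "banco"],
    ]
    # Toy bilingual dictionary: each translation pair shares one concept ID.
    dictionary = {("dog", "perro"), ("cat", "gato"), ("pet", "mascota"),
                  ("bank", "banco"), ("money", "dinero")}

    # Map every word to a concept; linked pairs collapse to a shared ID.
    concept = {}
    for en, es in dictionary:
        concept[en] = concept[es] = en
    vocab = {w: concept.get(w, w) for d in docs for w in d}
    V = len(set(vocab.values()))

    # Count tables are indexed by concept, so counts transfer across languages.
    n_dk = defaultdict(int)   # (doc, topic) counts
    n_kw = defaultdict(int)   # (topic, concept) counts
    n_k = defaultdict(int)    # topic totals
    z = [[0] * len(d) for d in docs]

    random.seed(0)
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            t = random.randrange(K)
            z[d][i] = t
            n_dk[d, t] += 1; n_kw[t, vocab[w]] += 1; n_k[t] += 1

    # Collapsed Gibbs sampling: resample each token's topic assignment.
    for _ in range(ITERS):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t, c = z[d][i], vocab[w]
                n_dk[d, t] -= 1; n_kw[t, c] -= 1; n_k[t] -= 1
                weights = [(n_dk[d, k] + ALPHA) * (n_kw[k, c] + BETA)
                           / (n_k[k] + V * BETA) for k in range(K)]
                t = random.choices(range(K), weights)[0]
                z[d][i] = t
                n_dk[d, t] += 1; n_kw[t, c] += 1; n_k[t] += 1

    # Dictionary-linked words end up concentrated in the same topics.
    for k in range(K):
        top = sorted({w for d in docs for w in d},
                     key=lambda w: -n_kw[k, vocab[w]])[:4]
        print(f"topic {k}:", top)

Running the sketch groups "dog"/"perro" and "bank"/"banco" into the same topics, which illustrates the vocabulary-level knowledge transfer the abstract refers to; the paper's actual model and its equivalent formulation are developed in the full text.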
Anthology ID: C18-1220
Volume: Proceedings of the 27th International Conference on Computational Linguistics
Month: August
Year: 2018
Address: Santa Fe, New Mexico, USA
Editors: Emily M. Bender, Leon Derczynski, Pierre Isabelle
Venue: COLING
Publisher: Association for Computational Linguistics
Pages: 2595–2609
URL: https://aclanthology.org/C18-1220
Cite (ACL): Shudong Hao and Michael J. Paul. 2018. Learning Multilingual Topics from Incomparable Corpora. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2595–2609, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Cite (Informal): Learning Multilingual Topics from Incomparable Corpora (Hao & Paul, COLING 2018)
PDF: https://aclanthology.org/C18-1220.pdf