Multi-source Neural Topic Modeling in Multi-view Embedding Spaces

Pankaj Gupta, Yatin Chaudhary, Hinrich Schütze


Abstract
Though word embeddings and topics are complementary representations, several past works have used only pretrained word embeddings in (neural) topic modeling to address data sparsity in short-text or small collections of documents. This work presents a novel neural topic modeling framework using multi-view embedding spaces: (1) pretrained topic-embeddings, and (2) pretrained word-embeddings (context-insensitive from GloVe and context-sensitive from BERT models) jointly from one or many sources to improve topic quality and better deal with polysemy. In doing so, we first build respective pools of pretrained topic (i.e., TopicPool) and word embeddings (i.e., WordPool). We then identify one or more relevant source domain(s) and transfer knowledge to guide meaningful learning in the sparse target domain. Within neural topic modeling, we quantify the quality of topics and document representations via generalization (perplexity), interpretability (topic coherence) and information retrieval (IR) using short-text, long-text, small and large document collections from news and medical domains. Introducing the multi-source multi-view embedding spaces, we show state-of-the-art neural topic modeling using 6 source (high-resource) and 5 target (low-resource) corpora.
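The abstract's core idea of mixing pretrained topic- and word-embedding views into a neural topic model's topic-word distribution can be sketched as follows. This is a minimal illustration, not the paper's actual architecture: the names (`W_learned`, `E_word`, `E_topic`, `lam`) and the additive mixing scheme are assumptions for exposition, and the embeddings are random stand-ins for vectors that would come from the WordPool and TopicPool.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
V, K, D = 1000, 20, 50  # vocabulary size, number of topics, embedding dim

# Topic-word weights learned by the target-domain model's decoder.
W_learned = rng.normal(size=(K, V))

# Pretrained word embeddings from a source "WordPool" (random stand-ins
# for GloVe/BERT-derived vectors).
E_word = rng.normal(size=(V, D))

# Pretrained topic embeddings from a source "TopicPool" (random stand-ins).
E_topic = rng.normal(size=(K, D))

# Multi-view transfer (illustrative): add topic-word affinities computed in
# the pretrained embedding space to the learned weights, scaled by lam.
lam = 0.5
W_transfer = E_topic @ E_word.T               # (K, V) embedding-space affinities
beta = softmax(W_learned + lam * W_transfer)  # per-topic word distributions
```

Each row of `beta` is a distribution over the vocabulary for one topic; the transfer term nudges it toward words that are close to the pretrained topic vectors, which is the intuition behind guiding a sparse target domain with high-resource sources.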
Anthology ID:
2021.naacl-main.332
Volume:
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
June
Year:
2021
Address:
Online
Editors:
Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, Yichao Zhou
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
4205–4217
URL:
https://aclanthology.org/2021.naacl-main.332
DOI:
10.18653/v1/2021.naacl-main.332
Cite (ACL):
Pankaj Gupta, Yatin Chaudhary, and Hinrich Schütze. 2021. Multi-source Neural Topic Modeling in Multi-view Embedding Spaces. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4205–4217, Online. Association for Computational Linguistics.
Cite (Informal):
Multi-source Neural Topic Modeling in Multi-view Embedding Spaces (Gupta et al., NAACL 2021)
PDF:
https://aclanthology.org/2021.naacl-main.332.pdf
Video:
https://aclanthology.org/2021.naacl-main.332.mp4
Code:
YatinChaudhary/Multi-view-Multi-source-Topic-Modeling