Weighed Domain-Invariant Representation Learning for Cross-domain Sentiment Analysis

Minlong Peng, Qi Zhang


Abstract
Cross-domain sentiment analysis is currently a hot topic in both research and industry. One of the most popular frameworks for the task is domain-invariant representation learning (DIRL), which aims to learn a feature representation whose distribution is invariant across domains. However, in this work, we find that applying DIRL may degrade domain adaptation performance when the label distribution P(Y) changes across domains. To address this problem, we propose a modification to DIRL, obtaining a novel weighted domain-invariant representation learning (WDIRL) framework. We show that existing models of the DIRL framework can be easily transferred to the WDIRL framework. Empirical studies on extensive cross-domain sentiment analysis tasks verify our analysis and show the effectiveness of the proposed solution.
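To illustrate the core idea behind the weighting, here is a minimal sketch in plain NumPy. It assumes (for illustration only; this is not necessarily the authors' exact formulation) that source examples of class k are reweighted by w_k = P_t(Y=k) / P_s(Y=k), so that the source feature statistics being aligned reflect the target label distribution rather than the source one:

```python
import numpy as np

def class_weights(p_source, p_target):
    """Per-class weights w_k = P_t(Y=k) / P_s(Y=k) (illustrative assumption)."""
    return {k: p_target[k] / p_source[k] for k in p_source}

def weighted_mean(features, labels, weights):
    """Weighted mean of source features: each class contributes in
    proportion to its *target* prevalence, not its source prevalence."""
    w = np.array([weights[y] for y in labels], dtype=float)
    return (w[:, None] * features).sum(axis=0) / w.sum()

# Toy example: source data is 80% positive, target is 50% positive.
p_s = {0: 0.2, 1: 0.8}
p_t = {0: 0.5, 1: 0.5}
w = class_weights(p_s, p_t)  # {0: 2.5, 1: 0.625}

# One-dimensional "features" equal to the label for clarity.
feats = np.array([[1.0], [1.0], [1.0], [1.0], [0.0]])
labels = [1, 1, 1, 1, 0]
print(weighted_mean(feats, labels, w))  # [0.5], matching the target class balance
```

The unweighted source mean here would be 0.8 (the source positive rate); after reweighting it becomes 0.5, matching the target label distribution, which is the discrepancy an unweighted DIRL objective would wrongly try to remove by distorting the features themselves.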
Anthology ID:
2020.coling-main.22
Volume:
Proceedings of the 28th International Conference on Computational Linguistics
Month:
December
Year:
2020
Address:
Barcelona, Spain (Online)
Editors:
Donia Scott, Nuria Bel, Chengqing Zong
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
251–265
URL:
https://aclanthology.org/2020.coling-main.22
DOI:
10.18653/v1/2020.coling-main.22
Cite (ACL):
Minlong Peng and Qi Zhang. 2020. Weighed Domain-Invariant Representation Learning for Cross-domain Sentiment Analysis. In Proceedings of the 28th International Conference on Computational Linguistics, pages 251–265, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Cite (Informal):
Weighed Domain-Invariant Representation Learning for Cross-domain Sentiment Analysis (Peng & Zhang, COLING 2020)
PDF:
https://aclanthology.org/2020.coling-main.22.pdf