Simplified Neural Unsupervised Domain Adaptation

Timothy Miller


Abstract
Unsupervised domain adaptation (UDA) is the task of training a statistical model on labeled data from a source domain to achieve better performance on data from a target domain, with access to only unlabeled data in the target domain. Existing state-of-the-art UDA approaches use neural networks to learn representations that are trained to predict the values of a subset of important features called "pivot features" on combined data from the source and target domains. In this work, we show that it is possible to improve on existing neural domain adaptation algorithms by 1) jointly training the representation learner with the task learner; and 2) removing the need for heuristically selected "pivot features." Our results show that this simpler model achieves competitive performance.
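As a concrete illustration of the joint-training setup the abstract describes, below is a minimal PyTorch sketch: a shared encoder feeds both a task classifier trained on labeled source data and an auxiliary head trained on unlabeled data from both domains, so the representation and the task are learned together without pivot features. The specific auxiliary objective used here (domain prediction) and all names (JointUDAModel, aux_weight, etc.) are illustrative assumptions, not the paper's exact architecture.

```python
# A minimal sketch of jointly training a representation learner with a
# task learner for UDA. The auxiliary domain-prediction objective is an
# assumption for illustration; it stands in for whatever unsupervised
# signal replaces pivot-feature prediction.
import torch
import torch.nn as nn

class JointUDAModel(nn.Module):
    def __init__(self, input_dim, hidden_dim, num_classes):
        super().__init__()
        # Shared representation learner used by both objectives.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
        )
        # Task head: trained only on labeled source examples.
        self.task_head = nn.Linear(hidden_dim, num_classes)
        # Auxiliary head: trained on unlabeled source + target examples.
        self.domain_head = nn.Linear(hidden_dim, 2)

    def forward(self, x):
        h = self.encoder(x)
        return self.task_head(h), self.domain_head(h)

def train_step(model, optimizer, src_x, src_y, all_x, all_domain,
               aux_weight=0.1):
    """One joint update: task loss on labeled source data plus an
    auxiliary loss on combined unlabeled source and target data."""
    ce = nn.CrossEntropyLoss()
    optimizer.zero_grad()
    task_logits, _ = model(src_x)          # labeled source batch
    _, domain_logits = model(all_x)        # combined unlabeled batch
    loss = (ce(task_logits, src_y)
            + aux_weight * ce(domain_logits, all_domain))
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because both losses backpropagate through the same encoder in one step, the representation is shaped by the end task and the unsupervised signal simultaneously, which is the contrast with pipeline approaches that pre-train a pivot-prediction representation and then train the task learner on top of it. The aux_weight hyperparameter is likewise a hypothetical knob for balancing the two objectives.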
Anthology ID: N19-1039
Volume: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
Month: June
Year: 2019
Address: Minneapolis, Minnesota
Editors: Jill Burstein, Christy Doran, Thamar Solorio
Venue: NAACL
Publisher: Association for Computational Linguistics
Pages: 414–419
URL: https://aclanthology.org/N19-1039
DOI: 10.18653/v1/N19-1039
Cite (ACL): Timothy Miller. 2019. Simplified Neural Unsupervised Domain Adaptation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 414–419, Minneapolis, Minnesota. Association for Computational Linguistics.
Cite (Informal): Simplified Neural Unsupervised Domain Adaptation (Miller, NAACL 2019)
PDF: https://aclanthology.org/N19-1039.pdf