Adversarial Domain Adaptation for Variational Neural Language Generation in Dialogue Systems

Van-Khanh Tran, Le-Minh Nguyen


Abstract
Domain adaptation arises when we aim to learn, from a source domain, a model that performs acceptably well on a different target domain. It is especially crucial for Natural Language Generation (NLG) in Spoken Dialogue Systems, where annotated data is plentiful in the source domain but labeled data in the target domain is limited. Effectively transferring as much capability as possible from the source domain is therefore a central issue in domain adaptation. In this paper, we propose an adversarial training procedure that trains a variational encoder-decoder based language generator via multiple adaptation steps. In this procedure, a model is first trained on source-domain data and then fine-tuned on a small set of target-domain utterances under the guidance of two proposed critics. Experimental results show that the proposed method can effectively leverage existing knowledge in the source domain to adapt to a related target domain using only a small amount of in-domain data.
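The abstract's two-phase procedure (pretrain on source data, then adversarially fine-tune under a critic's guidance) can be sketched in minimal form. This is an illustrative toy, not the paper's actual model: the variational encoder is stood in for by a single linear map, and one logistic domain critic replaces the paper's two critics; all names, dimensions, and learning rates are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LinearEncoder:
    """Stand-in for the variational encoder: a single linear map (hypothetical)."""
    def __init__(self, d_in, d_h):
        self.W = rng.normal(scale=0.1, size=(d_in, d_h))

    def __call__(self, X):
        return X @ self.W

class DomainCritic:
    """Logistic critic estimating P(representation comes from the target domain)."""
    def __init__(self, d_h):
        self.w = np.zeros(d_h)

    def prob_target(self, H):
        return sigmoid(H @ self.w)

    def step(self, H, y, lr=0.1):
        # gradient ascent on the log-likelihood of domain labels y (1 = target)
        p = self.prob_target(H)
        self.w += lr * H.T @ (y - p) / len(y)

# toy data: a large source set and a small target set with shifted means
d_in, d_h = 8, 4
X_src = rng.normal(loc=0.0, size=(200, d_in))
X_tgt = rng.normal(loc=1.0, size=(20, d_in))

enc = LinearEncoder(d_in, d_h)
critic = DomainCritic(d_h)

# Phase 1 (omitted here): pretrain the generator on source-domain data.

# Phase 2: adversarial fine-tuning -- alternate critic updates with encoder
# updates that push source encodings to look target-like, fooling the critic.
for _ in range(50):
    H = np.vstack([enc(X_src), enc(X_tgt)])
    y = np.concatenate([np.zeros(len(X_src)), np.ones(len(X_tgt))])
    critic.step(H, y)
    # encoder update: ascend d/dH of log P(target | H) = (1 - p) * w
    H_src = enc(X_src)
    p = critic.prob_target(H_src)
    grad_H = (1.0 - p)[:, None] * critic.w
    enc.W += 0.05 * X_src.T @ grad_H / len(X_src)
```

The alternation mirrors the abstract's idea that the critic guides fine-tuning: the critic learns to separate the two domains while the encoder is updated to close that gap with only a handful of target examples.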
Anthology ID:
C18-1103
Volume:
Proceedings of the 27th International Conference on Computational Linguistics
Month:
August
Year:
2018
Address:
Santa Fe, New Mexico, USA
Editors:
Emily M. Bender, Leon Derczynski, Pierre Isabelle
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
1205–1217
URL:
https://aclanthology.org/C18-1103
Cite (ACL):
Van-Khanh Tran and Le-Minh Nguyen. 2018. Adversarial Domain Adaptation for Variational Neural Language Generation in Dialogue Systems. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1205–1217, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Cite (Informal):
Adversarial Domain Adaptation for Variational Neural Language Generation in Dialogue Systems (Tran & Nguyen, COLING 2018)
PDF:
https://aclanthology.org/C18-1103.pdf