Adversarial Domain Adaptation Using Artificial Titles for Abstractive Title Generation

Francine Chen, Yan-Ying Chen


Abstract
A common issue in training a deep learning, abstractive summarization model is the lack of a large set of training summaries. This paper examines techniques for adapting from a labeled source domain to an unlabeled target domain in the context of an encoder-decoder model for text generation. In addition to adversarial domain adaptation (ADA), we introduce the use of artificial titles and sequential training to capture the grammatical style of the unlabeled target domain. Evaluation on adapting to/from news articles and Stack Exchange posts indicates that these techniques can boost performance both for unsupervised adaptation and for fine-tuning with limited target data.
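The core ADA idea the abstract refers to can be sketched as a shared encoder trained against a domain classifier through a gradient-reversal layer, so the encoder learns features that transfer between source and target domains. The sketch below is a minimal, hypothetical PyTorch illustration (the module names, dimensions, and the simple linear heads are assumptions for illustration, not the paper's actual encoder-decoder architecture):

```python
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; negates and scales gradients on backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed gradient flows into the encoder; no gradient for lambd.
        return -ctx.lambd * grad_output, None


class DomainAdversarialModel(nn.Module):
    """A shared encoder feeds a task head directly and a domain classifier
    through gradient reversal, pushing encoder features to be domain-invariant."""

    def __init__(self, in_dim=32, hid=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.task_head = nn.Linear(hid, 2)    # stand-in for the generation decoder
        self.domain_head = nn.Linear(hid, 2)  # predicts source vs. target domain

    def forward(self, x, lambd=1.0):
        h = self.encoder(x)
        task_logits = self.task_head(h)
        domain_logits = self.domain_head(GradientReversal.apply(h, lambd))
        return task_logits, domain_logits
```

In a training step, the task loss is computed on labeled source examples while the domain loss is computed on both domains; because of the reversal, minimizing the total loss trains the domain head to discriminate domains while training the encoder to fool it.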
Anthology ID:
P19-1211
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Editors:
Anna Korhonen, David Traum, Lluís Màrquez
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
2197–2203
URL:
https://aclanthology.org/P19-1211
DOI:
10.18653/v1/P19-1211
Cite (ACL):
Francine Chen and Yan-Ying Chen. 2019. Adversarial Domain Adaptation Using Artificial Titles for Abstractive Title Generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2197–2203, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Adversarial Domain Adaptation Using Artificial Titles for Abstractive Title Generation (Chen & Chen, ACL 2019)
PDF:
https://aclanthology.org/P19-1211.pdf