Deep Recurrent Generative Decoder for Abstractive Text Summarization

Piji Li, Wai Lam, Lidong Bing, Zihao Wang


Abstract
We propose a new framework for abstractive text summarization based on a sequence-to-sequence oriented encoder-decoder model equipped with a deep recurrent generative decoder (DRGD). Latent structure information implied in the target summaries is learned based on a recurrent latent random model to improve summarization quality. Neural variational inference is employed to address the intractable posterior inference for the recurrent latent variables. Abstractive summaries are generated based on both the generative latent variables and the discriminative deterministic states. Extensive experiments on benchmark datasets in different languages show that DRGD achieves improvements over state-of-the-art methods.
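The core idea in the abstract is a decoder that combines a deterministic recurrent state with a stochastic latent variable trained by neural variational inference. The sketch below illustrates one decoding step of that idea in PyTorch; it is a minimal illustration under stated assumptions, not the paper's exact model. The class name, layer sizes, fusion by concatenation, and the standard Gaussian prior are all assumptions here (in the paper, the latent variables are themselves recurrent, with the prior depending on previous latents), and the inference network, reparameterization trick, and per-step KL term shown are the standard VAE machinery the abstract refers to.

```python
import torch
import torch.nn as nn

class RecurrentGenerativeDecoderStep(nn.Module):
    """One decoding step combining a deterministic GRU state h_t with a
    VAE-style latent variable z_t. Hypothetical simplification of the
    DRGD idea: a standard Gaussian prior replaces the paper's recurrent
    prior, and all layer sizes are illustrative."""

    def __init__(self, embed_dim, hidden_dim, latent_dim, vocab_size):
        super().__init__()
        self.gru = nn.GRUCell(embed_dim, hidden_dim)
        # Inference network q(z_t | h_t): mean and log-variance heads.
        self.q_mu = nn.Linear(hidden_dim, latent_dim)
        self.q_logvar = nn.Linear(hidden_dim, latent_dim)
        # Output layer conditioned on both the discriminative deterministic
        # state h_t and the generative latent variable z_t.
        self.out = nn.Linear(hidden_dim + latent_dim, vocab_size)

    def forward(self, y_prev_emb, h_prev):
        h_t = self.gru(y_prev_emb, h_prev)                # deterministic state
        mu, logvar = self.q_mu(h_t), self.q_logvar(h_t)   # variational posterior
        # Reparameterization: z_t = mu + sigma * eps, eps ~ N(0, I).
        z_t = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        logits = self.out(torch.cat([h_t, z_t], dim=-1))
        # KL( q(z_t | h_t) || N(0, I) ), the regularizer in the variational
        # lower bound that makes the posterior inference tractable.
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
        return logits, h_t, kl

# Usage sketch: one step over a batch of 4 partial summaries.
step = RecurrentGenerativeDecoderStep(embed_dim=128, hidden_dim=256,
                                      latent_dim=64, vocab_size=30000)
emb = torch.randn(4, 128)   # embeddings of the previous target tokens
h = torch.zeros(4, 256)     # initial decoder state (e.g., from the encoder)
logits, h, kl = step(emb, h)
# Per-step training loss would be cross_entropy(logits, y_t) + kl.mean().
```

Training would sum the word-level cross-entropy and the KL terms over all decoding steps, i.e., maximize a variational lower bound rather than the intractable exact likelihood.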
Anthology ID:
D17-1222
Volume:
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Month:
September
Year:
2017
Address:
Copenhagen, Denmark
Editors:
Martha Palmer, Rebecca Hwa, Sebastian Riedel
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
2091–2100
URL:
https://aclanthology.org/D17-1222
DOI:
10.18653/v1/D17-1222
Cite (ACL):
Piji Li, Wai Lam, Lidong Bing, and Zihao Wang. 2017. Deep Recurrent Generative Decoder for Abstractive Text Summarization. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2091–2100, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal):
Deep Recurrent Generative Decoder for Abstractive Text Summarization (Li et al., EMNLP 2017)
PDF:
https://aclanthology.org/D17-1222.pdf
Data:
DUC 2004
LCSTS