Controlling Length in Abstractive Summarization Using a Convolutional Neural Network

Yizhu Liu, Zhiyi Luo, Kenny Zhu


Abstract
Convolutional neural networks (CNNs) have met great success in abstractive summarization, but they cannot effectively generate summaries of desired lengths. Because generated summaries are used in different scenarios which may have space or length constraints, the ability to control the summary length in abstractive summarization is an important problem. In this paper, we propose an approach to constrain the summary length by extending a convolutional sequence-to-sequence model. The results show that this approach generates high-quality summaries with user-defined length, and outperforms the baselines consistently in terms of ROUGE score, length variations and semantic similarity.
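The general idea of length-conditioned decoding can be illustrated with a toy sketch: at each decoding step, the input is the token embedding concatenated with an embedding of the remaining length budget, so a trained model can learn when to stop. All names, dimensions, and the bucketing scheme below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

# Hypothetical sketch of length-conditioned decoding (not the paper's
# exact mechanism): each decoder input carries a "remaining length"
# signal alongside the token embedding.

rng = np.random.default_rng(0)
vocab_size, emb_dim, len_buckets, len_dim = 20, 8, 5, 4

tok_emb = rng.normal(size=(vocab_size, emb_dim))   # token embedding table
len_emb = rng.normal(size=(len_buckets, len_dim))  # remaining-length bucket embeddings

def decoder_input(token_id: int, remaining: int) -> np.ndarray:
    """Concatenate a token embedding with the remaining-length embedding."""
    bucket = max(0, min(remaining, len_buckets - 1))  # clip budget into a bucket
    return np.concatenate([tok_emb[token_id], len_emb[bucket]])

# Decode with a budget of 3 tokens: the length signal shrinks each step,
# which is what lets a trained model end the summary on time.
budget = 3
inputs = [decoder_input(t, budget - i) for i, t in enumerate([1, 5, 7])]
print(len(inputs), inputs[0].shape)
```

Each input vector has dimension `emb_dim + len_dim` (12 here); in a real conv seq2seq decoder these vectors would feed the convolutional blocks in place of plain token embeddings.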
Anthology ID:
D18-1444
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Editors:
Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
4110–4119
URL:
https://aclanthology.org/D18-1444
DOI:
10.18653/v1/D18-1444
Bibkey:
Cite (ACL):
Yizhu Liu, Zhiyi Luo, and Kenny Zhu. 2018. Controlling Length in Abstractive Summarization Using a Convolutional Neural Network. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4110–4119, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Controlling Length in Abstractive Summarization Using a Convolutional Neural Network (Liu et al., EMNLP 2018)
PDF:
https://aclanthology.org/D18-1444.pdf
Code
 YizhuLiu/sumlen
Data
DMQA