Towards Improving Abstractive Summarization via Entailment Generation

Ramakanth Pasunuru, Han Guo, Mohit Bansal


Abstract
Abstractive summarization, the task of rewriting and compressing a document into a short summary, has achieved considerable success with neural sequence-to-sequence models. However, these models can still benefit from stronger natural language inference skills, since a correct summary is logically entailed by the input document, i.e., it should not contain any contradictory or unrelated information. We incorporate such knowledge into an abstractive summarization model via multi-task learning, where we share its decoder parameters with those of an entailment generation model. We achieve promising initial improvements based on multiple metrics and datasets (including a test-only setting). The domain mismatch between the entailment (captions) and summarization (news) datasets suggests that the model is learning some domain-agnostic inference skills.
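To make the shared-decoder idea concrete, here is a minimal sketch (not the authors' released code) of the multi-task setup the abstract describes: two sequence-to-sequence models, one for summarization and one for entailment generation, whose decoders are a single shared module. All module names, dimensions, and the alternating-batch training schedule are illustrative assumptions.

```python
# Minimal sketch of multi-task learning with a shared decoder.
# Sizes, module names, and the training schedule are assumptions,
# not the paper's actual configuration.
import torch
import torch.nn as nn

VOCAB, EMB, HID = 50_000, 256, 512

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.LSTM(EMB, HID, batch_first=True)

    def forward(self, src):
        _, (h, c) = self.rnn(self.emb(src))
        return (h, c)  # final state, used to initialize the decoder

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.LSTM(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, tgt, state):
        h, _ = self.rnn(self.emb(tgt), state)
        return self.out(h)

# Task-specific encoders, but ONE decoder object, so its parameters
# are literally shared between the two tasks.
summ_enc, entail_enc = Encoder(), Encoder()
shared_dec = Decoder()

params = (list(summ_enc.parameters()) + list(entail_enc.parameters())
          + list(shared_dec.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def step(enc, src, tgt):
    """One teacher-forced step; gradients flow into the shared decoder."""
    logits = shared_dec(tgt[:, :-1], enc(src))
    loss = loss_fn(logits.reshape(-1, VOCAB), tgt[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Assumed schedule: alternate mini-batches between the two tasks, e.g.
#   step(summ_enc, document_batch, summary_batch)
#   step(entail_enc, premise_batch, hypothesis_batch)
```

Because the decoder is updated by both losses, the entailment-generation batches push it toward producing outputs that are logically entailed by their inputs, which is the inference skill the paper aims to transfer to summarization.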
Anthology ID: W17-4504
Volume: Proceedings of the Workshop on New Frontiers in Summarization
Month: September
Year: 2017
Address: Copenhagen, Denmark
Editors: Lu Wang, Jackie Chi Kit Cheung, Giuseppe Carenini, Fei Liu
Venue: WS
Publisher: Association for Computational Linguistics
Pages: 27–32
URL: https://aclanthology.org/W17-4504
DOI: 10.18653/v1/W17-4504
Cite (ACL): Ramakanth Pasunuru, Han Guo, and Mohit Bansal. 2017. Towards Improving Abstractive Summarization via Entailment Generation. In Proceedings of the Workshop on New Frontiers in Summarization, pages 27–32, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal): Towards Improving Abstractive Summarization via Entailment Generation (Pasunuru et al., 2017)
PDF: https://aclanthology.org/W17-4504.pdf
Data: SNLI