Coarse-to-Fine Attention Models for Document Summarization

Jeffrey Ling, Alexander Rush


Abstract
Sequence-to-sequence models with attention have been successful for a variety of NLP problems, but their speed does not scale well for tasks with long source sequences such as document summarization. We propose a novel coarse-to-fine attention model that hierarchically reads a document, using coarse attention to select top-level chunks of text and fine attention to read the words of the chosen chunks. While the computation for training standard attention models scales linearly with source sequence length, our method scales with the number of top-level chunks and can handle much longer sequences. Empirically, we find that while coarse-to-fine attention models lag behind state-of-the-art baselines, our method achieves the desired behavior of sparsely attending to subsets of the document for generation.
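The abstract describes a two-level attention procedure: coarse attention scores top-level chunks of the document, one chunk is selected, and fine attention is computed only over the words of that chunk. Below is a minimal NumPy sketch of that idea, not the paper's exact parameterization: the dot-product scoring, mean-pooled chunk representations, and the names softmax and coarse_to_fine_context are illustrative assumptions.

```python
# Minimal coarse-to-fine attention sketch (assumptions noted above):
# coarse attention over chunk representations, hard selection of one chunk,
# then fine attention over only that chunk's word representations.
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def coarse_to_fine_context(query, chunk_reprs, word_reprs, hard=True, rng=None):
    """
    query:       (d,) decoder hidden state
    chunk_reprs: (num_chunks, d) one representation per top-level chunk
    word_reprs:  list of (chunk_len_i, d) word encodings for each chunk
    Returns (context vector of shape (d,), index of the chosen chunk).
    """
    # Coarse attention: a distribution over chunks, O(num_chunks) work
    # instead of O(document length).
    coarse_scores = chunk_reprs @ query              # (num_chunks,)
    coarse_probs = softmax(coarse_scores)

    # Select a single chunk: argmax here for simplicity, or sample from
    # the coarse distribution (hard selection would be trained with a
    # policy-gradient / REINFORCE-style estimator).
    if hard:
        k = int(np.argmax(coarse_probs))
    else:
        rng = rng or np.random.default_rng()
        k = int(rng.choice(len(coarse_probs), p=coarse_probs))

    # Fine attention: only over the words of the chosen chunk, so the cost
    # depends on the chunk length rather than the full document length.
    words = word_reprs[k]                            # (chunk_len_k, d)
    fine_probs = softmax(words @ query)              # (chunk_len_k,)
    context = fine_probs @ words                     # (d,)
    return context, k

# Toy usage: a "document" of 3 chunks with different lengths.
if __name__ == "__main__":
    d, rng = 8, np.random.default_rng(0)
    chunks = [rng.normal(size=(n, d)) for n in (5, 7, 4)]
    chunk_reprs = np.stack([c.mean(axis=0) for c in chunks])  # mean-pooled chunks
    query = rng.normal(size=d)
    ctx, chosen = coarse_to_fine_context(query, chunk_reprs, chunks)
    print("chose chunk", chosen, "context shape", ctx.shape)
```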
Anthology ID: W17-4505
Volume: Proceedings of the Workshop on New Frontiers in Summarization
Month: September
Year: 2017
Address: Copenhagen, Denmark
Editors: Lu Wang, Jackie Chi Kit Cheung, Giuseppe Carenini, Fei Liu
Venue: WS
Publisher: Association for Computational Linguistics
Pages: 33–42
URL: https://aclanthology.org/W17-4505
DOI: 10.18653/v1/W17-4505
Cite (ACL): Jeffrey Ling and Alexander Rush. 2017. Coarse-to-Fine Attention Models for Document Summarization. In Proceedings of the Workshop on New Frontiers in Summarization, pages 33–42, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal): Coarse-to-Fine Attention Models for Document Summarization (Ling & Rush, 2017)
PDF: https://aclanthology.org/W17-4505.pdf
Data: CNN/Daily Mail