Generating Summaries with Topic Templates and Structured Convolutional Decoders

Existing neural generation approaches create multi-sentence text as a single sequence. In this paper we propose a structured convolutional decoder that is guided by the content structure of target summaries. We compare our model with existing sequential decoders on three data sets representing different domains. Automatic and human evaluation demonstrate that our summaries have better content coverage.


Introduction
Abstractive multi-document summarization aims at generating a coherent summary from a cluster of thematically related documents. Recently, Liu et al. (2018) proposed generating the lead section of a Wikipedia article as a variant of multi-document summarization and released WikiSum, a large-scale summarization dataset which enables the training of neural models.
Like most previous work on neural text generation (Gardent et al., 2017; See et al., 2017; Wiseman et al., 2017; Puduppully et al., 2019; Celikyilmaz et al., 2018; Liu et al., 2018; Perez-Beltrachini and Lapata, 2018; Marcheggiani and Perez-Beltrachini, 2018), Liu et al. (2018) represent the target summaries as a single long sequence, despite the fact that documents are organized into topically coherent text segments, exhibiting a specific structure in terms of the content they discuss (Barzilay and Lee, 2004). This is especially the case when generating text within a specific domain where certain topics might be discussed in a specific order (Wray, 2002). For instance, the summary in Table 1 is about a species of damselfly; the second sentence describes the region where the species is found and the fourth the type of habitat the species lives in. We would expect other Animal Wikipedia summaries to exhibit similar content organization.
In this work we propose a neural model which is guided by the topic structure of target summaries, i.e., the way content is organized into sentences and the type of content these sentences discuss. Our model consists of a structured decoder which is trained to predict a sequence of sentence topics that should be discussed in the summary and to generate sentences based on these. We extend the convolutional decoder of Gehring et al. (2017) so as to be aware of which topics to mention in each sentence as well as their position in the target summary. We argue that a decoder which explicitly takes content structure into account could lead to better summaries and alleviate well-known issues with neural generation models being too general, too brief, or simply incorrect.
Although content structure has been largely unexplored within neural text generation, it has been recognized as useful for summarization. Barzilay and Lee (2004) build a model of the content structure of source documents and target summaries and use it to extract salient facts from the source. Sauper and Barzilay (2009) cluster texts by target topic and use a global optimisation algorithm to select the best combination of facts from each cluster. Although these models have shown good results in terms of content selection, they cannot generate target summaries. Our model is also related to the hierarchical decoding approaches of Li et al. (2015) and Tan et al. (2017). However, the former approach auto-encodes the same inputs (our model carries out content selection for the summarization task), while the latter generates independent sentences. They also both rely on recurrent neural models, while we use convolutional neural networks. To our knowledge this is the first hierarchical decoder proposed for a non-recurrent architecture.
To evaluate our model, we introduce WIKICATSUM, a dataset derived from Liu et al. (2018) which consists of Wikipedia abstracts and source documents and is representative of three domains, namely Companies, Films, and Animals. In addition to differences in vocabulary and range of topics, these domains differ in terms of the linguistic characteristics of the target summaries. We compare single-sequence decoders and structured decoders using ROUGE and a suite of new metrics we propose in order to quantify the content adequacy of the generated summaries. We also show that structured decoding improves content coverage based on human judgments.

[Table 1, summary:] agriocnemis zerafica is a species of damselfly in the family coenagrionidae. it is native to africa, where it is widespread across the central and western nations of the continent. it is known by the common name sahel wisp. this species occurs in swamps and pools in dry regions. there are no major threats but it may be affected by pollution and habitat loss to agriculture and development.

[Table 1, input paragraphs:] agriocnemis zerafica EOT global distribution: the species is known from north-west uganda and sudan, through niger to mauritania and liberia: a larger sahelian range, i.e., in more arid zone than other african agriocnemis. record from angola unlikely. northeastern africa distribution: the species was listed by tsuda for sudan. [• • •] EOP very small, about 20mm. orange tail. advised agriocnemis sp. id by kd dijkstra: [• • •] EOP same creature as previously posted as unknown, very small, about 20mm, over water, top view. advised probably agriocnemis, "whisp" damselfly. EOP [• • •] EOP justification: this is a widespread species with no known major widespread threats that is unlikely to be declining fast enough to qualify for listing in a threatened category. it is therefore assessed as least concern. EOP the species has been recorded from northwest uganda and sudan, through niger to mauritania and [• • •] EOP the main threats to the species are habitat loss due to agriculture, urban development and drainage, as well as water pollution.

The Summarization Task
The Wikipedia lead section introduces the entity (e.g., Country or Brazil) the article is about, highlighting important facts associated with it. Liu et al. (2018) further assume that this lead section is a summary of multiple documents related to the entity. Based on this premise, they propose the multi-document summarization task of generating the lead section from the set of documents cited in Wikipedia articles or returned by Google (using article titles as queries), and create WikiSum, a large-scale multi-document summarization dataset with hundreds of thousands of instances. Liu et al. (2018) focus on summarization from very long sequences. Their model first selects a subset of salient passages by ranking all paragraphs from the set of input documents (based on their TF-IDF similarity with the title of the article). The L best-ranked paragraphs (up to 7.5k tokens) are concatenated into a flat sequence, and a decoder-only architecture (Vaswani et al., 2017) is used to generate the summary.
We explicitly model the topic structure of summaries, under the assumption that documents cover different topics about a given entity, while the summary covers the most salient ones and organizes them into a coherent multi-sentence text. We further assume that different lead summaries are appropriate for different entities (e.g., Animals vs. Films) and thus concentrate on specific domains. We associate Wikipedia articles with "domains" by querying the DBPedia knowledge-base. Our WIKICATSUM dataset is available at github.com/lauhaide/WikiCatSum.
A training instance in our setting is a (domain-specific) paragraph cluster (the multi-document input) and the Wikipedia lead section (the target summary). We derive sentence topic templates from summaries for Animals, Films, and Companies and exploit these to guide the summariser. However, there is nothing inherent in our model that prevents its application to other domains.

Generation with Content Guidance
Our model takes as input a set of ranked paragraphs, which are concatenated into a single token sequence X = (x_1, ..., x_|X|), and generates a target summary S = (s_1, ..., s_|S|), where s_t denotes the t-th sentence.
We adopt an encoder-decoder architecture which makes use of convolutional neural networks (CNNs; Gehring et al. 2017). CNNs permit parallel training (Gehring et al., 2017) and have shown good performance in abstractive summarization tasks (e.g., Narayan et al. 2018). Figure 1 illustrates the architecture of our model. We use the convolutional encoder of Gehring et al. (2017) to obtain a sequence of encoder states (z_1, ..., z_|X|), one per input token. A hierarchical convolutional decoder generates the target sentences based on the encoder outputs. Specifically, a document-level decoder first generates sentence vectors (LSTM Document Decoder in Figure 1), representing the content specification for each sentence that the model plans to decode. A sentence-level decoder (CNN Sentence Decoder in Figure 1) is then applied to generate an actual sentence token by token. In the following we describe the two decoders in more detail and how they are combined to generate summaries.

Document-level Decoder
The document-level decoder builds a sequence of sentence representations (s_1, ..., s_|S|). For example, s_1 in Figure 1 is the vector representation for the sentence Aero is a firm. This layer uses an LSTM with attention. At each time step t, the LSTM constructs an output state s_t, representing the content of the t-th sentence that the model plans to generate:

s_t = tanh(W_s [h_t ; c^s_t])

where h_t is the LSTM hidden state of step t and c^s_t is the context vector computed by attending to the input. The initial hidden state h_0 is initialized with the average of the encoder output states. We use a soft attention mechanism (Luong et al., 2015) to compute the context vector c^s_t:

c^s_t = Σ_j α^s_{jt} z_j

where α^s_{jt} is the attention weight for the document-level decoder attending to input token x_j (with encoder state z_j) at time step t.
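A minimal sketch of one document-level decoder step, with the vector algebra written out in plain Python. The way h_t and c^s_t are combined into s_t (an elementwise tanh of their sum, standing in for a learned projection) is an illustrative assumption:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attend(h_t, encoder_states):
    """Luong-style soft attention: dot-product scores over encoder states z_j,
    normalised into weights alpha, then a weighted sum giving context c^s_t."""
    scores = [dot(h_t, z_j) for z_j in encoder_states]
    alphas = softmax(scores)
    dim = len(encoder_states[0])
    c_t = [sum(a * z[i] for a, z in zip(alphas, encoder_states)) for i in range(dim)]
    return c_t, alphas

def sentence_state(h_t, c_t):
    """Combine the LSTM hidden state and the context vector into s_t
    (elementwise tanh of their sum, a stand-in for the learned layer)."""
    return [math.tanh(h + c) for h, c in zip(h_t, c_t)]
```

Attention weights always form a distribution over input tokens, so s_t summarises both what the LSTM plans to say and where in the input the supporting content lies.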

Sentence-level Decoder
Each sentence s_t = (y_{t1}, ..., y_{t|s_t|}) in target summary S is generated by a sentence-level decoder. The convolutional architecture proposed in Gehring et al. (2017) combines word embeddings with positional embeddings. That is, the word representation w_{ti} of each target word y_{ti} is combined with a vector e_i indicating where this word is in the sentence, w_{ti} = emb(y_{ti}) + e_i. We extend this representation by adding a sentence positional embedding. For each sentence s_t the decoder incorporates the representation of its position t. This explicitly informs the decoder which sentence in the target document to decode for. Thus, we redefine word representations as w_{ti} = emb(y_{ti}) + e_i + e_t.
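The extended word representation can be sketched as follows; the embedding tables, vocabulary, and dimensionality are hypothetical stand-ins for learned parameters:

```python
import random

random.seed(0)
DIM = 4  # toy embedding dimensionality

def _vec():
    return [random.uniform(-0.1, 0.1) for _ in range(DIM)]

# Hypothetical embedding tables: word embeddings, token-position embeddings e_i,
# and the sentence-position embeddings e_t added by the model.
word_emb = {w: _vec() for w in ["aero", "is", "a", "firm"]}
tok_pos_emb = [_vec() for _ in range(50)]   # e_i
sent_pos_emb = [_vec() for _ in range(20)]  # e_t

def word_repr(y_ti, i, t):
    """w_ti = emb(y_ti) + e_i + e_t: the decoder input for token i of sentence t."""
    return [a + b + c for a, b, c in
            zip(word_emb[y_ti], tok_pos_emb[i], sent_pos_emb[t])]
```

The same word at the same within-sentence position gets a different representation in different sentences, which is exactly what lets the decoder condition on sentence position.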

Hierarchical Convolutional Decoder
In contrast to recurrent networks where initial conditioning information is used to initialize the hidden state, in the convolutional decoder this information is introduced via an attention mechanism.In this paper we extend the multi-step attention (Gehring et al., 2017) with sentence vectors s t generated by the document-level decoder.
The output vectors o^l_{ti} for each layer l of the convolutional decoder, when generating tokens for the t-th sentence, are obtained by adding to the standard convolutional outputs the corresponding sentence state s_t produced by the document-level decoder and a sentence-level context vector c^l_{ti}. Following the multi-step attention of Gehring et al. (2017), c^l_{ti} is calculated by combining o^l_{ti} and s_t with the previous target embedding g_{ti}:

d^l_{ti} = W^l_d o^l_{ti} + b^l_d + g_{ti} + s_t
α^l_{tij} = softmax_j(d^l_{ti} · z_j)
c^l_{ti} = Σ_j α^l_{tij} z_j

The prediction of word y_{ti} is conditioned on the output vectors of the top convolutional layer L, as P(y_{ti} | y_{t,1:i−1}) = softmax(W_y (o^L_{ti} + c^L_{ti})). The model is trained to optimize the negative log likelihood L_NLL.
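The output projection can be sketched in plain Python; the toy weight matrix used in the test stands in for the learned W_y:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def predict_token(o_L, c_L, W_y):
    """P(y_ti | y_t,1:i-1) = softmax(W_y (o^L_ti + c^L_ti)): the top-layer
    output vector and context vector are summed, projected onto the
    vocabulary, and normalised into a distribution over next tokens."""
    v = [o + c for o, c in zip(o_L, c_L)]
    return softmax(matvec(W_y, v))
```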

Topic Guidance
To further render the document-level decoder topic-aware, we annotate the sentences of ground-truth summaries with topic templates and force the model to predict these. To discover topic templates from summaries, we train a Latent Dirichlet Allocation model (LDA; Blei et al. 2003), treating sentences as documents, to obtain sentence-level topic distributions. Since the number of topics discussed in the summary is larger than the number of topics discussed in a single sentence, we use a symmetric Dirichlet prior (i.e., we have no a-priori knowledge of the topics) with the concentration parameter set to favour sparsity, in order to encourage the assignment of few topics to sentences. We use the learnt topic model consisting of K = {k_1, ..., k_|K|} topics to annotate summary sentences with a topic vector. For each sentence, we assign a topic label from K corresponding to its most likely topic. Table 2 shows topics discovered by LDA and the annotated target sentences for the three domains we consider. We train the document-level decoder to predict the topic k_t of sentence s_t as an auxiliary task, P(k_t | s_{1:t−1}) = softmax(W_k s_t), and optimize the sum of the L_NLL loss and the negative log likelihood of P(k_t | s_{1:t−1}).
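The sentence annotation step can be sketched as follows. The topic-word probabilities below are hypothetical values standing in for a trained LDA model, and scoring a topic by summed log word probabilities is a simplification of taking the sentence's most likely topic:

```python
import math

def annotate_sentence(tokens, topic_word_probs, n_topics):
    """Assign a sentence the topic label it is most likely under.
    `topic_word_probs` maps (topic, word) to a probability (hypothetical
    values in the test; a trained LDA model would supply these). Each
    topic is scored by the log-probability of the sentence's words,
    with a small floor for unseen words."""
    best_k, best_score = 0, float("-inf")
    for k in range(n_topics):
        score = sum(math.log(topic_word_probs.get((k, w), 1e-9)) for w in tokens)
        if score > best_score:
            best_k, best_score = k, score
    return best_k
```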

Experimental setup
Data Our WIKICATSUM dataset includes the first 800 tokens from the input sequence of paragraphs (Liu et al., 2018) and the Wikipedia lead sections. We included pairs with more than 5 source documents and with more than 23 tokens in the lead section (see Appendix A for details). Each dataset was split into train (90%), validation (5%) and test (5%) sets. Table 3 shows dataset statistics.
We compute recall ROUGE scores of the input documents against the summaries to assess the amount of overlap, and as a reference for interpreting the scores achieved by the models. Across domains, content overlap (R1) is ~50 points. However, R2 is much lower, indicating that the summaries involve abstraction, paraphrasing, and content selection with respect to the input. We rank input paragraphs with a weighted TF-IDF similarity metric which takes paragraph length into account (Singhal et al., 2017).
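A simplified sketch of the paragraph ranker; the exact pivoted length normalisation of Singhal et al. is approximated here by a logarithmic length penalty, which is an assumption for illustration:

```python
import math
from collections import Counter

def rank_paragraphs(title_tokens, paragraphs):
    """Rank paragraphs (lists of tokens) by TF-IDF similarity to the article
    title, discounted by paragraph length. Returns paragraph indices, best
    first. Document frequencies are computed over the paragraph set itself."""
    n = len(paragraphs)
    df = Counter()
    for p in paragraphs:
        for w in set(p):
            df[w] += 1

    def score(p):
        tf = Counter(p)
        s = sum(tf[w] * math.log((n + 1) / (1 + df[w])) for w in title_tokens)
        return s / math.log(2 + len(p))  # length penalty (illustrative)

    return sorted(range(n), key=lambda i: score(paragraphs[i]), reverse=True)
```

In the model described above, the top-ranked paragraphs (up to the 800-token budget) form the input sequence.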
The column TopicNb in Table 3 shows the number of topics in the topic models selected for each domain, and Table 2 shows some of the topics (see Appendix A for training and selection details). The optimal number of topics differs for each domain. In addition to general topics which are discussed across domain instances (e.g., topic #0 in Animal), there are also more specialized ones, e.g., relating to a type of company (see topic #29 in Company) or species (see topic #1 in Animal).

Model Comparison
We compared against two baselines: the Transformer sequence-to-sequence model (TF-S2S) of Liu et al. (2018) and the convolutional sequence-to-sequence model (CV-S2S) of Gehring et al. (2017). CV-S2D is our variant with a single-sequence encoder and a structured decoder; +T denotes the variant with topic label prediction. TF-S2S has 6 layers, a hidden size of 256, and a feed-forward hidden size of 1,024 for all layers. All convolutional models use the same encoder and decoder convolutional blocks. The encoder block uses 4 layers, 256 hidden dimensions, and stride 3; the decoder uses the same configuration but 3 layers. All embedding sizes are set to 256. CV-S2D models are trained by first computing all sentence hidden states s_t and then decoding all sentences of the summary in parallel. See Appendix A for model training details.
At test time, we use a beam size of 5 for all models. The structured decoder explores 5 different hypotheses at each sentence step. Generation stops when the sentence decoder emits the End-Of-Document (EOD) token. The model trained to predict topic labels predicts an End-Of-Topic label instead; this prediction is used as a hard constraint by the document-level decoder, setting the probability of the EOD token to 1. We also use trigram blocking (Paulus et al., 2018) to control for sentence repetition, and discard consecutive sentence steps when they overlap on more than 80% of their tokens.
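The two decoding constraints can be sketched as:

```python
def trigrams(tokens):
    return {tuple(tokens[i:i + 3]) for i in range(len(tokens) - 2)}

def repeats_trigram(prefix_tokens, candidate_tokens):
    """Trigram blocking (Paulus et al., 2018): flag a hypothesis whose
    continuation would repeat a trigram already generated."""
    return bool(trigrams(prefix_tokens) & trigrams(candidate_tokens))

def near_duplicate(prev_sent, next_sent, threshold=0.8):
    """Discard a sentence step whose token set overlaps the previous
    sentence on more than `threshold` of its tokens."""
    if not next_sent:
        return False
    overlap = len(set(prev_sent) & set(next_sent)) / len(set(next_sent))
    return overlap > threshold
```

During beam search these checks act as filters: a hypothesis that trips either one is pruned before the next step.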

Results
Automatic Evaluation Our first evaluation is based on the standard ROUGE metric (Lin, 2004).
We also make use of two additional automatic metrics, based on unigram counts of content words, which aim to quantify how much the generated text and the reference overlap with respect to the input (Xu et al., 2016). We expect multi-document summaries to cover details (e.g., names and dates) from the input but also to abstract and rephrase its content. Table 4 summarizes our results on the test set. In all datasets the structured decoder brings a large improvement in ROUGE-1 (R1), with the variant using topic labels (+T) bringing gains of +2 points on average. With respect to ROUGE-2 and ROUGE-L (R2 and RL), the CV-S2D+T variant obtains the highest scores on Company and Film, while on Animal it falls slightly below the baselines. Table 4 also presents results with our additional metrics, which show that CV-S2D models have a higher overlap with the gold summaries on content words which do not appear in the input (Abstract, A). All models obtain similar scores with respect to content words appearing in both the input and the reference (Copy, C).
Human Evaluation We complemented the automatic evaluation with two human-based studies carried out on Amazon Mechanical Turk (AMT) over 45 randomly selected examples from the test set (15 from each domain). We compared the TF-S2S, CV-S2S and CV-S2D+T models.
The first study focused on assessing the extent to which generated summaries retain salient information from the input set of paragraphs. We followed a question-answering (QA) scheme as proposed in Clarke and Lapata (2010). Under this scheme, a set of questions is created based on the gold summary; participants are then asked to answer these questions by reading system summaries alone, without access to the input. The more questions a system can answer, the better it is at summarizing the input paragraphs as a whole (see Appendix A for example questions). Correct answers are given a score of 1, partially correct answers a score of 0.5, and incorrect answers zero. The final score is the average of all question scores. We created between two and four factoid questions for each summary; a total of 40 questions for each domain.
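The QA scoring scheme reduces to a simple aggregation:

```python
def qa_score(answer_scores):
    """Aggregate QA-based evaluation: 1 for a correct answer, 0.5 for a
    partially correct one, 0 otherwise; a system's score is the mean
    over all question scores."""
    assert all(s in (0, 0.5, 1) for s in answer_scores)
    return sum(answer_scores) / len(answer_scores)
```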
We collected 3 judgements per system-question pair. Table 5 shows the QA scores. Summaries produced by the CV-S2D+T model are able to answer more questions, even in the Animals domain where the TF-S2S model obtained higher ROUGE scores.
The second study assessed the overall content and linguistic quality of the summaries. We asked judges to rank (lower is better) system outputs according to Content (does the summary appropriately capture the content of the reference?), Fluency (is the summary fluent and grammatical?), and Succinctness (does the summary avoid repetition?). We collected 3 judgments for each of the 45 examples. Participants were presented with the gold summary and the output of the three systems in random order. Over all domains, the CV-S2D+T model is ranked better than the two single-sequence models TF-S2S and CV-S2S.

Conclusions
We introduced a novel structured decoder module for multi-document summarization. Our decoder is aware of which topics to mention in a sentence as well as of its position in the summary. Comparison of our model against competitive single-sequence decoders shows that structured decoding yields summaries with better content coverage.

A Appendix
A.1 Data WikiSum consists of Wikipedia articles, each of which is associated with a set of reference documents. We associate Wikipedia articles (i.e., entities) with a set of categories by querying the DBPedia knowledge-base. The WikiSum dataset originally provides a set of URLs corresponding to the source reference documents; we crawled these references using the tools provided by Liu et al. (2018). We used Stanford CoreNLP (Manning et al., 2014) to tokenize the lead section into sentences. We observed that the Animal dataset contains overall shorter sentences, but also sentences consisting of long enumerations, which is reflected in the higher variance in sentence length (see SentLen in Table 6). An example (lead) summary and its related paragraphs are shown in Table 7. The upper part shows the target summary and the bottom part the input set of paragraphs. EOP tokens separate the different paragraphs; EOT indicates the title of the Wikipedia article.
To discover sentence topic templates in summaries, we used the Gensim framework (Řehůřek and Sojka, 2010) and learned LDA models on the summaries of the train splits. We performed grid search on the number of topics [10, ..., 90] in steps of ten, and used the context-vector-based topic coherence metric (cf. Röder et al., 2015) as guidance to manually inspect the output topic sets and select the final topic model for each domain.
Abstract (A) computes unigram F-measure between the reference and generated text excluding tokens from the input. Higher values indicate the model's abstraction capabilities. Copy (C) computes unigram F-measure between the reference and generated text only on their intersection with the input. Higher values indicate better coverage of input details.
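Both metrics can be sketched as set-based unigram F-measures:

```python
def f1(ref, gen):
    """Unigram F1 between two token sets."""
    if not ref or not gen:
        return 0.0
    inter = len(ref & gen)
    if inter == 0:
        return 0.0
    p, r = inter / len(gen), inter / len(ref)
    return 2 * p * r / (p + r)

def abstract_copy_scores(reference, generated, source):
    """Abstract (A): unigram F1 between reference and generated text after
    excluding input tokens. Copy (C): unigram F1 restricted to tokens that
    also appear in the input. Tokens are treated as sets of unigrams here;
    restricting to content words is left to the caller."""
    ref, gen, src = set(reference), set(generated), set(source)
    a = f1(ref - src, gen - src)
    c = f1(ref & src, gen & src)
    return a, c
```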

Table 1 :
Summary (top) and input paragraphs (bottom) from the Animal development dataset (EOP/EOT is a special token indicating the end of paragraph/title).

Table 2 :
Topics discovered for different domains and examples of sentence annotations.

Table 5 :
QA-based evaluation and system ranking.