Generating Diverse Story Continuations with Controllable Semantics

We propose a simple and effective modeling framework for controlled generation of multiple, diverse outputs. We focus on the setting of generating the next sentence of a story given its context. As controllable dimensions, we consider several sentence attributes, including sentiment, length, predicates, frames, and automatically-induced clusters. Our empirical results demonstrate: (1) our framework is accurate in terms of generating outputs that match the target control values; (2) our model yields increased maximum metric scores compared to standard n-best list generation via beam search; (3) controlling generation with semantic frames leads to a stronger combination of diversity and quality than other control variables as measured by automatic metrics. We also conduct a human evaluation to assess the utility of providing multiple suggestions for creative writing, demonstrating promising results for the potential of controllable, diverse generation in a collaborative writing system.


Introduction
We consider the problem of automatic story continuation generation, i.e., how to generate story continuations conditioned on the story context. Inspired by recent work in controllable generation (Hu et al., 2017;Ficler and Goldberg, 2017), we propose a simple and effective modeling framework for controlled generation of multiple, diverse outputs based on interpretable control variables. Each control variable corresponds to an attribute of a sentence. Compared to previous work that only seeks to control the values of sentiment (Hu et al., 2017) and length (Kikuchi et al., 2016), we further explore neural text generation with particular verbal predicates, semantic frames, and automatically-induced clusters.
We compare the diversity of story continuations controlled by different sentence attributes and find that using frames yields a stronger combination of diversity and quality than other control variables as measured by automatic metrics. Unlike certain other attributes, frames have hundreds of possible values. Some frame values can lead to a natural continuation, while others are not applicable given the story context. We quantitatively evaluate both controllability and diversity. Our empirical results show that: (1) our framework is accurate in terms of generating outputs that match the target attribute values; (2) our framework increases maximum metric scores compared to n-best list generation with beam search; (3) controlling generation with frames leads to a stronger combination of diversity and quality than other control variables as measured by automatic metrics.

Table 1:
Context: sandra needed a new phone . her old one had broken . she walked into the store and bought a new one . she was very excited .
Control Attributes and Generated Continuations:
Sentiment (positive): sandra was happy to have a new phone .
Predicates (loved): sandra loved the new one . (gave): sandra 's mom gave sandra a refund .
Frames (Calendric unit): it was the perfect day of her life . (Cardinal numbers): it was a perfect one . (Activity ongoing): she kept it on the couch .
We also explore methods to rerank continuations to choose attribute values automatically and obtain a small number of high-quality outputs. We consider two frame ranking methods: the first reranks the generated continuations using a reverse scoring model and returns the k-best generations; the second first predicts the k most likely frames based on the context, and uses these frames to generate continuations. One potential use case of controllable, diverse story generation is collaborative writing applications (Clark et al., 2018b). We conduct a human evaluation to assess the utility of providing multiple suggestions from our models in this setting, demonstrating promising results for the potential of controllable generation for collaborative writing.

Task Description and Definitions
Given a story context and a control attribute value, our goal is to generate a story continuation that: (1) conforms to the given attribute value, (2) is relevant to the story context, and (3) is complementary to continuations with other control attribute values, thereby providing a diverse set of continuations when used with multiple attribute values.
We use x = x_1, x_2, ..., x_{|x|} to denote a story context and y = y_1, y_2, ..., y_{|y|} to denote a story continuation. The last token y_{|y|} is assumed to be eos. We develop a framework to model tuples (x, l, y), where l is a control attribute. The control attribute represents a characteristic of the continuation, such as its length, sentiment, automatically-induced cluster, verbal predicate, or set of semantic frames. Table 1 shows several examples of control attributes and generated continuations corresponding to them from our model.

Model
Our controllable generation framework is a variation of a sequence-to-sequence (seq2seq) model with attention (Sutskever et al., 2014; Bahdanau et al., 2015; Luong et al., 2015). To represent the control attribute values, we define an attribute embedding function R that maps a given attribute value l to a vector z: z = R(l). Here l can be a single discrete number or a set of discrete numbers, depending on which attributes are being used. The control variable z contains two parts: z = [z_enc; z_dec], where the semicolon (;) denotes vertical concatenation and z_enc and z_dec are additional inputs for the encoder and decoder respectively.
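As an illustration, a minimal sketch of the attribute embedding function R and the split of z into encoder and decoder parts might look as follows; the class name, dimensions, and the even split of z are our assumptions, and in the actual model the rows of R are learned jointly (or fixed to one-hot vectors):

```python
import numpy as np

class AttributeEmbedding:
    """Maps an attribute value l (a single id, or a set of ids such as a
    frame set) to z = R(l), split into z_enc and z_dec. Hypothetical
    sketch: real embeddings are trained with the seq2seq model."""

    def __init__(self, num_values, dim=64, seed=0):
        rng = np.random.default_rng(seed)
        # One row per attribute value.
        self.R = rng.normal(scale=0.1, size=(num_values, dim))

    def __call__(self, value_ids):
        # A set of ids is summed into a single control vector z.
        z = self.R[list(value_ids)].sum(axis=0)
        half = z.shape[-1] // 2
        return z[:half], z[half:]  # z_enc, z_dec
```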
Encoder. For our encoder, we use a standard bidirectional recurrent neural network (RNN):

s→_i = f_e1([v_i; z_enc], s→_{i-1})
s←_i = f_e2([v_i; z_enc], s←_{i+1})
s_i = [s→_i; s←_i]

where v_i is the vector representation of word x_i, s_i ∈ R^d is the hidden state at time i, and f_e1 and f_e2 are the forward and backward RNN functions.
Decoder. Our decoder uses an RNN with the general global attention scheme from Luong et al. (2015). An additional input z_dec is fed to the decoder at each time step to reflect the characteristics of the control variable:

h_j = f_d([y_{j-1}; z_dec], h_{j-1})

where h_j is the decoder RNN hidden vector at time step j and f_d is the decoder RNN function. The conditional probability with controllable generation can then be decomposed as follows:

p(y | x, l; Θ) = ∏_{j=1}^{|y|} p(y_j | y_{<j}, s, z; Θ)

Here s represents the hidden states of the source sequence and Θ are the parameters of the seq2seq attention model.
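To make the role of z_dec concrete, here is a toy decoder step in which the control vector is concatenated to the input at every time step. This is a plain-RNN sketch rather than the paper's LSTM-with-attention decoder, and the parameter names are our own:

```python
import numpy as np

def decoder_step(w_prev, h_prev, z_dec, params):
    """One decoder step: the previous word embedding is concatenated with
    z_dec, so every step is conditioned on the control variable."""
    W, U, b = params  # input-to-hidden, hidden-to-hidden weights, bias
    x = np.concatenate([w_prev, z_dec])
    return np.tanh(W @ x + U @ h_prev + b)
```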
Training. Our training objective is:

max_Θ Σ_{(x,l,y) ∈ D} log p(y | x, l; Θ)

where D is the training data, i.e., we assume attribute values l are provided during training. In practice, these are predicted automatically using a linguistic analyzer. With certain attributes, we do not update the attribute value embeddings R, i.e., we fix z_enc and z_dec to one-hot vectors.
Inference. We can specify the value of the control variable and generate specific outputs. By changing the variable values, we obtain multiple continuations for the same context. Beam search is used for decoding.

Control Attributes
In this section, we describe the control attributes we explored using our framework. Table 2 shows examples of generated continuations for a single story context with several values for the control attributes described below. Given our simple modeling framework, it would be natural to experiment with combining control attributes via summation or averaging of the attribute representations, but we leave an investigation of this to future work, focusing here on using one attribute at a time.
Table 2:
Context: i bought a pair of earbuds at target . i spent ten dollars . someone told me they were cheaper at the dollar store . they were only a dollar .
Gold Continuation: i wish i had known that before .
Sentiment (negative): i was disappointed . (neutral): i decided to keep them . (positive): i was able to get a new pair .
Length (4): i was shocked . (5): i was rather disappointed . (6): i bought a new pair . (7): i was able to buy them . (8): i was glad i bought them . (9): i was able to buy a new pair . (10): i was able to buy a new pair . (11): i was able to buy a new pair of shoes . (12): i was able to buy a new pair of new ones . (13): i was able to buy a new pair of rubber flavored items . (14): i was able to buy a new pair and i was very happy .
Verbal Predicates (wish, known): i wish i will always recognize them .

Sentiment. Stories may express sentiment regarding their characters or circumstances. We acquire sentiment labels by running the pretrained analyzer from Socher et al. (2013) on the continuations in the training data. The analyzer produces three labels: "negative", "neutral", or "positive". During training, z_enc and z_dec are fixed one-hot vectors for each value.
Length. Some prior work has generated summaries with a desired length (Kikuchi et al., 2016;Fan et al., 2018a). We similarly use length of the continuation as a control attribute. Instead of using an embedding for each integer length value, we group the lengths into a small number of bins (details are provided below). z enc and z dec are fixed one-hot vectors for each bin.
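For instance, mapping a target length to a bin id could be sketched as follows, using the 3-bin ranges [1, 7], [8, 13], [14, ∞) and the one-bin-per-length scheme described later; the function name is our own:

```python
def length_bin(length, scheme="3bins"):
    """Map a continuation length (in tokens) to its bin id."""
    if scheme == "3bins":
        if length <= 7:
            return 0
        if length <= 13:
            return 1
        return 2
    # "30bins": one bin per length; no sentence is longer than 30 tokens.
    return min(length, 30) - 1
```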
Verbal Predicates. Semantic role labeling (SRL) is a form of shallow semantic parsing that annotates predicates and their arguments in sentences. We consider predicates from a semantic role labeling as control attributes. We use the SRL system from AllenNLP (Gardner et al., 2018) to automatically obtain predicates for the continuations in our training set. A predicate vector is then obtained by first summing the 100-dimensional GloVe embeddings (Pennington et al., 2014) of the predicted predicates (if there is more than one), then reducing the dimension to 64 using principal component analysis. We wish to clarify that we do not use the argument structure from the SRL system. We restrict our focus to the set of verbal predicates in the SRL structure; this would presumably be simpler to use in interactive settings where users specify attribute values for generating continuations.
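The predicate-vector construction (sum the GloVe embeddings of a sentence's predicates, then project all summed vectors to a lower dimension with PCA) can be sketched with an SVD-based PCA; the toy `glove` lookup and the function name are our assumptions:

```python
import numpy as np

def predicate_vectors(pred_sets, glove, out_dim=64):
    """For each sentence's set of predicates, sum their GloVe embeddings,
    then reduce all summed vectors to out_dim with PCA (via SVD)."""
    summed = np.stack([
        np.sum([glove[p] for p in preds], axis=0) for preds in pred_sets
    ])
    centered = summed - summed.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:out_dim].T  # project onto top principal axes
```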
Frame Semantics. A story is composed of a sequence of meaningful events (Chatman, 1980), often following particular patterns described in various terms such as scripts (Schank and Abelson, 1977) and frames. FrameNet (Baker et al., 1998) is an inventory of semantic frames, which are semantic abstractions describing universal categories of events, concepts, and relationships. We consider frame semantics as another control attribute in our framework. In order to get a frame semantic representation for a continuation, we use SEMAFOR (Das et al., 2014). SEMAFOR automatically produces a frame-semantic parse for a sentence, which consists of spans that evoke particular frames in FrameNet as well as annotations of textual spans that correspond to frame-specific arguments. For our purposes, we drop the arguments and only use the set containing all frames that are evoked in the sentence. A sentence may contain multiple frames. For example, in the sentence "Roa's advice made Emma a lot happier in her life!", "a lot" evokes the Quantity frame while "Emma a lot happier" evokes the Effect frame.
The frame set variable z is computed by summing embeddings for the frames in the set:

z = Σ_{j ∈ l} R_j

where l is the frame set and R_j is the representation of frame j. The frame embeddings are learned during training. For modeling purposes, we restrict our attention to the 100 most frequent frames in the training data. The remaining frames are pooled together to form a single additional "catch-all" frame.
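Restricting to the 100 most frequent frames plus a catch-all id could be implemented roughly as below; the helper names are ours:

```python
from collections import Counter

def build_frame_vocab(train_frame_sets, top_k=100):
    """Map the top_k most frequent frames to ids 0..top_k-1; all other
    frames share a single catch-all id equal to top_k."""
    counts = Counter(f for fs in train_frame_sets for f in fs)
    vocab = {f: i for i, (f, _) in enumerate(counts.most_common(top_k))}

    def frame_ids(frame_set):
        return sorted({vocab.get(f, top_k) for f in frame_set})

    return frame_ids
```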
Automatically-Induced Clusters. We also experiment with running k-means clustering on the bag-of-words sentence representations of the continuations in the training set. We treat these automatically-induced cluster labels as control attribute values. Below we describe experiments with different cluster labels and analyze the characteristics of the generated outputs.
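A minimal NumPy version of the clustering step, a plain k-means over bag-of-words vectors, is sketched below; the paper's exact preprocessing and choice of k may differ:

```python
import numpy as np

def bow_kmeans(bow_matrix, k=5, iters=20, seed=0):
    """Cluster bag-of-words sentence vectors with vanilla k-means.
    Returns a cluster label per sentence and the cluster centers;
    the labels serve as control attribute values."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(bow_matrix), size=k, replace=False)
    centers = bow_matrix[idx].astype(float)
    for _ in range(iters):
        dists = ((bow_matrix[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = bow_matrix[labels == j].mean(axis=0)
    return labels, centers
```

At evaluation time, the same nearest-center rule can be used to check whether a generated continuation falls into the requested cluster.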
Oracle Bag-of-Words Sentence Representations. We also consider the use of a bag-of-words (BOW) sentence representation as a control attribute. Naturally, the sentence representation of the continuation is not available before generating the continuation in practice. However, we can use this attribute to verify the capability of our model to reconstruct the continuation from its bag-of-words representation.

Datasets
We experiment with the publicly available ROC story corpus developed by Mostafazadeh et al. (2016). It consists of approximately 100K five-sentence stories of everyday events. We sample 2000 stories as a development set and 2000 as our test set. The remaining stories form our training set. Our goal is to generate the fifth sentence (the "continuation") given the previous four sentences. We use the 10k most frequent words in the training set as our vocabulary. A special token unk is introduced for unknown words.

Evaluation
Previous work evaluates generation tasks with automatic metrics, such as perplexity (PPL), BLEU (Papineni et al., 2002), and ROUGE (Lin, 2004). We adopt these in our evaluation and add three more metrics using the pretrained story scorer from Sagarkar et al. (2018). The scorer rates a generated continuation given its context along three dimensions: relevance (R), interestingness (I), and overall quality (O). The story scorer does not use a gold standard continuation.
In addition, to evaluate the diversity of the generation, we use Max-BLEU and Max-ROUGE. First, we compute BLEU and ROUGE scores over a set of outputs (y_1, y_2, ..., y_n) with different attribute values given the same story context, then we compute the max scores:

Max-BLEU = max_i BLEU(y_i, r),  Max-ROUGE = max_i ROUGE(y_i, r)

where r is the gold standard continuation.
We also use Self-BLEU (Zhu et al., 2018) to evaluate the diversity of a set of outputs. It is calculated by averaging the BLEU scores computed between all pairs of generated continuations for a given context, then averaging this quantity over all contexts. The smaller the Self-BLEU score is, the more diverse are the generated outputs.
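The two set-level aggregations can be written compactly; `bleu` and `metric` below stand in for any sentence-level scorer (e.g. a sentence-level BLEU), which we pass in as a parameter rather than assume a specific API:

```python
def self_bleu(continuations, bleu):
    """Average pairwise BLEU among a set of outputs for one context;
    lower values indicate a more diverse set."""
    n = len(continuations)
    scores = [bleu(continuations[i], continuations[j])
              for i in range(n) for j in range(n) if i != j]
    return sum(scores) / len(scores)

def max_metric(continuations, gold, metric):
    """Max-BLEU / Max-ROUGE: best score in the set against the gold."""
    return max(metric(y, gold) for y in continuations)
```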

Training Details
Our seq2seq model has a 2-layer biLSTM (Hochreiter and Schmidhuber, 1997) encoder and a 1-layer LSTM decoder. The hidden dimension of all layers is 512. The word embedding dimension is also 512. For optimization, we use Adam (Kingma and Ba, 2014) with learning rate 0.001. We use early stopping based on perplexity on the development set.

Table 4: Frequency (%) of the generated continuations in the range of dif = |l − l_p|, where l is the continuation length and l_p is the desired length.

Results
We now present our experimental results. Section 6.1 includes results related to how well our generated output matches the desired attribute values. Section 6.2 presents results when generating continuations with oracle attribute values. In Section 6.3 we use our set-level metrics to evaluate sets of outputs with various attribute values. In Section 6.4 we report results when attempting to automatically infer attribute values to generate a small set of high-quality outputs.

Controllability Evaluation
In this section, we evaluate the controllability accuracy of our framework by automatically measuring the match between the attribute values of the generated continuations and the desired values. For certain control variables, like sentiment and frames, this automatic evaluation is prone to errors in the associated analyzers. That is, the metrics that rely on automatic analyzers could become artificially high if our generation models learn to produce outputs that match the biases of the analyzers. We could instead consider manual evaluation of control accuracy. However, we were more interested in devoting our manual evaluation to the question of whether users would find the system outputs useful for a particular goal.
Sentiment. We generate three continuations for each story in the development set, one for each sentiment label. Using the same sentiment analyzer from Socher et al. (2013) as above, we obtain predicted sentiment labels for the continuations. Table 3 shows the sentiment distribution for each label. We see that the vast majority of the time, the continuations match the desired values. Matching positive sentiment is easiest for our model, followed by neutral.
Length. We quantize the generation lengths into bins, each representing a size range. We consider two settings:
• 3 bins: three bins with the length ranges [1, 7], [8, 13], and [14, ∞).
• 30 bins: one bin for each length. No sentence is longer than 30.
During training, we do not update the representations of the length control variable. After training, we treat the length of the continuation in the development set as the target control variable and generate continuations for each length. The results are shown in Table 4 and demonstrate that our model can generate continuations with the desired length with only small differences.
Verbal Predicates. We select the top 100 most frequent verbal predicates in the training data. Then for all the stories in the development set, we generate a continuation for each of the 100 predicates. We check whether the predicate appears in the generated continuations. As the results in Table 5 show, the framework can nearly always generate outputs with the desired predicates.
Frame Semantics. In order to check how frequently the generated output matches the desired frames, we generate continuations for the top 100 frames (one frame per continuation) for all stories in the development set. We check whether the frame appears in the specific continuation using SEMAFOR. The results are shown in Table 6. Most frames have very high match accuracies, but there are a few frames with much lower accuracy, such as "Food" and "Observable body parts". These are more concrete frames that may be difficult to incorporate reasonably into certain story contexts.
Automatically-Induced Clusters. Given the cluster, the model generates a continuation. Then, we represent the continuation as a bag-of-words sentence embedding (using the same method as when performing the initial clustering) and find the cluster with the nearest cluster embedding. Then we check whether the two clusters match. In analyzing the clusters, we observed that cluster 0 corresponds to simple but reasonable continuations, cluster 2 corresponds to continuations with positive sentiment, and cluster 4 contains continuations with more actions. Some of the generated outputs are shown in Table 2. From the results in Table 7, we still see controllability for most clusters; however, for target cluster 3, which is rather generic based on our observations, the generated output seems flat.

Evaluation with Oracle Attributes
Table 8 shows automatic metric scores with oracle attribute values, i.e., using the attribute values of the gold standard continuations. Unsurprisingly, compared with the seq2seq baseline, the perplexity decreases and the ROUGE and BLEU scores increase with oracle attributes. We also find that the scores from the story scorer, which does not use the gold standard while scoring, show improvements over the baseline. We note that frame semantics form one of the most useful control attributes, aside from those that use words directly.

The oracle BOW representation of the gold standard continuation yields the lowest perplexity and highest ROUGE and BLEU scores. It is not surprising that this attribute is useful according to metrics that favor matching the gold standard. However, these results do show that our simple modeling framework can make use of the information in the control variable with a high degree of effectiveness. In addition, while the scores from the story scorer are generally higher than for other control attributes, they are roughly on par with those when using predicates and frames.

Evaluating Sets of Continuations
We now evaluate sets of continuations using our set-level metrics. Standard methods to generate sets of outputs include beam search (BS) and temperature-based sampling (TS), which we use as baselines. TS with temperature τ transforms the probabilities p_i as follows: p_i ∝ p_i^{1/τ}. A high temperature τ leads to high variety in generated samples, but also more noise, while lower temperatures lead to samples with less noise but also less diversity.
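Concretely, the temperature transform renormalizes a next-token distribution as follows:

```python
import numpy as np

def apply_temperature(probs, tau):
    """Transform p_i to be proportional to p_i ** (1 / tau).
    tau < 1 sharpens the distribution; tau > 1 flattens it."""
    p = np.asarray(probs, dtype=float) ** (1.0 / tau)
    return p / p.sum()
```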
For each attribute, we generate continuations for each of its values, and compare to BS and TS systems with the same number of outputs. For example, for sentiment, we generate continuations for each of the 3 sentiment values and compare to BS and TS with 3 continuations.
Results are shown in Table 9. BS shows the least diversity (as evidenced by its high self-BLEU scores). However, it generally yields high average ROUGE and BLEU scores. TS does very well in terms of diversity, and this diversity enables it to produce higher max scores than BS, but it has lower averages when using small numbers of continuations (3 or 5).
Our sentiment- and cluster-controlled systems outperform TS in max metric scores and BS in diversity (self-BLEU). They also have the highest average BLEU scores, though the differences are small. With 30 continuations, TS with τ = 0.5 performs best across all metrics; this number of continuations appears to be well-suited for temperature 0.5. As we move to 100 continuations, we find that using our frame control variable leads to better diversity than TS, suggesting that the move to 100 samples has introduced some amount of repetition. By contrast, the 100 distinct frames and frame sets yield better diversity.

Table 9: Metric scores to evaluate the potential of a list of continuations. We report the maximum and average metric scores over the continuations in each list to evaluate the quality of the lists, and self-BLEU to evaluate diversity. Best results for each metric and each number of outputs are in bold.

Automatically Choosing Attribute Values
Using our framework, we can generate continuations with any attribute values. However, if we are interested in generating a single continuation, we do not know the ideal attribute values to use. So, we propose two methods to automatically select a small set of values for the frame attribute.
Frames + Reranking: Following prior work, we rerank the outputs from the 100 most frequent frame sets by linearly combining the forward score p(y | x) and the weighted reverse score λp(x | y), where the latter comes from a separately-trained seq2seq model. The forward score p(y | x) is divided by the length of y in order not to favor shorter outputs.
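The reranking score can be sketched as below; we represent each candidate as a (tokens, forward log-probability, reverse log-probability) tuple, which is our assumption about the interface rather than the paper's exact implementation:

```python
def rerank(candidates, lam=1.0, k=3):
    """Sort candidates by length-normalized forward score plus the
    weighted reverse score: log p(y|x) / |y| + lam * log p(x|y)."""
    def score(cand):
        tokens, fwd_logprob, rev_logprob = cand
        return fwd_logprob / len(tokens) + lam * rev_logprob
    return sorted(candidates, key=score, reverse=True)[:k]
```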
Predicted Frames: We also build a model to automatically predict the frames in the continuation. Given the frames in a sentence x, we compute a binary frame vector f x where entry j is 1 if frame j appears in x. We train a model that predicts the frame vector of the continuation given the frame vectors of the previous 4 sentences. The model is an LSTM followed by averaging of hidden states. Mean squared error is minimized during training. After training, the k continuations are selected based on the k frames with the highest predicted score under this frame prediction model.
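The binary frame vectors and the final top-k selection can be sketched as follows (the LSTM predictor itself is omitted; the helper names are ours):

```python
import numpy as np

def frame_vector(frame_ids, num_frames=101):
    """Binary indicator vector f_x: entry j is 1 iff frame j occurs."""
    v = np.zeros(num_frames)
    v[list(frame_ids)] = 1.0
    return v

def top_k_frames(predicted_scores, k=3):
    """Select the k frames with the highest predicted scores; each is
    then used as a control value to generate one continuation."""
    return [int(i) for i in np.argsort(predicted_scores)[::-1][:k]]
```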
We use these two methods to produce 3 continuations for each story and report results in Table 9. They both achieve a similar balance of quality and diversity as TS with τ = 0.6, with reranking leading to greater diversity than frame prediction and the latter showing higher ROUGE/BLEU scores.

Human Evaluation
Our previous results demonstrate that our frame control system has strong controllability and diversity in generation. In this section, we conduct a human evaluation to assess the utility of providing multiple suggestions from our models in a creative writing setting. We consider four different systems: BS with beam size 3; TS with 3 continuations using τ = 0.6, which we found to produce outputs with more diversity than 0.5; reranking the 100 most frequent frame sets and using the top 3; and using continuations from the top-3 predicted frames under our frame prediction model. To assess which set of generations from these four systems is most helpful in a collaborative writing setting, we collect annotations using Amazon Mechanical Turk. We randomly select 100 stories. For each story, we generate three outputs as a set of suggestions for each system, so there are 600 comparison pairs in total. We show workers two sets of outputs from different systems and ask them to select which set of suggestions is more helpful for writing the next line in the story. We also provide a choice of "neither one is helpful at all". We ask them explicitly to imagine they are writing the next line of the given story (see the appendix for more details). Table 10 shows the results. We observe that workers prefer the BS baseline over TS, although TS yields higher diversity. This could be because the continuations from BS are shorter, simpler, and more fluent. In addition, we observe that workers prefer the outputs from the reranking system over BS more often than not. Although the predicted frame system yields more diverse outputs, workers still prefer BS, likely due to the difficulty of predicting frames. The reranking and predicted frame systems are both preferred to TS, though the gap is smaller for the predicted-frame system.
We also see that generating helpful suggestions is a difficult task: in many cases workers thought neither system was helpful, especially when given the outputs from BS/TS or TS/predicted.
One may ask why workers do not show a stronger preference for the more diverse sets of outputs. From our own preliminary annotations, we believe this is because diverse outputs tend to be longer and harder to understand, and also because greater diversity increases the chance of producing disfluent or nonsensical outputs. The BS outputs, by comparison, are coherent and mostly on-topic. Even if the suggestions are not creative, they may still help a worker think of a new direction for the story to take. Nonsensical or disfluent suggestions, however, are rarely helpful.
Most closely related to our work, Clark et al. (2018b) explore a creative writing setting with a machine in the loop, albeit with mixed results in terms of the quality of system suggestions. Predicting and controlling with frame values suggests a new way of interacting with collaborative writing systems, as long as frames can be communicated to users in ways they can easily understand. Recently, Clark et al. (2018a) proposed a neural text generation method that explicitly represents and tracks entities. In addition, event sequences (Chaturvedi et al., 2017) are important elements in narrative texts but under-explored for story generation.
These and related characteristics of creative writing could be incorporated into our framework as control attributes in future work.
The broader neural text generation community has also recently been interested in controllable text generation, i.e., generating text with specified characteristics reflected by control variables. In some previous work, the variables to be controlled are embedded into vectors which are then fed into models to reflect the characteristics of the variables. Kikuchi et al. (2016) and Fan et al. (2018a) developed methods for controllable summarization, for example permitting users to control the length of the generated summary. Related work has controlled style, topic, and sentiment polarity (Hu et al., 2017;Wang et al., 2017;Shen et al., 2017;Yang et al., 2018).
Despite the widespread usage of beam search for neural text generation, it has long been observed that its outputs lack diversity. Several efforts have been made to provide diverse outputs for generation tasks, such as dialogue and machine translation (Devlin and Matsoukas, 2012; Gimpel et al., 2013; Li and Jurafsky, 2016). Diverse beam search (Vijayakumar et al., 2018) produces a list of outputs with a diversity-augmented objective. Ippolito et al. (2019) compare several methods for producing a set of diverse outputs from conditional language models. We leave a careful comparison to such algorithms to future work.

Conclusion and Future Work
We proposed a controllable framework that generates the next sentence of a story given its context. We experimented with a broad range of control attributes and demonstrated that our framework can accurately generate outputs that match the target values. Sets of outputs from our method show high diversity and high oracle metric scores. The human evaluation shows that the multiple suggestions from our model hold promise for integration in a collaborative writing system. Future work could explore other control attributes as well as a compositional framework to control multiple attributes jointly.

(Appendix caption) Here l is the story continuation length and l_p is the target length as the control variable; "==" refers to l = l_p, "<= i" refers to |l − l_p| <= i.

Figure 4: The form for human evaluation of the generation systems for their potential in a collaborative writing setting.