Language as a Latent Variable: Discrete Generative Models for Sentence Compression

In this work we explore deep generative models of text in which the latent representation of a document is itself drawn from a discrete language model distribution. We formulate a variational auto-encoder for inference in this model and apply it to the task of compressing sentences. In this application the generative model first draws a latent summary sentence from a background language model, and then subsequently draws the observed sentence conditioned on this latent summary. In our empirical evaluation we show that generative formulations of both abstractive and extractive compression yield state-of-the-art results when trained on a large amount of supervised data. Further, we explore semi-supervised compression scenarios where we show that it is possible to achieve performance competitive with previously proposed supervised models while training on a fraction of the supervised data.


Introduction
The recurrent sequence-to-sequence paradigm for natural language generation (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014) has achieved remarkable recent success and is now the approach of choice for applications such as machine translation (Bahdanau et al., 2015), caption generation (Xu et al., 2015) and speech recognition (Chorowski et al., 2015). While these models have developed sophisticated conditioning mechanisms, e.g. attention, fundamentally they are discriminative models trained only to approximate the conditional output distribution of strings. In this paper we explore modelling the joint distribution of string pairs using a deep generative model and employing a discrete variational auto-encoder (VAE) for inference (Kingma and Welling, 2014; Rezende et al., 2014; Mnih and Gregor, 2014). We evaluate our generative approach on the task of sentence compression. This approach provides both alternative supervised objective functions and the opportunity to perform semi-supervised learning by exploiting the VAE's ability to marginalise the latent compressed text for unlabelled data.

Auto-encoders (Rumelhart et al., 1985) are a typical neural network architecture for learning compact data representations, with the general aim of performing dimensionality reduction on embeddings (Hinton and Salakhutdinov, 2006). In this paper, rather than seeking to embed inputs as points in a vector space, we describe them with explicit natural language sentences. This approach is a natural fit for summarisation tasks such as sentence compression. Following this intuition, we propose a generative auto-encoding sentence compression (ASC) model, in which a latent language model provides the variable-length compact summary. The objective is to perform Bayesian inference for the posterior distribution of summaries conditioned on the observed utterances. Hence, in the framework of a VAE, we construct an inference network as the variational approximation of the posterior, which generates compression samples to optimise the variational lower bound.
The most common family of variational auto-encoders relies on the reparameterisation trick, which is not applicable to our discrete latent language model. Instead, we employ the REINFORCE algorithm (Mnih and Gregor, 2014) to mitigate the problem of high variance during sampling-based variational inference. Nevertheless, when an RNN encoder-decoder is applied directly to model the variational distribution, it is very difficult to generate reasonable compression samples in the early stages of training, since at each step of the sequence there are |V| possible words to sample from. To combat this we employ pointer networks (Vinyals et al., 2015) to construct the variational distribution. This biases the latent space towards sequences composed only of words appearing in the source sentence (i.e. the softmax output at each step ranges over the words of the current source sentence), which amounts to applying an extractive compression model for the variational approximation.
In order to further boost performance on sentence compression, we employ a supervised forced-attention sentence compression (FSC) model trained on labelled data to teach the ASC model to generate compression sentences. The FSC model shares the pointer network of the ASC model and combines it with a softmax output layer over the whole vocabulary. Therefore, while training on sentence-compression pairs, it is able to balance copying a word from the source sentence against generating it from the background distribution. More importantly, by jointly training on the labelled and unlabelled datasets, this shared pointer network enables the model to work in a semi-supervised scenario. In this case, the FSC teaches the ASC to generate reasonable samples, while the pointer network trained on a large unlabelled dataset helps the FSC model to perform better abstractive summarisation.
In Section 6, we evaluate the proposed model by jointly training the generative (ASC) and discriminative (FSC) models on the standard Gigaword sentence compression task with varying amounts of labelled and unlabelled data. The results demonstrate that by introducing a latent language variable we are able to match the previous benchmarks with only a small amount of the supervised data. When we employ our mixed discriminative and generative objective with all of the supervised data, the model significantly outperforms all previously published results.

Auto-Encoding Sentence Compression
In this section, we introduce the auto-encoding sentence compression model (Figure 1) in the framework of variational auto-encoders. The ASC model consists of four recurrent neural networks: an encoder, a compressor, a decoder and a language model. Let $s$ be the source sentence and $c$ the compression sentence. The compression model (encoder-compressor) is the inference network $q_\phi(c|s)$ that takes source sentences $s$ as input and generates extractive compressions $c$. The reconstruction model (compressor-decoder) is the generative network $p_\theta(s|c)$ that reconstructs source sentences $s$ based on the latent compressions $c$. Hence, the forward pass starts at the encoder, runs through the compressor and ends at the decoder. As the prior distribution, a language model $p(c)$ is pre-trained to regularise the latent compressions so that the samples drawn from the compression model are likely to be reasonable natural language sentences.
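To make the data flow concrete, the following is a minimal PyTorch-style sketch of the ASC pipeline: encode $s$, sample an extractive compression $c$ from $q_\phi(c|s)$ with a pointer-style compressor, and score the reconstruction $\log p_\theta(s|c)$. All class, method and dimension choices here are illustrative assumptions rather than the authors' implementation; in particular, for brevity the same LSTM re-encodes the compression and decodes the source, and the decoder's soft attention is omitted.

```python
import torch
import torch.nn as nn

class ASCSketch(nn.Module):
    """Illustrative sketch of the ASC data flow (not the authors' code)."""

    def __init__(self, vocab_size, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.compressor = nn.LSTMCell(dim, 2 * dim)   # pointer-style decoder over source positions
        self.attn = nn.Linear(2 * dim, 2 * dim)
        self.decoder = nn.LSTM(dim, 2 * dim, batch_first=True)
        self.out = nn.Linear(2 * dim, vocab_size)     # reconstruction softmax over the full vocabulary

    def compress(self, src, steps):
        """Sample an extractive compression: each step points at a source position."""
        enc, _ = self.encoder(self.embed(src))                        # (B, |s|, 2*dim)
        h, c = enc[:, -1], torch.zeros_like(enc[:, -1])               # init from the source vector
        inp = self.embed(src[:, 0])                                   # start symbol (simplified)
        picks, logqs = [], []
        for _ in range(steps):
            h, c = self.compressor(inp, (h, c))
            scores = torch.einsum('bd,bsd->bs', self.attn(h), enc)    # attend to source words only
            dist = torch.distributions.Categorical(logits=scores)
            idx = dist.sample()                                       # pointer: index into the source
            word = src.gather(1, idx.unsqueeze(1))
            picks.append(word)
            logqs.append(dist.log_prob(idx))
            inp = self.embed(word.squeeze(1))
        return torch.cat(picks, 1), torch.stack(logqs, 1).sum(1)      # c, log q_phi(c|s)

    def reconstruct(self, compression, src):
        """Return log p_theta(s|c): decode the source conditioned on the compression."""
        _, state = self.decoder(self.embed(compression))              # re-encode the compression
        dec, _ = self.decoder(self.embed(src[:, :-1]), state)         # teacher-forced decoding of s
        logp = torch.log_softmax(self.out(dec), -1)
        return logp.gather(2, src[:, 1:].unsqueeze(2)).squeeze(2).sum(1)
```

A training step would sample $c$ with `compress`, evaluate `reconstruct`, score $c$ under the pre-trained language-model prior, and combine these terms in the lower bound described in the Inference section below.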

Compression
For the compression model (encoder-compressor), q φ (c|s), we employ a pointer network consisting of a bidirectional LSTM encoder that processes the source sentences, and an LSTM compressor that generates compressed sentences by attending to the encoded source words.
Let $s_i$ be the words in the source sentence and $h^e_i$ the corresponding state outputs of the encoder, where each $h^e_i$ is the concatenation of the hidden states from the forward and backward directions. Further, let $c_j$ be the words in the compressed sentence and $h^c_j$ the state outputs of the compressor. We construct the predictive distribution $q_\phi(c_j|c_{1:j-1}, s)$ by attending to the words in the source sentence, where $c_0$ is the start symbol for each compressed sentence and $h^c_0$ is initialised by the source sentence vector $h^e_{|s|}$. In this case, all the words $c_j$ sampled from $q_\phi(c_j|c_{1:j-1}, s)$ form a subset of the words appearing in the source sentence (i.e. $c_j \in s$); the sketch below gives one standard parametrisation of this pointer-style attention.
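The original equations are not reproduced here; the following is a hedged reconstruction assuming a standard additive (Bahdanau-style) pointer-network parametrisation, where $W_1$, $W_2$, $v$ and the exact indexing of the compressor state are assumptions:

\[
u_j(i) = v^{\top}\tanh\!\big(W_1 h^e_i + W_2 h^c_{j-1}\big), \qquad
\alpha_j(i) = \frac{\exp\big(u_j(i)\big)}{\sum_{i'=1}^{|s|}\exp\big(u_j(i')\big)}, \qquad
q_\phi(c_j = s_i \mid c_{1:j-1}, s) = \alpha_j(i),
\]

so that the softmax at each step ranges only over the $|s|$ positions of the source sentence rather than the full vocabulary.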

Reconstruction
For the reconstruction model (compressor-decoder) $p_\theta(s|c)$, we apply a soft-attention sequence-to-sequence model to generate the source sentence $s$ based on the compression samples $c \sim q_\phi(c|s)$.
Let $s_k$ be the words in the reconstructed sentence and $h^d_k$ the corresponding state outputs of the decoder. In this model, we directly reuse the recurrent cell of the compressor to encode the compression samples, where the state outputs $\hat{h}^c_j$ corresponding to the word inputs $c_j$ differ from the outputs $h^c_j$ in the compression model, since we block the information from the source sentences. We also introduce a start symbol $s_0$ for the reconstructed sentence, and $h^d_0$ is initialised by the last state output $\hat{h}^c_{|c|}$. A soft attention over the encoded compression is then computed at each decoding step, and the predictive probability distribution over reconstructed words is constructed with a softmax over the full vocabulary; a hedged reconstruction of the standard form is given below.
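The exact equations are not reproduced here; the following is a hedged sketch assuming the standard soft-attention formulation, with $W_3$, $W_4$, $v'$ and $W_o$ as assumed trainable parameters and $d_k$ as the context vector:

\[
u'_k(j) = v'^{\top}\tanh\!\big(W_3 \hat{h}^c_j + W_4 h^d_k\big), \qquad
\gamma_k(j) = \frac{\exp\big(u'_k(j)\big)}{\sum_{j'}\exp\big(u'_k(j')\big)}, \qquad
d_k = \sum_{j=1}^{|c|}\gamma_k(j)\,\hat{h}^c_j,
\]
\[
p_\theta(s_k \mid s_{1:k-1}, c) = \mathrm{softmax}\big(W_o\,[h^d_k; d_k]\big).
\]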

Inference
In the ASC model there are two sets of parameters, $\phi$ and $\theta$, that need to be updated during inference. Due to the non-differentiability of the model, the reparameterisation trick of the VAE is not applicable in this case. Thus, we use the REINFORCE algorithm (Mnih and Gregor, 2014) to reduce the variance of the gradient estimator. The variational lower bound of the ASC model is
\[
\mathcal{L} \;=\; \mathbb{E}_{q_\phi(c|s)}\big[\log p_\theta(s|c)\big] \;-\; D_{\mathrm{KL}}\big(q_\phi(c|s)\,\|\,p(c)\big) \;\le\; \log p(s).
\]
Therefore, by optimising this lower bound, the model balances the selection of keywords for the summaries and the efficacy of the composed compressions, corresponding to the reconstruction error and the KL divergence respectively.

In practice, the pre-trained language model prior $p(c)$ prefers short sentences for compressions. As one of the drawbacks of VAEs, the KL divergence term in the lower bound pushes every sample drawn from the variational distribution towards the prior, thus acting to regularise the posterior but also to restrict the learning of the encoder. If the estimator keeps sampling short compressions during inference, the LSTM decoder will gradually rely on the context of the already decoded words instead of the information provided by the compression, which does not yield the best performance on sentence compression.
Here, we introduce a coefficient $\lambda$ to scale the learning signal contributed by the KL divergence:
\[
\mathcal{L}_\lambda \;=\; \mathbb{E}_{q_\phi(c|s)}\big[\log p_\theta(s|c)\big] \;-\; \lambda\, D_{\mathrm{KL}}\big(q_\phi(c|s)\,\|\,p(c)\big).
\]
Although we are then no longer optimising the exact variational lower bound, the ultimate goal of learning an effective compression model depends mostly on the reconstruction error. In Section 6, we empirically apply $\lambda = 0.1$ for all the experiments on the ASC model. Interestingly, $\lambda$ controls the compression rate of the sentences, which could be a useful point to explore in future work.
During inference, we have different strategies for updating the parameters $\phi$ and $\theta$. For the parameters $\theta$ of the reconstruction model, we directly update them by the gradients
\[
\frac{\partial \mathcal{L}_\lambda}{\partial \theta} \;\approx\; \frac{1}{M}\sum_{m=1}^{M} \frac{\partial \log p_\theta\big(s\,|\,c^{(m)}\big)}{\partial \theta},
\]
where we draw $M$ samples $c^{(m)} \sim q_\phi(c|s)$ independently to compute the stochastic gradients. For the parameters $\phi$ of the compression model, we first define the learning signal
\[
l(s, c) \;=\; \log p_\theta(s|c) \;-\; \lambda\big(\log q_\phi(c|s) - \log p(c)\big).
\]
Then, we update the parameters $\phi$ by
\[
\frac{\partial \mathcal{L}_\lambda}{\partial \phi} \;\approx\; \frac{1}{M}\sum_{m=1}^{M} l\big(s, c^{(m)}\big)\, \frac{\partial \log q_\phi\big(c^{(m)}|s\big)}{\partial \phi}.
\]
However, this gradient estimator has high variance because the learning signal $l(s, c^{(m)})$ depends on the samples drawn from $q_\phi(c|s)$. Therefore, following the REINFORCE algorithm, we introduce two baselines, $b$ and $b(s)$, the centred learning signal and the input-dependent baseline respectively, to help reduce the variance.
Here, we build an MLP to implement the input-dependent baseline $b(s)$. During training, we learn the two baselines by minimising the expectation
\[
\mathbb{E}_{q_\phi(c|s)}\Big[\big(l(s, c) - b - b(s)\big)^{2}\Big].
\]
Hence, the gradients w.r.t. $\phi$ are derived as
\[
\frac{\partial \mathcal{L}_\lambda}{\partial \phi} \;\approx\; \frac{1}{M}\sum_{m=1}^{M} \big(l(s, c^{(m)}) - b - b(s)\big)\, \frac{\partial \log q_\phi\big(c^{(m)}|s\big)}{\partial \phi},
\]
which is essentially a likelihood-ratio estimator.
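A minimal sketch of this update in PyTorch-style code, assuming $M = 1$ and that the per-example log-probabilities have already been computed by the three networks; all function and argument names are illustrative, not the authors' implementation:

```python
def asc_phi_theta_losses(log_p_s_given_c,   # log p_theta(s|c) for the sampled c, shape (B,) PyTorch tensor
                         log_q_c_given_s,   # log q_phi(c|s) for the same samples, shape (B,)
                         log_prior_c,       # log p(c) under the fixed language-model prior, shape (B,)
                         baseline,          # scalar centring baseline b (e.g. a running mean)
                         baseline_s,        # input-dependent baseline b(s) from an MLP, shape (B,)
                         lam=0.1):
    # Learning signal l(s, c) = log p_theta(s|c) - lambda * (log q_phi(c|s) - log p(c)).
    signal = log_p_s_given_c - lam * (log_q_c_given_s - log_prior_c)

    # theta is updated through the reconstruction term (ordinary backpropagation).
    theta_loss = -log_p_s_given_c.mean()

    # phi is updated with the likelihood-ratio (REINFORCE) estimator; the centred
    # signal is detached, so gradients flow only through log q_phi(c|s).
    centred = (signal - baseline - baseline_s).detach()
    phi_loss = -(centred * log_q_c_given_s).mean()

    # The baselines are trained to minimise the squared centred learning signal.
    baseline_loss = ((signal.detach() - baseline - baseline_s) ** 2).mean()

    return theta_loss, phi_loss, baseline_loss
```

An optimiser step on `phi_loss` then reproduces the likelihood-ratio gradient above, while `theta_loss` and `baseline_loss` are ordinary differentiable objectives.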

Forced-attention Sentence Compression
In neural variational inference, the effectiveness of training largely depends on the quality of the inference network's gradient estimator. Although we introduce a biased estimator by using pointer networks, it is still very difficult for the compression model to generate reasonable natural language sentences in the early stages of learning, which results in high variance for the gradient estimator. Here, we introduce our supervised forced-attention sentence compression (FSC) model to teach the compression model to generate coherent compressed sentences.
Rather than directly replicating the pointer network of the ASC model or using a typical sequence-to-sequence model, the FSC model employs a forced-attention strategy (Figure 2) that encourages the compressor to select words appearing in the source sentence while keeping the original full output vocabulary V. The forced-attention strategy is essentially a combined pointer network that chooses, at each recurrent state, whether to select a word from the source sentence s or to predict a word from V. Hence, the combined pointer network learns to copy source words while predicting the word sequences of compressions. By sharing the pointer network between the ASC and FSC models, the biased estimator obtains further positive bias from training on a small set of labelled source-compression pairs.
Here, the FSC model makes use of the compression model of the ASC model defined above, where $\alpha_j(i)$, $i \in \{1, \ldots, |s|\}$, denotes the probability of selecting $s_i$ as the prediction for $c_j$. On the basis of the pointer network, we further introduce the probability of predicting $c_j$ from the full vocabulary, where $\beta_j(w)$, $w \in \{1, \ldots, |V|\}$, denotes the probability of selecting the $w$th word of $V$ as the prediction for $c_j$. To combine these two probabilities in the RNN, we define a selection factor $t_j$ for each state output, which computes the semantic similarity between the current state and the attention vector; the probability distribution over compressed words is then a $t_j$-weighted mixture of the pointer distribution and the vocabulary softmax (a hedged reconstruction is given below). Essentially, the FSC model extends the compression model of ASC by combining the pointer network with a softmax output layer over the full vocabulary. We therefore employ $\phi$ to denote the parameters of the FSC model $p_\phi(c|s)$, which subsume the parameters of the variational distribution $q_\phi(c|s)$.
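The exact parametrisation is not reproduced here; the following is a hedged sketch of a standard copy-mechanism combination consistent with the description above, where $\sigma$ is the logistic sigmoid, $d_j$ is the attention (context) vector at step $j$, and $W_t$ is an assumed trainable parameter:

\[
t_j = \sigma\big(h_j^{c\,\top} W_t\, d_j\big), \qquad
p_\phi(c_j = w \mid c_{1:j-1}, s) \;=\; t_j \sum_{i:\, s_i = w} \alpha_j(i) \;+\; (1 - t_j)\,\beta_j(w).
\]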

Semi-supervised Training
As the auto-encoding sentence compression (ASC) model grants the ability to make use of an unlabelled dataset, we explore a semi-supervised training framework for the ASC and FSC models. In this scenario we have a labelled dataset containing source-compression parallel sentences, $(s, c) \in \mathbb{L}$, and an unlabelled dataset containing only source sentences, $s \in \mathbb{U}$. The FSC model is trained on $\mathbb{L}$, where we learn the compression model by maximising the log-probability $\log p_\phi(c|s)$, while the ASC model is trained on $\mathbb{U}$, where we maximise the modified variational lower bound $\mathcal{L}_\lambda(s)$. The joint objective of semi-supervised learning sums these two terms (stated below). Hence, the pointer network is trained on both the unlabelled data $\mathbb{U}$ and the labelled data $\mathbb{L}$ by a mixed criterion of REINFORCE and cross-entropy.
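Written out, the joint objective described above takes the following form (a reconstruction from the surrounding text rather than a verbatim copy of the original equations):

\[
J \;=\; \sum_{(s,c) \in \mathbb{L}} \log p_\phi(c|s) \;+\; \sum_{s \in \mathbb{U}} \mathcal{L}_\lambda(s),
\qquad
\mathcal{L}_\lambda(s) = \mathbb{E}_{q_\phi(c|s)}\big[\log p_\theta(s|c)\big] - \lambda\, D_{\mathrm{KL}}\big(q_\phi(c|s)\,\|\,p(c)\big).
\]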

Related Work
As one of the typical sequence-to-sequence tasks, sentence-level summarisation has been explored by a series of discriminative encoder-decoder neural models (e.g. Filippova et al., 2015). Gu et al. (2016) apply a similar idea of combining pointer networks with a softmax output. However, different from all of these discriminative models, we explore generative models for sentence compression. Instead of training a discriminative model on a large labelled dataset, our original intuition for introducing the combined pointer network is to bridge the unsupervised generative model (ASC) and the supervised model (FSC) so that we can utilise a large additional dataset, either labelled or unlabelled, to boost compression performance. Dai and Le (2015) also explored semi-supervised sequence learning, but in a purely deterministic model focused on learning better vector representations.
Recently, variational auto-encoders have been applied in a variety of fields as deep generative models, including computer vision (Kingma and Welling, 2014). Prior work has also proposed generative models that explicitly extract syntactic relationships among words and phrases, which further supports the argument that generative models can be a statistically efficient method for learning neural networks from small data.

Dataset & Setup
We evaluate the proposed models on the standard Gigaword sentence compression dataset. This dataset was generated by pairing the headline of each article with its first sentence to create a source-compression pair. Rush et al. (2015) provided scripts to filter out outliers, resulting in roughly 3.8M training pairs, a 400K validation set, and a 400K test set. In the following experiments all models are trained on the training set with different data sizes and tested on a 2K subset, which is identical to the test set used by Rush et al. (2015). We decode the sentences by beam search (k = 5) and evaluate with the full-length Rouge score.
For the ASC and FSC models, we use 256 dimensions for both the hidden units and the lookup tables. In the ASC model, we apply a 3-layer bidirectional RNN with skip connections as the encoder, a 3-layer RNN pointer network with skip connections as the compressor, and a 1-layer vanilla RNN with soft attention as the decoder. The language model prior is trained on the article sentences of the full training set using a 3-layer vanilla RNN with 0.5 dropout. To lower the computational cost, we apply different vocabulary sizes for the encoder and compressor (119,506 and 68,897), which corresponds to the settings of Rush et al. (2015). Specifically, the vocabulary of the decoder is filtered by taking the most frequent 10,000 words from the vocabulary of the encoder, with the remaining words tagged as '<unk>'. For further efficiency, we use only one sample for the gradient estimator. We optimise the model with Adam (Kingma and Ba, 2015), a 0.0002 learning rate and 64 sentences per batch; the model converges in 5 epochs. Except for the pre-trained language model, we do not use dropout or embedding initialisation for the ASC and FSC models.
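For reference, the stated hyperparameters can be collected into a single configuration; the dictionary below simply summarises the settings listed above (the key names are ours, not from any released code):

```python
# Hyperparameters as reported in the setup above; key names are illustrative.
asc_fsc_config = {
    "hidden_dim": 256,            # hidden units and lookup-table (embedding) size
    "encoder": {"type": "bidirectional_rnn", "layers": 3, "skip_connections": True},
    "compressor": {"type": "pointer_rnn", "layers": 3, "skip_connections": True},
    "decoder": {"type": "rnn_soft_attention", "layers": 1},
    "lm_prior": {"layers": 3, "dropout": 0.5},
    "vocab": {"encoder": 119506, "compressor": 68897, "decoder": 10000},
    "gradient_samples": 1,        # one sample c ~ q_phi(c|s) per update
    "optimizer": {"name": "adam", "lr": 2e-4, "batch_size": 64},
    "epochs_to_converge": 5,
    "kl_scale_lambda": 0.1,       # lambda from the ASC objective
}
```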

Extractive Summarisation
The first set of experiments evaluates the models on extractive summarisation. Throughout, R-1, R-2 and R-L denote the Rouge-1, Rouge-2 and Rouge-L scores respectively. We denote the joint models by ASC+FSC1 and ASC+FSC2, where ASC is trained on unlabelled data and FSC is trained on labelled data. The ASC+FSC1 model employs equally sized labelled and unlabelled datasets, where the article sentences of the unlabelled data are the same article sentences as in the labelled data, so no additional unlabelled data is used in this case. The ASC+FSC2 model employs the full unlabelled dataset in addition to the existing labelled dataset, which is the true semi-supervised setting. Table 1 presents the test Rouge scores on extractive compression. We can see that the ASC+FSC1 model achieves significant improvements in F-1 scores compared to the supervised FSC model trained only on labelled data. Moreover, for a fixed labelled data size, the ASC+FSC2 model achieves better performance than the ASC+FSC1 model by using additional unlabelled data, which means semi-supervised learning works in this scenario. Interestingly, learning on the unlabelled data largely increases precision (though recall does not benefit from it), which leads to significant improvements in the F-1 Rouge scores. Surprisingly, the extractive ASC+FSC1 model trained on the full labelled data outperforms the abstractive NABS baseline (Rush et al., 2015) (in Table 4).

Abstractive Summarisation
The second set of experiments evaluates performance on abstractive summarisation (Table 2). Consistently, we see that adding the generative objective to the discriminative model (ASC+FSC1) results in a significant boost in all the Rouge scores, while employing extra unlabelled data increases performance further (ASC+FSC2). This validates the effectiveness of transferring the knowledge learned on unlabelled data to supervised abstractive summarisation.
In Figure 3, we present the validation perplexity to compare the abilities of the three models to learn the compression language. ASC+FSC1 (red) employs the same dataset for unlabelled and labelled training, while ASC+FSC2 (black) employs the full unlabelled dataset. Here, the joint ASC+FSC1 model obtains better perplexities than the single discriminative FSC model, but there is not much difference between ASC+FSC1 and ASC+FSC2 as the size of the labelled dataset grows. From the perspective of language modelling, the generative ASC model indeed helps the discriminative model learn to generate good summary sentences. Table 3 displays the validation perplexities of the benchmark models, where the joint ASC+FSC1 model trained on the full labelled and unlabelled datasets performs best at modelling the compression language.

Table 4 compares the test Rouge scores on abstractive summarisation. Encouragingly, the semi-supervised model ASC+FSC2 outperforms the baseline model NABS when trained on 500K supervised pairs, which is only about an eighth of the supervised data. The strongest previously reported system exploits the full limits of discriminative RNN encoder-decoder models by incorporating a sampled softmax, an expanded vocabulary, additional lexical features, and combined pointer networks, which yields the best prior performance listed in Table 4. However, when all of the supervised data is employed with the mixed objective, our model outperforms this best previously reported result.

Figure 3: Perplexity on the validation dataset.

Discussion
From the perspective of generative models, a significant contribution of our work is a process for reducing the variance of discrete sampling-based variational inference. The first step is to introduce two baselines via the control variates method, since the reparameterisation trick is not applicable to discrete latent variables. However, it is the second step of using a pointer network as the biased estimator that makes the key contribution: it results in a much smaller state space, bounded by the length of the source sentence (mostly between 20 and 50 tokens), compared to the full vocabulary. The final step is to apply the FSC model to transfer the knowledge learned from the supervised data to the pointer network. This further reduces the sampling variance by acting as a sort of bootstrap or constraint on the unsupervised latent space, which could encode almost anything but which thus becomes biased towards matching the supervised distribution. By using these variance reduction methods, the ASC model is able to carry out effective variational inference for the latent language model, so that it learns to summarise sentences from the large unlabelled training data.
In a different vein, according to the reinforcement learning interpretation of sequence-level training (Ranzato et al., 2016), the compression model of the ASC model acts as an agent that iteratively generates words (takes actions) to compose the compression sentence, and the reconstruction model acts as the reward function evaluating the quality of the compressed sentence, which is provided as a reward signal. Ranzato et al. (2016) present a thorough empirical evaluation on three different NLP tasks using an additional sequence-level reward (BLEU and Rouge-2) to train the models. In the context of this paper, we apply a variational lower bound (mixing reconstruction error and KL divergence regularisation) instead of an explicit Rouge score. The ASC model is thus granted the ability to exploit essentially unlimited unlabelled data. In addition, we introduce a supervised FSC model to teach the compression model to generate stable sequences instead of starting from a random policy. In this case, the pointer network that bridges the supervised and unsupervised models is trained by a mixed criterion of REINFORCE and cross-entropy in an incremental learning framework. Eventually, according to the experimental results, the joint ASC and FSC model is able to learn a robust compression model by exploiting both labelled and unlabelled data, outperforming the single discriminative compression models trained only with a cross-entropy objective.

Conclusion
In this paper we have introduced a generative model for jointly modelling pairs of sequences and evaluated its efficacy on the task of sentence compression. The variational auto-encoding framework provided an effective inference algorithm for this approach and also allowed us to explore combinations of discriminative (FSC) and generative (ASC) compression models. The evaluation results show that supervised training of the combination of these models improves upon the state-of-the-art performance for the Gigaword compression dataset. When we train the supervised FSC model on a small amount of labelled data and the unsupervised ASC model on a large set of unlabelled data the combined model is able to outperform previously reported benchmarks trained on a great deal more supervised data. These results demonstrate that we are able to model language as a discrete latent variable in a variational auto-encoding framework and that the resultant generative model is able to effectively exploit both supervised and unsupervised data in sequence-to-sequence tasks.
Table 5: Examples of the compression sentences. src and ref are the source and reference sentences provided in the test set. asc_a and asc_e are the abstractive and extractive compressions decoded by the joint model ASC+FSC1, and fsc_a denotes the abstractive compression obtained by the FSC model.

src: the sri lankan government on wednesday announced the closure of government schools with immediate effect as a military campaign against tamil separatists escalated in the north of the country .
ref: sri lanka closes schools as war escalates
asc_a: sri lanka closes government schools
asc_e: sri lankan government closure schools escalated
fsc_a: sri lankan government closure with tamil rebels closure

src: factory orders for manufactured goods rose #.# percent in september , the commerce department said here thursday .
ref: us september factory orders up #.# percent
asc_a: us factory orders up #.# percent in september
asc_e: factory orders rose #.# percent in september
fsc_a: factory orders #.# percent in september

src: hong kong signed a breakthrough air services agreement with the united states on friday that will allow us airlines to carry freight to asian destinations via the territory .
ref: hong kong us sign breakthrough aviation pact
asc_a: us hong kong sign air services agreement
asc_e: hong kong signed air services agreement with united states
fsc_a: hong kong signed air services pact with united states

src: a swedish un soldier in bosnia was shot and killed by a stray bullet on tuesday in an incident authorities are calling an accident , military officials in stockholm said tuesday .
ref: swedish un soldier in bosnia killed by stray bullet
asc_a: swedish un soldier killed in bosnia
asc_e: swedish un soldier shot and killed
fsc_a: swedish soldier shot and killed in bosnia

src: tea scores on the fourth day of the second test between australia and pakistan here monday .
ref: australia vs pakistan tea scorecard
asc_a: australia v pakistan tea scores
asc_e: australia tea scores
fsc_a: tea scores on #th day of #nd test

src: india won the toss and chose to bat on the opening day in the opening test against west indies at the antigua recreation ground on friday .
ref: india win toss and elect to bat in first test
asc_a: india win toss and bat against west indies
asc_e: india won toss on opening day against west indies
fsc_a: india chose to bat on opening day against west indies

src: a powerful bomb exploded outside a navy base near the sri lankan capital colombo tuesday , seriously wounding at least one person , military officials said .
ref: bomb attack outside srilanka navy base
asc_a: bomb explodes outside sri lanka navy base
asc_e: bomb outside sri lankan navy base wounding one
fsc_a: bomb exploded outside sri lankan navy base

src: press freedom in algeria remains at risk despite the release on wednesday of prominent newspaper editor mohamed <unk> after a two-year prison sentence , human rights organizations said .
ref: algerian press freedom at risk despite editor 's release <unk> picture
asc_a: algeria press freedom remains at risk
asc_e: algeria press freedom remains at risk
fsc_a: press freedom in algeria at risk