Length bias in Encoder Decoder Models and a Case for Global Conditioning

Encoder-decoder networks are popular for probabilistic modeling of sequences in many applications. These models use the power of the Long Short-Term Memory (LSTM) architecture to capture the full dependence among variables and are not subject to the label bias of locally conditioned models that assume partial conditional independence. However, in practice they exhibit a bias towards short sequences even when using a beam search to find the optimal sequence. Surprisingly, accuracy sometimes even declines with increasing beam size. In this paper we show that such phenomena are due to a discrepancy between the full-sequence margin and the per-element margin enforced by the locally conditioned training objective of an encoder-decoder model. The discrepancy more adversely impacts long sequences, explaining the bias towards predicting short sequences. For the case where the predicted sequences come from a closed set, we show that a globally conditioned model alleviates the above problems of encoder-decoder models. From a practical point of view, our proposed model also eliminates the need for a beam search during inference, which reduces to an efficient dot-product based search in a vector space.


Introduction
In this paper we investigate the use of neural networks for modeling the conditional distribution Pr(y|x) over sequences y of discrete tokens in response to a complex input x, which can be another sequence or an image. Such models have applications in machine translation (Bahdanau et al., 2014; Sutskever et al., 2014), image captioning (Vinyals et al., 2015), response generation in emails (Kannan et al., 2016), and conversations (Khaitan, 2016; Vinyals and Le, 2015; Li et al., 2015).
The most popular neural network for probabilistic modeling of sequences in the above applications is the encoder-decoder (ED) network (Sutskever et al., 2014). An ED network first encodes an input x into a vector, which is then used to initialize a recurrent neural network (RNN) for decoding the output y. The decoder RNN factorizes Pr(y|x) using the chain rule as ∏_j Pr(y_j | y_1, ..., y_{j−1}, x), where y_1, ..., y_n denote the tokens in y. This factorization does not entail any conditional independence assumption among the {y_j} variables. This is unlike earlier sequence models like CRFs (Lafferty et al., 2001) and MEMMs (McCallum et al., 2000) that typically assume that a token is independent of all other tokens given its adjacent tokens. Modern-day RNNs like LSTMs promise to capture non-adjacent and long-term dependencies by summarizing the set of previous tokens in a continuous, high-dimensional state vector. Within the limits of parameter capacity allocated to the model, the ED, by virtue of exactly factorizing the token sequence, is consistent.
However, when we created and deployed an ED model for a chat suggestion task we observed several counter-intuitive patterns in its predicted outputs. Even after training the model over billions of examples, the predictions were systematically biased towards short sequences. Such bias has also been seen in translation (Cho et al., 2014). Another curious phenomenon was that the accuracy of the predictions sometimes dropped with increasing beam size, more than could be explained by statistical variations of a well-calibrated model (Ranzato et al., 2016). One conjecture is that since ED models locally normalize the probability around each token, they are subject to label bias as in locally conditioned MEMM models. However, all illustrations of label bias use local observations for transitions (Bottou, 1991; Lafferty et al., 2001; Andor et al., 2016). In contrast, the ED model transitions on the entire input, and the chain rule is an exact factorization of the distribution. Indeed, one of the suggestions in (Bottou, 1991) to surmount label bias is to use a fully connected network, which the ED model already does.
In this paper we expose a margin discrepancy in the training loss of encoder-decoder models to explain the above problems in their predictions. We show that the training loss of the ED network often underestimates the margin separating a correct sequence from an incorrect shorter sequence. The discrepancy gets more severe as the length of the correct sequence increases. That is, even after the training loss converges to a small value, full inference on the training data can incur errors, causing the model to be underfitted for long sequences in spite of low training cost. We call this the length bias problem.
We propose an alternative model that avoids the margin discrepancy by globally conditioning the Pr(y|x) distribution. Our model is applicable in the many practical tasks where the space of allowed outputs is closed. For example, the responses generated by the smart reply feature of Inbox are restricted to lie within a hand-screened whitelist of responses W ⊂ Y (Kannan et al., 2016), and the same holds for a recent conversation assistant feature of Google's Allo (Khaitan, 2016). Our model uses a second RNN encoder to represent the output as another fixed-length vector. We show that our proposed encoder-encoder model produces better calibrated whole-sequence probabilities and alleviates the length bias problem of ED models on two conversation tasks. A second advantage of our model is that inference is significantly faster than in ED models and is guaranteed to find the globally optimal solution. In contrast, inference in ED models requires an expensive beam search which is both slow and not guaranteed to find the optimal sequence.

Length Bias in Encoder-Decoder Models
In this section we analyze the widely used encoder-decoder neural network for modeling Pr(y|x) over the space of discrete output sequences. We use y_1, ..., y_n to denote the tokens in a sequence y. Each y_i is a discrete symbol from a finite dictionary V of size m. Typically, m is large. The length n of a sequence is allowed to vary from sequence to sequence even for the same input x. A special token EOS ∈ V is used to mark the end of a sequence. We use Y to denote the space of such valid sequences and θ to denote the parameters of the model.

The encoder-decoder network
The Encoder-Decoder (ED) network represents Pr(y|x, θ) by applying the chain rule to exactly factorize it as ∏_{t=1}^{n} Pr(y_t | y_1, ..., y_{t−1}, x, θ). First, an encoder with parameters θ_x ⊂ θ is used to transform x into a d-dimensional real vector v_x. The network used for the encoder depends on the form of x; for example, when x is also a sequence, the encoder could be an RNN. The decoder then computes each Pr(y_t | y_1, ..., y_{t−1}, v_x, θ) as

Pr(y_t | y_1, ..., y_{t−1}, v_x, θ) = P(y_t | s_t, θ),   (1)

where s_t is a state vector implemented using a recurrent neural network as

s_t = RNN(s_{t−1}, θ_{E,y_{t−1}}; θ_R),   (2)

where RNN() is typically a stack of LSTM cells that captures long-term dependencies, θ_{E,y} ⊂ θ are parameters denoting the embedding for token y, and θ_R ⊂ θ are the parameters of the RNN. The function P(y|s, θ) that outputs the distribution over the m tokens is a softmax:

P(y|s, θ) = e^{θ_{S,y} · s} / (e^{θ_{S,1} · s} + ... + e^{θ_{S,m} · s}),   (3)

where θ_{S,y} ⊂ θ denotes the parameters for token y in the final softmax.
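To make Equations (1)-(3) concrete, the following is a minimal NumPy sketch of how an ED decoder accumulates log Pr(y|x, θ) token by token via the chain rule. The single-vector state, the generic `rnn_step` callback, and all argument names are illustrative assumptions rather than the exact architecture used in our experiments.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def sequence_log_prob(y, v_x, embed, rnn_step, theta_S, bos_id):
    """log Pr(y | x) of an ED model, computed with the chain rule.

    y        : list of token ids, ending with the EOS id
    v_x      : encoder output used to initialize the decoder state
    embed    : (m, d) token embedding matrix (rows play the role of theta_{E,y})
    rnn_step : function (state, input_embedding) -> new state
    theta_S  : (m, d) softmax parameter matrix (rows play the role of theta_{S,y})
    """
    s, prev, log_p = v_x, bos_id, 0.0
    for t in y:
        s = rnn_step(s, embed[prev])      # s_t from s_{t-1} and y_{t-1}, Eq. (2)
        p = softmax(theta_S @ s)          # P(y_t | s_t, theta), Eqs. (1), (3)
        log_p += np.log(p[t])             # chain-rule accumulation
        prev = t
    return log_p
```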

The Origin of Length Bias
The ED network builds a single probability distribution over sequences of arbitrary length. For an input x, the network needs to choose the highest-probability y among valid candidate sequences of widely different lengths. Unlike in applications like entity tagging and parsing, where the length of the output is determined by the input, in applications like response generation valid outputs can be of widely varying length. Therefore, Pr(y|x, θ) should be well-calibrated over all sequence lengths. Indeed, under infinite data and model capacity the ED model is consistent and will represent all sequence lengths faithfully. In practice, when training data is finite, we show that the ED model is biased against long sequences. Other researchers (Cho et al., 2014) have reported this bias but we are not aware of any analysis like ours explaining the reasons for this bias.
Claim 2.1. The training loss of the ED model underestimates the margin separating long sequences from short ones.
Proof. Let x be an input for which a correct output y+ is of length ℓ and an incorrect output y− is of length 1. Ideally, the training loss should put a positive margin between y+ and y−, which is log Pr(y+|x) − log Pr(y−|x). Let us investigate whether the maximum likelihood training objective of the ED model achieves that. We can write this objective as follows:

max_θ log Pr(y+|x, θ) = max_θ [ log Pr(y+_1 | x, θ) + Σ_{j=2}^{ℓ} log Pr(y+_j | y+_1, ..., y+_{j−1}, x, θ) ].   (4)

Only the first term in the above objective is involved in enforcing a margin between y+ and y−, because log Pr(y−|x, θ) is determined essentially by the probability assigned to its single token y−_1. Writing m_L = log Pr(y+_1 | x, θ) − log Pr(y−_1 | x, θ) for this local margin and m_R = Σ_{j=2}^{ℓ} log Pr(y+_j | y+_1, ..., y+_{j−1}, x, θ) for the remaining terms, it is easy to see that our desired margin between y+ and y− is log Pr(y+|x) − log Pr(y−|x) = m_L + m_R. Assuming two possible labels for the first position (m = 2), the training objective in Equation 4 can now be rewritten in terms of the margins as

max_θ [ −log(1 + e^{−m_L}) + m_R ].

We next argue that this objective is not aligned with our ideal goal of making m_L + m_R positive.
First, note that m_R is a log probability which under finite parameters will be non-zero (i.e., strictly negative). Second, even though m_L can take any arbitrary finite value, the gain in the training objective from increasing m_L drops rapidly once m_L is positive. When training is regularized for finite data, the trainer will converge at a small positive value of m_L. Finally, we show that the value of m_R decreases with increasing sequence length. For each position j in the sequence, we add to m_R the log-probability of y+_j, whose maximum value is log(1 − ε), where ε is non-zero and decreasing with the magnitude of the parameters θ. In general, the log-probability can be a much smaller negative value when the input x has multiple correct responses, as is common in conversation tasks. For example, an input like x = 'How are you?' has many possible correct outputs: y ∈ {'I am good', 'I am great', 'I am fine, how about you?', etc.}. We illustrate a worst-case value of m_R. Let the set of correct outputs for an x be of length ℓ with a '1' at the first position and '0' or '1' with equal frequency in all other positions. Then m_R will be estimated to be close to −(ℓ − 1) log 2. This implies that our desired margin m_g = m_L + m_R could be reduced by up to (ℓ − 1) log 2 and may not remain positive even though m_L is positive. This mismatch between the local margin optimized during training and the global margin explains the length bias observed by us and others (Cho et al., 2014). During inference, a shorter sequence, which accumulates a smaller m_R penalty, wins over longer sequences. This mismatch also explains why increasing beam size sometimes leads to a drop in accuracy (Ranzato et al., 2016).¹
When the beam size is large, we are more likely to dig out short sequences that were separated from the correct sequence only by the local margin. We show empirically in Section 4.3 that for long sequences a larger beam size hurts accuracy, whereas for short sequences the effect is the opposite.
¹ Figure 6 in that paper shows the BLEU score dropping by 0.5 as the beam size is increased from 3 to 10.
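As a concrete illustration of the worst-case construction in the proof above, the short sketch below plugs in an illustrative local margin of 3.4 (the value the ED model reaches in Figure 2) and shows how the global margin m_g = m_L + m_R turns negative once the correct sequence is long enough. The numbers are for illustration only.

```python
import math

def global_margin(m_L, length):
    """Worst-case global margin m_g = m_L + m_R for the construction above:
    positions 2..length of the correct output are '0' or '1' with equal
    frequency, so each contributes about -log(2) to m_R."""
    m_R = -(length - 1) * math.log(2)
    return m_L + m_R

m_L = 3.4  # a seemingly comfortable local margin
for length in (2, 4, 6, 8):
    print(length, round(global_margin(m_L, length), 2))
# prints: 2 2.71, 4 1.32, 6 -0.07, 8 -1.45 -- the margin flips sign around length 6
```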

Proposed fixes to the ED models
Many ad hoc approaches have been used to alleviate length bias directly or indirectly. Some resort to normalizing the probability by the full sequence length (Cho et al., 2014; Graves, 2013), whereas (Abadie et al., 2014) proposes segmenting longer sentences into shorter phrases. (Cho et al., 2014) conjectures that the length bias of ED models could be because of the limited representation power of the encoder network. Later, more powerful encoders based on attention achieved greater accuracy on long sequences (Bahdanau et al., 2014). Attention can be viewed as a mechanism for improving the capacity of the local models, thereby making the local margin m_L more definitive. But attention is not effective for all tasks; for example, (Vinyals and Le, 2015) report that attention was not useful for conversation.
Recently, (Bengio et al., 2015; Ranzato et al., 2016) propose another modification to the ED training objective where the true token y_{j−1} in the training term log Pr(y_j | y_1, ..., y_{j−1}) is replaced by a sample or the top-k modes from the posterior at position j − 1 via a careful schedule. Incidentally, this fix also helps to indirectly alleviate the length bias problem. The sampling causes incorrect tokens to be used as previous history for producing a correct token. If earlier the incorrect token was followed by a low-entropy EOS token, now that state should also admit the correct token, causing a decrease in the probability of EOS, and therefore of the short sequence.
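A minimal sketch of this kind of schedule is given below; the linear decay of the probability of feeding the ground-truth token and the function name are illustrative assumptions, not the exact schedule of (Bengio et al., 2015) or (Ranzato et al., 2016).

```python
import numpy as np

def pick_decoder_input(true_prev, model_dist, step, total_steps, rng):
    """Scheduled sampling for the decoder input at position j.

    With probability p(step) feed the ground-truth token y_{j-1}; otherwise
    feed a token sampled from the model's posterior at position j-1
    (model_dist must be a probability vector over the vocabulary).
    Here p decays linearly from 1 to 0 over training, an illustrative choice.
    """
    p_truth = max(0.0, 1.0 - step / total_steps)
    if rng.random() < p_truth:
        return true_prev
    return int(rng.choice(len(model_dist), p=model_dist))

rng = np.random.default_rng(0)
prev = pick_decoder_input(true_prev=2, model_dist=[0.1, 0.2, 0.7],
                          step=5_000_000, total_steps=30_000_000, rng=rng)
```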
In the next section we propose our more direct fix to the margin discrepancy problem.

Globally Conditioned Encoder-Encoder Models
We represent Pr(y|x, θ) as a globally conditioned model e^{s(y|x,θ)} / Z(x, θ), where s(y|x, θ) denotes a score for output y and Z(x, θ) denotes the shared normalizer. We show in Section 3.3 why such global conditioning solves the margin discrepancy problem of the ED model. The intractable partition function in global conditioning introduces several new challenges during training and inference. In this section we discuss how we designed our network to address them.
Our model assumes that during inference the output has to be selected from a given whitelist of responses W ⊂ Y. In spite of this restriction, the problem does not reduce to multi-class classification for two important reasons. First, during training we wish to tap all available input-output pairs, including the significantly more abundant outputs that do not come from the whitelist. Second, the whitelist could be very large, and treating each output sequence as an atomic class can limit the generalization achievable by modeling at the level of tokens in the sequence.

Modeling s(y|x, θ)
We use a second encoder to convert y into a vector v_y of the same size as the vector v_x obtained by encoding x as in an ED network. The parameters used to encode v_x and v_y are disjoint. As we are only interested in a fixed-dimensional output, unlike in ED networks, we have complete freedom in choosing the type of network to use for this second encoder. For our experiments, we have chosen to use an RNN with LSTM cells. Experimenting with other network architectures, such as bidirectional RNNs, remains an interesting avenue for future work. The score s(y|x, θ) is the dot product between v_y and v_x.
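A minimal sketch of the scoring function is shown below, assuming two already-trained encoder callables with disjoint parameters; the names `encode_context` and `encode_label` are placeholders.

```python
import numpy as np

def score(x_tokens, y_tokens, encode_context, encode_label):
    """s(y | x, theta): dot product of the two encodings.

    encode_context and encode_label are separate RNN encoders with disjoint
    parameters; each maps a token sequence to a d-dimensional vector.
    """
    v_x = encode_context(x_tokens)   # context encoding
    v_y = encode_label(y_tokens)     # label encoding
    return float(np.dot(v_x, v_y))
```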

Training and Inference
During training we use maximum likelihood to estimate θ given a large set of valid input-output pairs {(x_1, y_1), ..., (x_N, y_N)}, where each y_i belongs to Y, which in general is much larger than W. Our main challenge during training is that Y is intractably large for computing Z. We decompose Z as

Z(x_i, θ) = e^{s(y_i|x_i,θ)} + Σ_{y ∈ Y∖{y_i}} e^{s(y|x_i,θ)}

and then resort to estimating the last term using importance sampling. Constructing a high-quality proposal distribution over Y∖{y_i} is difficult in its own right, so in practice we make the following approximations. We extract the most common T sequences across a data set into a pool of negative examples. We estimate the empirical prior probability of the sequences in that pool, Q(y), and then draw k samples from this distribution. We take care to remove the true sequence from this distribution so as to remove the need to estimate its prior probability.
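The sketch below illustrates the sampled approximation of log Pr(y_i|x, θ) = s(y_i|x, θ) − log Z(x, θ) described above. Representing the negative pool as dictionaries of precomputed scores and prior probabilities, and sampling with replacement, are simplifying assumptions made for illustration.

```python
import numpy as np

def sampled_log_likelihood(s_pos, neg_pool_scores, neg_pool_prior, k, rng):
    """Sampled approximation of log Pr(y_i | x) = s(y_i|x) - log Z(x).

    s_pos           : score s(y_i | x) of the true label sequence
    neg_pool_scores : dict y -> s(y | x) for the pool of common sequences
    neg_pool_prior  : dict y -> empirical prior Q(y) over that pool
                      (with the true sequence already removed)
    k               : number of negative samples drawn from Q
    """
    pool = list(neg_pool_prior)
    probs = np.array([neg_pool_prior[y] for y in pool])
    probs = probs / probs.sum()
    samples = rng.choice(len(pool), size=k, p=probs)
    # importance-sampling estimate of the sum over Y \ {y_i}
    z_rest = np.mean([np.exp(neg_pool_scores[pool[j]]) / probs[j]
                      for j in samples])
    z = np.exp(s_pos) + z_rest
    return s_pos - np.log(z)
```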
During inference, given an input x we need to find argmax_{y∈W} s(y|x, θ). This task can be performed efficiently in our network because the vectors v_y for the sequences y in the whitelist W can be precomputed. Given an input x, we compute v_x and take its dot product with the precomputed vectors to find the highest-scoring response. This gives us the optimal response. When W is very large, we can obtain an approximate solution by indexing the vectors v_y of W using recent methods specifically designed for dot-product based retrieval (Guo et al., 2016).
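A sketch of the exact inference step, assuming the whitelist encodings have already been stacked into a matrix:

```python
import numpy as np

def best_responses(v_x, whitelist_vectors, whitelist, top_k=1):
    """Exact inference: argmax over W of s(y|x) = v_x . v_y.

    whitelist_vectors : (|W|, d) matrix whose rows are the precomputed v_y
    whitelist         : list of the corresponding response sequences
    """
    scores = whitelist_vectors @ v_x           # one matrix-vector product
    top = np.argsort(-scores)[:top_k]
    return [(whitelist[i], float(scores[i])) for i in top]
```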

Margin
It is well known that the maximum likelihood training objective of a globally normalized model is margin maximizing for all outputs (Rosset et al., 2003). We show how this works for our earlier example of one correct sequence y+ of length ℓ and an incorrect y− of length 1. The training objective is now

max_θ [ s(y+|x, θ) − log Z(x, θ) ].

This objective directly maximizes the margin log Pr(y+) − log Pr(y−) = s(y+|x, θ) − s(y−|x, θ) separating the long y+ from the short y−. In Figure 2 we show this gap pictorially for the example of Section 2.2, where the set of possible correct outputs are all of length ℓ = 5 with a '1' at the first position and '0' or '1' with equal frequency in all other positions. We plot the loss of the ED model and our globally conditioned model against increasing steps of training with a stochastic gradient optimizer. We also plot the margin separating the correct outputs from a negative y− comprising only a '0' at the first position. We see that the loss drops and the margin increases for both models as training progresses. But the margin for the ED model stays negative even up to step 1700, when the ED loss is close to its minimum value and the local margin m_L is 3.4. In contrast, the globally conditioned model attains a positive margin early during training.
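To see the margin maximization explicitly, one can restrict the normalizer to just the two candidate outputs (a simplification for exposition; the actual Z sums over all of Y):

```latex
\log \Pr(y^{+}\mid x,\theta)
  = s(y^{+}\mid x,\theta) - \log\!\left(e^{s(y^{+}\mid x,\theta)} + e^{s(y^{-}\mid x,\theta)}\right)
  = -\log\!\left(1 + e^{-m_g}\right),
\qquad m_g = s(y^{+}\mid x,\theta) - s(y^{-}\mid x,\theta).
```

The objective can only be increased by driving m_g up, irrespective of the length of y+, which is exactly the full-sequence margin that the ED objective fails to control.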

Datasets and Tasks
We contrast the quality of the ED and encoder-encoder models on two conversational datasets: Open Subtitles and Reddit Comments.

Open Subtitles Dataset
The Open Subtitles dataset consists of transcriptions of spoken dialog in movies and television shows (Lison and Tiedemann, 2016). We restrict our modeling to the English subtitles, which results in 319 million utterances. Each utterance is tokenized into word and punctuation tokens, with the start and end marked by the BOS and EOS tokens. We randomly split out 90% of the utterances into the training set, placing the rest into the validation set. As speaker information is not present in this data set, we treat each utterance as a label sequence, with the preceding utterances as context.

Reddit Comments Dataset
The Reddit Comments dataset is constructed from publicly available user comments on submissions on the Reddit website. Each submission is associated with a list of directed comment trees. In total, there are 41 million submissions and 501 million comments. We tokenize the individual comments in the same way as the utterances in the Open Subtitles dataset. We randomly split 90% of the submissions and their associated comments into the training set, and the rest into the validation set. We use each comment (except the ones with no parent comments) as a label sequence, with the context sequence composed of its ancestor comments.

Whitelist and Vocabulary
From each dataset, we derived a dictionary of the 20 thousand most commonly used tokens. Additionally, each dictionary contained the unknown token (UNK) and the BOS and EOS tokens. Tokens in the datasets which were not present in their associated vocabularies were replaced by the UNK token.
From each data set, we extracted the 10 million most common label sequences that contained at most 100 tokens. This set of sequences was used as the negative sample pool for the encoder-encoder models.
The 100 thousand most common sequences out of those sets were used for evaluation. We removed any sequence from this set that contained UNK tokens, to simplify inference.

Sequence Prediction Task
To evaluate the quality of these models, we task them with predicting the true label sequence given its context. Due to the computational expense, we sub-sample the validation data sets to around 1 million context-label pairs. We additionally restrict the context-label pairs such that the label sequence is present in the evaluation set of common messages. We use recall@K as a measure of accuracy of the model predictions. It is defined as the fraction of test pairs where the correct label is within the K most probable predictions according to the model. For encoder-encoder models we use an exhaustive search over the evaluation set of common messages. For ED models we use a beam search with width ranging from 1 to 15 over a token prefix trie constructed from that set of messages.
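A minimal sketch of the recall@K computation, assuming ranked prediction lists are already available for every test pair:

```python
def recall_at_k(ranked_predictions, true_labels, k):
    """Fraction of test pairs whose true label appears in the top-K predictions.

    ranked_predictions : list of prediction lists, one per test pair,
                         ordered from most to least probable
    true_labels        : list of the corresponding correct label sequences
    """
    hits = sum(1 for preds, truth in zip(ranked_predictions, true_labels)
               if truth in preds[:k])
    return hits / len(true_labels)
```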

Model Structure and Training Procedure
The context encoder, label encoder, and decoder are implemented using LSTM recurrent networks (Hochreiter and Schmidhuber, 1997) with peephole connections (Sak et al., 2014). The context and label token sequences were mapped to embedding vectors using a lookup table that is trained jointly with the rest of the model parameters. The recurrent nets were unrolled in time up to 100 time steps, with label sequences of greater length discarded and context sequences of greater length truncated.
The decoder in the ED model is trained by using the true label sequence prefix as input, and a shifted label sequence as output (Sutskever et al., 2014). The partition function in the softmax over tokens is estimated using importance sampling with a unigram distribution over tokens as the proposal distribution (Jean et al., 2014). We sample 512 negative examples from Q(y) to estimate the partition function for the encoder-encoder model. See Figure 1 for connectivity and network size details.
All models were trained using Adagrad (Duchi et al., 2011) with an initial base learning rate of 0.1, which we exponentially decayed with a decade of 15 million steps. For stability, we clip the L2 norm of the gradients to a maximum magnitude of 1 as described in (Pascanu et al., 2012). All models are trained for 30 million steps with a mini-batch size of 64. The models are trained in a distributed manner on CPUs and NVidia GPUs using TensorFlow (Abadi et al., 2015).
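Reading "a decade of 15 million steps" as a tenfold decay of the learning rate every 15 million steps, the schedule corresponds to the following sketch (our interpretation, shown for clarity):

```python
def learning_rate(step, base_lr=0.1, decade_steps=15e6):
    # decay the base rate by a factor of 10 every `decade_steps` steps
    return base_lr * 10.0 ** (-step / decade_steps)
```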

Results
We first demonstrate the discrepancy between the local and global margin in the ED models, as discussed in Section 3.3. We used a beam size of 15 to get the top-1 prediction from our trained ED models on the test data and focused on the subset for which the top-1 prediction was incorrect. We measured the local and global margin between the top-1 predicted sequence (y−) and the correct test sequence (y+) as follows. The global margin is the difference in their full-sequence log probabilities. The local margin is the difference in the local token log-probability at the smallest position j where y−_j ≠ y+_j, that is, the local margin is log Pr(y+_j | y+_{1...j−1}, x, θ) − log Pr(y−_j | y+_{1...j−1}, x, θ). Note that the training loss of ED models directly compares only the local margin.
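Both margins can be computed from the per-position log-probabilities of the two sequences; a sketch, assuming a hypothetical `token_log_probs` helper that returns log Pr(seq[j] | seq[:j], x, θ) for every position of a given sequence:

```python
def margins(y_true, y_pred, token_log_probs):
    """Global and local margins between the correct sequence y_true and an
    incorrect top-1 prediction y_pred (both lists of token ids).

    token_log_probs(seq) -> list of log Pr(seq[j] | seq[:j], x, theta) for all j.
    Assumes the two sequences differ somewhere within their common prefix length.
    """
    lp_true, lp_pred = token_log_probs(y_true), token_log_probs(y_pred)
    global_margin = sum(lp_true) - sum(lp_pred)
    # smallest position where the prediction deviates from the truth
    j = next(i for i, (a, b) in enumerate(zip(y_true, y_pred)) if a != b)
    # score the wrong token in the context of the *true* prefix
    lp_wrong = token_log_probs(y_true[:j] + [y_pred[j]])[-1]
    local_margin = lp_true[j] - lp_wrong
    return global_margin, local_margin
```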
In Figure 3 we show the local and global margin as a 2D histogram, with color luminosity denoting frequency. We observe that the local margin values are much smaller in magnitude than the global margins. The prominent spine is for (y+, y−) pairs differing only in a single position, making the local and global margins equal. Most of the mass is below the spine. For a significant fraction of cases (27% for Reddit and 21% for Subtitles), the local margin is positive while the global margin is negative. That is, the ED loss for these sequences is small even though the log-probability of the correct sequence is much smaller than the log-probability of the predicted wrong sequence.
An interesting side observation from the plots in Figure 3 is that more than 98% of the wrong predictions have a negative global margin; that is, the score of the correct sequence is indeed lower than the score of the wrong prediction. Increasing the beam width beyond 15 is not likely to improve these models, since only in 1.9% and 1.7% of the cases is the correct score higher than the score of the wrong prediction.
In Figure 4 we show that this discrepancy is significantly more pronounced for longer sequences. In the figure we show the fraction of wrongly predicted sequences with a positive local margin. We find that as sequence length increases, we have more cases where the local margin is positive yet the global margin is negative. For example, for the Reddit dataset half of the wrongly predicted sequences have a positive local margin, indicating that the training loss was low for these sequences even though they were not adequately separated.

Next we show why this discrepancy leads to non-monotonic accuracies with increasing beam size. As the beam size increases, the predicted sequence has higher probability, and the accuracy is expected to increase if the trained probabilities are well-calibrated. In Figure 5 we plot the number of correct predictions (on a log scale) against the length of the correct sequence for beam sizes of 1, 5, 10, and 15. For small sequence lengths, we indeed observe that increasing the beam size produces more accurate results. For longer sequences, however, for both data sets, we observe that increasing the beam width beyond a certain threshold reduces accuracy.

We next compare the ED model with our globally conditioned encoder-encoder (EE) model. In Figure 6 we show the recall@K values for K = 1, 3, and 5 for the two datasets for increasing length of the correct sequence. We find the EE model is largely better than the ED model. The most interesting difference is that for sequences of length greater than 8, the ED model has a recall of zero even at K = 5 for both datasets. In contrast, the EE model manages to achieve significant recall even at large sequence lengths.

Related Work
In this paper we showed length bias in RNNs as deployed in encoder-decoder models and showed how to reduce it using global conditioning. Global conditioning has been proposed for other RNN-based sequence prediction tasks in (Yao et al., 2014) and (Andor et al., 2016). The RNN models that these works attempt to fix capture only a weak form of dependency among variables; for example, they assume x is seen incrementally and only adjacent labels in y are directly dependent. As proved in (Andor et al., 2016), these models are subject to label bias since they cannot represent a distribution that a globally conditioned model can. Thus, their fix for global dependency is to use a CRF. Such global conditioning would compromise an ED model, which does not assume any conditional independence among variables. The label-bias proof of (Andor et al., 2016) is not applicable to ED models because the proof rests on the entire input not being visible during output.
Our encoder-encoder network is reminiscent of the Siamese networks used in (Severyn and Moschitti, 2015) for ranking document pairs in a QA ranking task. One crucial difference is that in our case the two encoders play very different roles and share neither the structure nor the parameters. It is also similar to the work of (Al-Rfou et al., 2016), with the key difference being our focus on computing the normalized conditional probability, and our use of RNN encoders.
(Ranzato et al., 2016) also highlights limitations of the ED model and proposes to mix the ED loss with a sequence-level loss in a reinforcement learning framework under a carefully tuned schedule. Our method for global conditioning can capture sequence-level losses like the BLEU score more easily, but may also benefit from a similar mixed loss function.

Conclusion
We have shown that encoder-decoder models in the regime of finite data and parameters suffer from a length-bias problem. We have proved that this arises because the locally normalized models insufficiently separate correct sequences from incorrect ones, and have verified this empirically. Our proposed encoder-encoder architecture sidesteps this issue by operating in sequence probability space directly, yielding improved accuracy for longer sequences.
One weakness of our proposed architecture is that it cannot generate responses directly. An interesting direction for future work is to explore whether the ED model can be used to generate a candidate set of responses which are then re-ranked by our globally conditioned model. Another direction is to see if the techniques for making Bayesian networks discriminative can fix the length bias of encoder-decoder networks (Peharz et al., 2013; Guo et al., 2012).

Figure 2: Comparing the loss and margin of the ED model with a globally conditioned model on the example dataset of Section 3.3.

Figure 3: Local margin versus global margin for incorrectly predicted sequences. The color luminosity is proportional to frequency.

Figure 4: Fraction of incorrect predictions with positive local margin.

Figure 5: Effect of beam width on the number of correct predictions, broken down by sequence length.
Figure 1: Neural network architectures used in our experiments. The context encoder network is used in both the encoder-encoder and encoder-decoder models to encode the context sequence ('A') into a vector v_x. For the encoder-encoder model, the label sequence ('B') is encoded into v_y by the label encoder network. For the encoder-decoder network, the label sequence is decomposed using the chain rule by the decoder network.