Generative Bridging Network in Neural Sequence Prediction

Maximum Likelihood Estimation (MLE) is known to suffer from data sparsity in sequence prediction tasks. To alleviate data sparseness, we propose a novel framework that trains sequence models via a bridging process. Unlike MLE, which optimizes the sequence generator by directly maximizing the likelihood of the ground-truth sequence given the input, our proposed framework designs a bridge to connect the generator with the ground truth. During training, we first transform the point-wise ground truth into a bridge distribution under certain constraints, and then match the generator's output distribution to the transformed bridge distribution by minimizing their KL-divergence. By imposing different constraints, the bridge distribution adopts different properties. To increase output diversity, enhance language smoothness and lower the learning burden, we design three different regularization constraints that yield three different bridge distributions. Combining these bridges with a sequence generator, we build three parallel generative bridging networks, namely the uniform GBN, the language-model GBN and the coaching GBN. Experimental results on two well-studied sequence prediction tasks show that GBN yields significant improvements over baseline systems. Furthermore, we draw samples from the three bridge distributions to analyze their different properties and verify their influence on sequence model learning.


Introduction
Sequence prediction has been widely used in tasks where the outputs are sequentially structured and mutually dependent. Recently, massive explorations in this area have been made to solve practical problems, such as machine translation (Ma et al., 2017; Norouzi et al., 2016), syntactic parsing (Vinyals et al., 2015), spelling correction, image captioning (Xu et al., 2015) and speech recognition (Chorowski et al., 2015). Armed with modern computation power, deep LSTM (Hochreiter and Schmidhuber, 1997) or GRU (Chung et al., 2014) based neural sequence prediction models have achieved state-of-the-art performance.
The typical training algorithm for sequence prediction is Maximum Likelihood Estimation (MLE), which maximizes the likelihood of the target sequences conditioned on the source ones:

L_MLE(θ) = −log p_θ(Y*|X) = −Σ_t log p_θ(y*_t | y*_{1:t−1}, X)    (1)

Despite the popularity of MLE, or teacher forcing (Doya, 1992), in neural sequence prediction tasks, two general issues are always haunting: 1) data sparsity and 2) a tendency to overfit, both of which can harm model generalization.
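The teacher-forcing objective above can be made concrete on a toy example. The sketch below scores a ground-truth sequence under hard-coded per-step output distributions; the vocabulary and probability values are purely illustrative.

```python
import math

# Toy per-step output distributions of a sequence model over a 3-token
# vocabulary {0: "<eos>", 1: "a", 2: "b"}; the numbers are illustrative only.
step_probs = [
    {1: 0.7, 2: 0.2, 0: 0.1},  # p(y_1 | X)
    {1: 0.1, 2: 0.6, 0: 0.3},  # p(y_2 | y_1, X)
    {1: 0.1, 2: 0.1, 0: 0.8},  # p(y_3 | y_1, y_2, X)
]

def mle_loss(target):
    """Negative log-likelihood under teacher forcing: at every step the true
    prefix is assumed fed in, and the true next token is scored."""
    return -sum(math.log(p[y]) for p, y in zip(step_probs, target))

loss = mle_loss([1, 2, 0])  # -log(0.7) - log(0.6) - log(0.8)
```

Because only the single ground-truth sequence ever receives gradient signal, every other plausible output is treated as equally wrong, which is exactly the sparsity issue the bridging framework targets.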
To combat data sparsity, different strategies have been proposed. Most of them take advantage of monolingual data (Sennrich et al., 2015; Zhang and Zong, 2016; Cheng et al., 2016). Others modify the ground-truth target based on derived rules to obtain more similar examples for training (Norouzi et al., 2016; Ma et al., 2017). To alleviate overfitting, regularization techniques such as confidence penalization (Pereyra et al., 2017) and posterior regularization (Zhang et al., 2017) have been proposed recently.
As shown in Figure 1, we propose a novel learning architecture, titled Generative Bridging Network (GBN), to combine the benefits of both synthetic data and regularization. Within the architecture, the bridge module (bridge) first transforms the point-wise ground truth into a bridge distribution, which can be viewed as a target proposer from which more target examples are drawn to train the generator. By introducing different constraints, the bridge can be set or trained to possess specific properties, so that the drawn samples augment target-side data (alleviating data sparsity) while regularizing the training (avoiding overfitting) of the generator network (generator).
In this paper, we introduce three different constraints to build three bridge modules. Together with the generator network, three GBN systems are constructed: 1) a uniform GBN, instantiating the constraint as a uniform distribution to penalize confidence; 2) a language-model GBN, instantiating the constraint as a pre-trained neural language model to increase language smoothness; 3) a coaching GBN, instantiating the constraint as the generator's output distribution to seek a close-to-generator distribution, which enables the bridge to draw easy-to-learn samples for the generator. Without any constraint, our GBN degrades to MLE. The uniform GBN is proved to minimize the KL-divergence with the so-called payoff distribution of reward augmented maximum likelihood, or RAML (Norouzi et al., 2016).
Experiments are conducted on two sequence prediction tasks, namely machine translation and abstractive text summarization. On both, our proposed GBNs significantly improve task performance over strong baselines, with the coaching GBN performing best. Samples from the three different bridges are presented to confirm the expected impact each has on the training of the generator. In summary, our contributions are:
• A novel GBN architecture is proposed for sequence prediction to alleviate the data sparsity and overfitting problems, in which the bridge module and the generator network are integrated and jointly trained.
• Different constraints are introduced to build the GBN variants: uniform GBN, language-model GBN and coaching GBN. Our GBN architecture is shown to be a generalized form of both MLE and RAML.
• All proposed GBN variants outperform the MLE baselines on machine translation and abstractive text summarization. In the translation task, the relative improvements are comparable to those of recent state-of-the-art methods. We also qualitatively demonstrate the advantage of our GBNs by comparing the ground truth with samples from the bridges.

Generative Bridging Network
In this section, we first give a conceptual interpretation of our novel learning architecture, which is sketched in Figure 2. Data augmentation and regularization are two standard remedies for data sparsity and overfitting, and we wish to design an architecture that integrates both of their benefits. The basic idea is to use a so-called bridge to transform Y* into an easy-to-sample distribution, and then to use this distribution (its samples) to train, and meanwhile regularize, the sequence prediction model (the generator). The bridge is viewed as a conditional distribution p_η(Y|Y*) that proposes more targets Y given Y*, so as to construct more training pairs (X, Y). In the meantime, we can inject (empirical) prior knowledge into the bridge through its optimization objective, whose design is inspired by the payoff distribution in RAML. We formulate the objective with two parts in Equation (2): a) an expected similarity score computed through a similarity score function S(·, Y*), interpolated with b) a knowledge-injection constraint C(p_η(Y|Y*), p_c(Y)), where α controls the strength of the regularization. Formally, we write the objective function L_B(η) as:

L_B(η) = E_{Y∼p_η(Y|Y*)}[−S(Y, Y*)] + α · C(p_η(Y|Y*), p_c(Y))    (2)

Minimizing it empowers the bridge distribution not only to concentrate its mass around the ground truth Y* but also to adopt the desired property of p_c(Y). With the constructed bridge distribution, we optimize the generator network p_θ(Y|X) to match its output distribution to the bridge distribution by minimizing their KL-divergence:

L_G(θ) = KL(p_η(Y|Y*) ‖ p_θ(Y|X))    (3)

In practice, the KL-divergence is approximated through a sampling process detailed in Sec. 2.3. As a matter of fact, the bridge is the crux of the integration: it synthesizes new targets to alleviate data sparsity and then uses the synthetic data to regularize training and overcome overfitting. The result is a regularization-by-synthetic-example approach, very similar in spirit to the prior-incorporation-by-virtual-example method (Niyogi et al., 1998).
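The overall training scheme can be sketched as follows. This is a minimal toy version with a hypothetical noise-based bridge and a stubbed generator update; it only illustrates the control flow of proposing bridge samples and training the generator on them, which approximates minimizing the KL-divergence via sampling.

```python
import random

def bridge_sample(y_star, noise=0.2):
    """Toy bridge: returns Y* with some tokens randomly perturbed, standing in
    for a draw from the bridge distribution p_eta(Y | Y*)."""
    return [t if random.random() > noise else random.choice("abcd")
            for t in y_star]

def train_step(generator_update, x, y_star, num_samples=4):
    """One GBN step: draw several targets Y from the bridge around Y*, then
    feed each synthetic pair (X, Y) to an MLE-style generator update."""
    for _ in range(num_samples):
        y = bridge_sample(list(y_star))
        generator_update(x, y)

# Usage with a stub update that just records the pairs it would train on.
seen = []
train_step(lambda x, y: seen.append((x, "".join(y))), "src", "abba")
```

Note that the generator never needs to evaluate the bridge's density here: maximizing the likelihood of bridge samples is the sampled approximation of the KL objective.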

Generator Network
Our generator network is parameterized with the commonly used attentive encoder-decoder architecture. The encoder encodes the input sequence X into a sequence of hidden states, over which an attention mechanism computes context vectors at the decoding stage. At each time step, the context vector, the previous decoder hidden state and the previously predicted label are used to compute the next hidden state and predict an output label. As stated in Equation (3), the generator network is not trained to maximize the likelihood of the ground truth, but instead to match the bridge distribution, which acts as a delegate of the ground truth. We use gradient descent to optimize the KL-divergence with respect to the generator:

∇_θ L_G(θ) = −E_{Y∼p_η(Y|Y*)}[∇_θ log p_θ(Y|X)]    (4)

The optimization process can thus be viewed as the generator maximizing the likelihood of samples drawn from the bridge. This may alleviate data sparsity and overfitting by exposing the generator to more unseen scenarios, and may help it generalize better at test time. (While the constraint C is instantiated in this paper as a KL-divergence from a constraint distribution p_c, we believe the mathematical form of C is not restricted, which could motivate further development.)

Bridge Module
Our bridge module is designed to transform a single target example Y* into a bridge distribution p_η(Y|Y*). Its optimization target in Equation (2) consists of two terms, namely a concentration requirement and a constraint. The constraint is instantiated as the KL-divergence between the bridge and a constraint distribution p_c(Y). We transform Equation (2) as follows, which is convenient for the mathematical manipulation later:

L_B(η) = E_{Y∼p_η(Y|Y*)}[−S(Y, Y*)/τ] + α · KL(p_η(Y|Y*) ‖ p_c(Y))    (5)

Here S(Y, Y*) is a predefined score function which measures the similarity between Y and Y* and peaks when Y = Y*, while p_c(Y) reshapes the bridge distribution. More specifically, the first term ensures that the bridge concentrates around the ground truth Y*, and the second introduces a desired property which can help regularize the generator. The hyper-parameter τ can be interpreted as a temperature which scales the score function. In the following bridge specifications, the score function S(Y, Y*) is instantiated as described in Sec. 3.1.
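The two-term bridge objective can be evaluated directly on a small discrete example. The sketch below computes the expected-score term and the KL term over a hypothetical three-candidate output space; all distributions and similarity values are illustrative.

```python
import math

# Toy evaluation of the bridge objective over a tiny candidate set.
candidates = ["abba", "abca", "bbba"]
S = {"abba": 1.0, "abca": 0.5, "bbba": 0.25}        # similarity to Y* = "abba"
p_c = {y: 1 / 3 for y in candidates}                # uniform constraint dist.

def bridge_loss(p_eta, alpha=1.0, tau=0.8):
    """L_B = E_{Y~p_eta}[-S(Y,Y*)/tau] + alpha * KL(p_eta || p_c)."""
    expected_score = sum(p_eta[y] * S[y] / tau for y in candidates)
    kl = sum(p_eta[y] * math.log(p_eta[y] / p_c[y]) for y in candidates)
    return -expected_score + alpha * kl

# A bridge concentrated near Y* pays a KL penalty but earns a higher score.
loss_uniform = bridge_loss(p_c)
loss_peaked = bridge_loss({"abba": 0.6, "abca": 0.3, "bbba": 0.1})
```

With α = 0 the KL term vanishes and the optimum collapses onto Y*, which is the delta-bridge (MLE) case discussed next.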

Delta Bridge
The delta bridge is the simplest case, in which α = 0 and no constraint is imposed. The bridge then seeks to minimize E_{Y∼p_η(Y|Y*)}[−S(Y, Y*)/τ]. The optimum is attained when the bridge only samples Y*, i.e. the Dirac delta distribution:

p_η(Y|Y*) = δ(Y|Y*)

This exactly corresponds to MLE, where only examples in the dataset are used to train the generator. We regard this case as our baseline.

Uniform Bridge
The uniform bridge adopts a uniform distribution U(Y) as its constraint. This bridge is motivated to inject noise into the target example, similar to label smoothing (Szegedy et al., 2016). Taking α = 1, the loss function is:

L_B(η) = E_{Y∼p_η(Y|Y*)}[−S(Y, Y*)/τ] + KL(p_η(Y|Y*) ‖ U(Y))

Adding a constant, which does not change the optimization result, this can be re-written as a single KL-divergence:

L_B(η) = KL(p_η(Y|Y*) ‖ exp(S(Y, Y*)/τ) · U(Y) / Z) + const

This bridge is static, having the closed-form solution

p_η(Y|Y*) = exp(S(Y, Y*)/τ) / Z

where Z is the partition function (the uniform factor is absorbed into Z). Note that our uniform bridge corresponds to the payoff distribution described in RAML (Norouzi et al., 2016).
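The closed-form uniform bridge (the RAML payoff distribution) is easy to compute exactly on a finite candidate set. The sketch below exponentiates similarity scores at temperature τ and renormalizes; the candidate strings and scores are toy values.

```python
import math

def payoff_distribution(scores, tau=0.8):
    """Closed-form uniform bridge: p(Y) proportional to exp(S(Y, Y*) / tau),
    normalized by the partition function Z over the candidate set."""
    weights = {y: math.exp(s / tau) for y, s in scores.items()}
    z = sum(weights.values())  # partition function
    return {y: w / z for y, w in weights.items()}

# Candidates closer to Y* (higher similarity score) receive more mass.
p = payoff_distribution({"abba": 1.0, "abca": 0.5, "bbba": 0.25})
```

Lowering τ sharpens the distribution toward the ground truth; as τ → 0 it approaches the delta bridge.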

Language-model (LM) Bridge
The LM bridge utilizes a pretrained neural language model p_LM(Y) as its constraint, and is motivated to propose target examples that conform to language fluency.
Similar to the uniform bridge, we can re-write the loss function as a KL-divergence:

L_B(η) = KL(p_η(Y|Y*) ‖ exp(S(Y, Y*)/τ) · p_LM(Y) / Z) + const

Thus the LM bridge is also static and can be seen as an extension of the uniform bridge, in which the exponentiated similarity score is re-weighted by a pretrained LM score and renormalized:

p_η(Y|Y*) = p_LM(Y) · exp(S(Y, Y*)/τ) / Z

where Z is the partition function. This resembles the payoff distribution, with an additional language-model factor taken into account.
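The LM re-weighting is a one-line change to the payoff computation. In the sketch below the language-model probabilities are hard-coded, hypothetical values standing in for a pretrained LM's sequence scores.

```python
import math

def lm_bridge(scores, lm_probs, tau=0.8):
    """LM bridge: the payoff weight exp(S/tau) is multiplied by a
    language-model score p_LM(Y), then renormalized over the candidates."""
    weights = {y: lm_probs[y] * math.exp(s / tau) for y, s in scores.items()}
    z = sum(weights.values())  # partition function
    return {y: w / z for y, w in weights.items()}

# Two candidates with equal similarity: the more fluent one (higher hard-coded
# LM probability) ends up with proportionally more bridge mass.
p = lm_bridge({"a": 1.0, "b": 1.0}, {"a": 0.8, "b": 0.2})
```

When p_LM is uniform this reduces exactly to the uniform bridge.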

Coaching Bridge
The coaching bridge utilizes the generator's output distribution as its constraint, and is motivated to generate training samples that are easy for the generator to understand, so as to relieve its learning burden. The coaching bridge follows the same spirit as the coach proposed in imitation learning by coaching (He et al., 2012), which, in reinforcement learning vocabulary, advocates guiding the policy (generator) with easy-to-learn action trajectories and letting it gradually approach the oracle when the optimal action is hard to achieve.
Since the KL constraint is a moving target as the generator is updated, the coaching bridge cannot remain static. Therefore, we perform iterative optimization to train the bridge and the generator jointly. Formally, the derivatives for the coaching bridge are:

∇_η L_B(η) = −E_{Y∼p_η(Y|Y*)}[∇_η log p_η(Y|Y*) · S(Y, Y*)/τ] + α · ∇_η KL(p_η(Y|Y*) ‖ p_θ(Y|X))    (14)

The first term corresponds to the policy-gradient algorithm of REINFORCE (Williams, 1992), where the coefficient S(Y, Y*)/τ plays the role of the reward function. Due to the mutual dependence between the bridge module and the generator network, we design an iterative training strategy: the two networks take turns updating their own parameters while treating the other as fixed.
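The REINFORCE-style first term can be estimated by sampling. The sketch below does this for a toy categorical bridge parameterized by logits over a fixed candidate list, using the score-function identity ∇ log p(i) = 1[i = j] − p_j for a softmax; the candidates, scores and sample budget are illustrative assumptions.

```python
import math
import random

candidates = ["abba", "abca", "bbba"]
S = {"abba": 1.0, "abca": 0.5, "bbba": 0.25}  # toy similarity to Y* = "abba"

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def reinforce_grad(logits, tau=0.8, num_samples=2000, rng=random.Random(0)):
    """Monte-Carlo estimate of d/d_logits E_{Y~p_eta}[-S(Y, Y*)/tau]."""
    probs = softmax(logits)
    grad = [0.0] * len(logits)
    for _ in range(num_samples):
        i = rng.choices(range(len(candidates)), weights=probs)[0]
        reward = S[candidates[i]] / tau
        for j in range(len(logits)):
            # d log p(i) / d logit_j = (1[i == j] - p_j); negate for -reward
            grad[j] += -reward * ((1.0 if i == j else 0.0) - probs[j]) / num_samples
    return grad

grad = reinforce_grad([0.0, 0.0, 0.0])
```

Gradient descent on this estimate pushes mass toward high-reward candidates: the logit of the best candidate gets a negative gradient (so it increases), the worst a positive one.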

Training
The training of the above three variants is illustrated in Figure 3. The proposed bridges divide into static ones (uniform and LM), which only require pre-training, and a dynamic one (coaching), which requires continual training with the generator; we describe the two training processes in turn.

Stratified-Sampled Training
Since closed-form optimal distributions exist for the uniform/LM GBNs, we only need to draw samples from these static bridge distributions to train our sequence generator. Unfortunately, due to the intractability of these bridge distributions, direct sampling is infeasible. Therefore, we follow Norouzi et al. (2016) and Ma et al. (2017) and adopt stratified sampling to approximate the direct sampling process. Given a sentence Y*, we first sample an edit distance m, and then randomly select m positions to replace the original tokens. The difference between the uniform and LM bridges is that the uniform bridge draws substitutions from a uniform distribution, while the LM bridge conditions on the history and draws substitutions from its step-wise distribution.
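The stratified sampling procedure for the uniform bridge can be sketched directly; the sentence, vocabulary and edit-distance cap below are toy assumptions. (An LM-bridge sampler would differ only in drawing each substitution from a language model conditioned on the preceding tokens instead of uniformly.)

```python
import random

def uniform_bridge_sample(y_star, vocab, max_edits=3, rng=None):
    """Stratified sampling sketch: sample an edit distance m, then replace
    m distinct random positions of Y* with tokens drawn uniformly from vocab."""
    rng = rng or random.Random(42)
    y = list(y_star)
    m = rng.randint(0, min(max_edits, len(y)))   # sampled edit distance
    for pos in rng.sample(range(len(y)), m):     # m distinct positions
        y[pos] = rng.choice(vocab)
    return y

vocab = ["the", "cat", "sat", "down", "dog", "ran"]
sample = uniform_bridge_sample(["the", "cat", "sat", "down"], vocab)
```

Each call yields one synthetic target near Y*; repeated calls over the training set approximate sampling from the payoff distribution.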

Iterative Training
Since the KL constraint is a moving target for the coaching bridge, an iterative training strategy is designed to alternately update the generator and the bridge (Algorithm 1). We first pre-train both the generator and the bridge, and then start to alternately update their parameters. Figure 4 intuitively demonstrates the intertwined optimization effects on the coaching bridge and the generator. We hypothesize that iterative training with easy-to-learn guidance benefits the gradient updates and thus leads to a better local minimum.
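The alternating scheme can be sketched as a simple control loop; the update callbacks are hypothetical stand-ins for the actual gradient steps of Algorithm 1.

```python
def iterative_training(bridge_update, generator_update, data, num_rounds=3):
    """Alternating optimization sketch: in each round, for each training pair,
    the bridge is updated (treating the generator as fixed), then the
    generator is updated on fresh bridge samples (treating the bridge as
    fixed)."""
    history = []
    for r in range(num_rounds):
        for x, y_star in data:
            bridge_update(y_star)    # bridge step toward Eq. (14)'s objective
            generator_update(x)      # generator step on refreshed bridge samples
            history.append(r)
    return history

# Usage with stub updates that just count how often they are invoked.
calls = {"bridge": 0, "gen": 0}
history = iterative_training(
    lambda y: calls.__setitem__("bridge", calls["bridge"] + 1),
    lambda x: calls.__setitem__("gen", calls["gen"] + 1),
    data=[("x1", "y1"), ("x2", "y2")],
    num_rounds=3,
)
```

In practice each "round" would be a batch or epoch of SGD steps rather than a single call per example.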

Experiment
We select machine translation and abstractive text summarization as benchmarks to verify our GBN framework.

Similarity Score Function
In our experiments, instead of directly using BLEU or ROUGE as the reward to guide the bridge network's policy search, we design a simple surrogate n-gram matching reward:

S(Y, Y*) = Σ_{n=1}^{4} N_n(Y, Y*)    (15)

where N_n represents the n-gram matching score between Y and Y*. In order to alleviate reward sparsity at the sequence level, we further decompose the global reward S(Y, Y*) into a series of local rewards at every time step. Formally, we write the step-wise reward s(y_t | y_{1:t−1}, Y*) as:

s(y_t | y_{1:t−1}, Y*) = Σ_{n=1}^{4} 1[ N(y_{1:t}, y_{t−n+1:t}) ≤ N(Y*, y_{t−n+1:t}) ]    (16)

where N(Y, Ỹ) represents the number of occurrences of the sub-sequence Ỹ in the whole sequence Y. Specifically, if a certain sub-sequence y_{t−n+1:t} of Y appears no more times than in the reference Y*, then y_t receives a reward. The step-level gradient for each sampled Y is rewritten accordingly, with each ∇_η log p_η(y_t | y_{1:t−1}, Y*) weighted by its step-wise reward s(y_t | y_{1:t−1}, Y*)/τ.

Figure 4: Four iterative updates of the coaching bridge and the generator. In an early stage, the pre-trained generator p_θ may not put mass on some ground-truth target points within the output space. The coaching bridge is first updated with Equation (14) to locate in between the Dirac delta distribution and the generator's output distribution. Then, by sampling from the coaching bridge to approximate Equation (4), target samples which demonstrate easy-to-learn sequence segments facilitate the generator's optimization towards closeness with the coaching bridge. This process repeats until the generator converges.

Algorithm 1: Training Coaching GBN
procedure PRE-TRAINING
    Initialize p_θ(Y|X) and p_η(Y|Y*) with random weights θ and η
    Pre-train p_θ(Y|X) to predict Y* given X
    Use the pre-trained p_θ(Y|X) to generate Ŷ given X
    ...
            Update the generator via Equation (4)
        end if
    end while
end procedure
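A clipped n-gram matching score of this kind can be sketched as follows; the exact weighting of the paper's surrogate reward is assumed here to be a plain sum of clipped matches over n = 1..4.

```python
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams of a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def ngram_match_score(y, y_star, max_n=4):
    """Sum over n = 1..max_n of clipped n-gram matches between hypothesis Y
    and reference Y*: each hypothesis n-gram counts at most as often as it
    appears in the reference."""
    score = 0
    for n in range(1, max_n + 1):
        hyp, ref = ngrams(y, n), ngrams(y_star, n)
        score += sum(min(count, ref[g]) for g, count in hyp.items())
    return score
```

Clipping via `min` prevents a degenerate hypothesis from inflating its reward by repeating a matching n-gram more often than the reference contains it.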

Machine Translation
Dataset We follow Ranzato et al. (2015) and Bahdanau et al. (2016) and select the German-English machine translation track of the IWSLT 2014 evaluation campaign. The corpus contains sentence-wise aligned subtitles of TED and TEDx talks. We use the Moses toolkit (Koehn et al., 2007) for tokenization and lowercasing, and remove sentences longer than 50 words. The evaluation metric is BLEU (Papineni et al., 2002), computed via multi-bleu.perl.
System Setting We use a unified GRU-based RNN (Chung et al., 2014) for both the generator and the coaching bridge. In order to compare with existing papers, we use a similar system setting, with 512 RNN hidden units and an embedding size of 256, and build our system on an attentive encoder-decoder. During training, we apply ADADELTA (Zeiler, 2012) with ε = 10^-6 and ρ = 0.95 to optimize the parameters of the generator and the coaching bridge. During decoding, a beam size of 8 is used to approximate the full search space. An important hyper-parameter for our experiments is the temperature τ. For the uniform/LM bridges, we follow Norouzi et al. (2016) and adopt the optimal temperature τ = 0.8. For the coaching bridge, we test τ ∈ {0.8, 1.0, 1.2}.

Results
The experimental results are summarized in Table 1. We can observe that our fine-tuned MLE baseline (29.10) already outperforms competing systems, and that our proposed GBN yields a further improvement. We also observe that the LM GBN and the coaching GBN both achieve better performance than the uniform GBN, which confirms that better regularization effects are achieved and that the generators become more robust and generalize better. We draw the learning curves of both the bridge and the generator in Figure 5 to demonstrate how they cooperate during training. We can easily observe the interaction between them: as the generator makes progress, the coaching bridge also improves itself to propose harsher targets for the generator to learn.

Abstractive Text Summarization
Dataset We follow the previous works of Rush et al. (2015) and Zhou et al. (2017) and use the same corpus from the Annotated English Gigaword dataset (Napoles et al., 2012). In order to be comparable, we use the same script released by Rush et al. (2015) to pre-process and extract the training and validation sets. For the test set, we use the English Gigaword test set released by Rush et al. (2015), and evaluate our system with ROUGE (Lin, 2004). Following previous works, we report ROUGE-1, ROUGE-2 and ROUGE-L as the evaluation metrics. System Setting We follow Zhou et al. (2017) and Rush et al. (2015) in setting the input and output vocabularies to 119,504 and 68,883 respectively, the word embedding size to 300, and all GRU hidden state sizes to 512. We adopt dropout (Srivastava et al., 2014) with probability p = 0.5 in the output layer. We use an attention-based sequence-to-sequence model as our baseline and reproduce the baseline results reported in Zhou et al. (2017). As stated there, the attentive encoder-decoder architecture already outperforms the existing ABS/ABS+ systems (Rush et al., 2015). For the coaching GBN, because the input X of abstractive summarization contains more information than the summary target Y*, directly training the bridge p_η(Y|Y*) to understand the generator p_θ(Y|X) is infeasible. Therefore, we re-design the coaching bridge to receive both the source and target inputs X, Y, and enlarge its vocabulary size to 88,883 to encompass more information about the source side. In the uniform/LM GBN experiments, we again fix the hyper-parameter τ = 0.8 as the optimal setting.

Results
The experimental results are summarized in Table 2. We observe a significant improvement from our GBN systems. As before, the coaching GBN achieves the strongest performance of all, which again supports our assumption that more sophisticated regularization can benefit the generator's training. We draw the learning curve of the coaching GBN in Figure 6 to demonstrate how the bridge and the generator promote each other.
By introducing different constraints into the bridge module, the bridge distribution proposes different training samples for the generator to learn. From Table 3, we can observe that most samples still preserve their original meaning. The uniform bridge simply performs random replacement without considering any linguistic constraint. The LM bridge strives to smooth the reference sentence with high-frequency words. And the coaching bridge simplifies difficult expressions to relieve the generator's learning burden. From our experimental results, the more rational and aggressive diversification of the coaching GBN clearly benefits the generator most and helps it generalize to more unseen scenarios.
Related Literature

Data Augmentation and Self-training
In order to resolve the data sparsity problem in Neural Machine Translation (NMT), many works have been conducted to augment the dataset. The most popular strategy is via self-learning, which incorporates the self-generated data directly into training. Zhang and Zong (2016) and Sennrich et al. (2015) both use self-learning to leverage massive monolingual data for NMT training. Our bridge can take advantage of the parallel training data only, instead of external monolingual ones to synthesize new training data.

Reward Augmented Maximum Likelihood
Reward augmented maximum likelihood, or RAML (Norouzi et al., 2016), proposes to integrate task-level reward into MLE training by using an exponentiated payoff distribution. The KL-divergence between the payoff distribution and the generator's output distribution is minimized to achieve an optimal task-level reward. Following this work, Ma et al. (2017) introduce a softmax Q-distribution to interpret RAML and reveal its relation to Bayesian decision theory. Both works alleviate the data sparsity problem by augmenting target examples based on the ground truth. Our method draws inspiration from them but proposes the more general Generative Bridging Network, which can transform the ground truth into different bridge distributions, from which samples are drawn to account for different interpretable factors.

Coaching
Our coaching GBN system is inspired by imitation learning by coaching (He et al., 2012). Instead of directly behavior-cloning the oracle, they advocate learning hope actions as targets from a coach that interpolates between the learner's policy and the environment loss. As the learner makes progress, the targets provided by the coach become harsher, gradually improving the learner. Similarly, our proposed coaching GBN is motivated to construct an easy-to-learn bridge distribution which lies in between the ground truth and the generator. Our experimental results confirm its effectiveness in relieving the learning burden.

Conclusion
In this paper, we present the Generative Bridging Network (GBN) to overcome the data sparsity and overfitting issues of Maximum Likelihood Estimation in neural sequence prediction. Our implemented systems significantly improve performance compared with strong baselines. We believe the concept of a bridge distribution is applicable to a wide range of distribution-matching tasks in probabilistic learning. In the future, we intend to explore more of GBN's applications, as well as its provable computational and statistical guarantees.