Regularized Context Gates on Transformer for Machine Translation

Context gates are effective at controlling the contributions from the source and target contexts in recurrent neural network (RNN) based neural machine translation (NMT). However, it is challenging to extend them to the advanced Transformer architecture, which is more complicated than an RNN. This paper first provides a method to identify the source and target contexts, and then introduces a gate mechanism to control their contributions in the Transformer. In addition, to further reduce the bias problem in the gate mechanism, this paper proposes a regularization method that guides the learning of the gates with supervision automatically generated using pointwise mutual information. Extensive experiments on 4 translation datasets demonstrate that the proposed model obtains an average gain of 1.0 BLEU score over a strong Transformer baseline.


Introduction
The essence of modeling translation is learning an effective context from a sentence pair. Statistical machine translation (SMT) models the source context with the source side of a translation model and the target context with a target-side language model (Koehn et al., 2003; Koehn, 2009; Chiang, 2005). These two models are trained independently. In contrast, neural machine translation (NMT) advocates jointly learning the source and target contexts in a unified manner, using an encoder-decoder framework with an attention mechanism, which leads to substantial gains over SMT in translation quality (Sutskever et al., 2014; Bahdanau et al., 2014; Gehring et al., 2017; Vaswani et al., 2017). Prior work on attention mechanisms (Luong et al., 2015; Liu et al., 2016; Mi et al., 2016; Chen et al., 2018; Li et al., 2018; Elbayad et al., 2018) has shown that a better source context representation is helpful to translation performance. However, a standard NMT system is incapable of effectively controlling the contributions from the source and target contexts (He et al., 2018) to deliver highly adequate translations, as shown in Figure 1. As a result, Tu et al.
(2017) carefully designed context gates to dynamically control the influence from the source and target contexts, and observed significant improvements in recurrent neural network (RNN) based NMT. Although the Transformer (Vaswani et al., 2017) delivers significant gains over RNNs for translation, one third of its translation errors are still related to the context control problem, as described in Section 3.3. It seems feasible to extend the context gates of RNN based NMT to the Transformer, but an obstacle to accomplishing this goal is the Transformer's complicated architecture, in which source and target words are tightly coupled. Thus, it is challenging to put context gates into practice in the Transformer.
In this paper, under the Transformer architecture, we first provide a way to define the source and target contexts, and then obtain our model by combining both contexts with context gates, which actually induces a probabilistic model indicating whether the next generated word is contributed by the source or the target sentence (Li et al., 2019). In our preliminary experiments, this model achieved only modest gains over the Transformer, because the reduction in context selection errors was very limited, as described in Section 3.3. To further address this issue, we propose a probabilistic model whose loss function is derived from external supervision acting as regularization for the context gates. This probabilistic model is jointly trained with the context gates in NMT. As it is too costly to manually annotate this supervision for a large-scale training corpus, we instead propose a simple yet effective method to generate it automatically using pointwise mutual information, inspired by word collocation (Bouma, 2009). In this way, the resulting NMT model is capable of effectively controlling the contributions from the source and target contexts.
We conduct extensive experiments on 4 benchmark datasets, and the results demonstrate that the proposed gated model obtains an average improvement of 1.0 BLEU point over the corresponding strong Transformer baselines. In addition, we design a novel analysis to show that the improvement in translation performance is indeed caused by relieving the problem of wrongly focusing on the source or target context.

Methodology
Given a source sentence x = x_1, ..., x_{|x|} and a target sentence y = y_1, ..., y_{|y|}, our proposed model is defined by the following conditional probability under the Transformer architecture:

    P(y_i | y_{<i}, x) = softmax(W c_i^L),    (1)

where y_{<i} = y_1, ..., y_{i-1} denotes a prefix of y with length i - 1, and c_i^L denotes the L-th layer context in the decoder with L layers, which is obtained from the representation of y_{<i} and h^L, i.e., the top-layer hidden representation of x, similar to the original Transformer. To finish the overall definition of our model in equation 1, we expand the definition of c_i^L based on context gates in the following subsections.

Context Gated Transformer
To develop context gates for our model, it is necessary to first define the source and target contexts. Unlike the case of an RNN, the source sentence x and the target prefix y_{<i} are tightly coupled in our model, and thus it is not trivial to define the source and target contexts.
Suppose the source and target contexts at each layer l are denoted by s_i^l and t_i^l. We recursively define them from c_{<i}^{l-1} as follows:

    t_i^l = (ln ∘ rn ∘ att)(c_i^{l-1}, c_{<i}^{l-1}),
    s_i^l = (ln ∘ rn ∘ att)(t_i^l, h^L),    (2)

where ∘ is functional composition, att(q, kv) denotes multi-head attention with q as query, k as key, and v as value, rn is a residual network (He et al., 2016), ln is layer normalization (Ba et al., 2016), and all parameters are omitted for simplicity.
In order to control the contributions from the source and target sides, we define c_i^l by introducing a context gate z_i^l to combine s_i^l and t_i^l as follows:

    c_i^l = (ln ∘ rn ∘ ff)( (1 - z_i^l) ⊗ t_i^l + z_i^l ⊗ s_i^l ),    (3)
    z_i^l = σ( ff( s_i^l ⊕ t_i^l ) ),    (4)

where ff denotes a feedforward neural network, ⊕ denotes concatenation, σ(·) denotes the sigmoid function, and ⊗ denotes element-wise multiplication. z_i^l is a vector (Tu et al. (2017) reported that a gating vector is better than a gating scalar).
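The gated combination can be sketched in a few lines of plain Python. This is a minimal illustration, not the model code: the real gate feeds the concatenation [s; t] through a full feedforward layer, whereas here each coordinate uses its own scalar weights and bias (the function name and parameters are illustrative).

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_context(s, t, w_s, w_t, b):
    """Element-wise context gate: z = sigmoid(ff([s; t])) simplified
    to per-coordinate weights; output c = z * s + (1 - z) * t."""
    z = [sigmoid(ws * si + wt * ti + bi)
         for si, ti, ws, wt, bi in zip(s, t, w_s, w_t, b)]
    # Highway-style mix: the gate decides, per dimension, how much of
    # the source context s vs. the target context t flows through.
    c = [zi * si + (1.0 - zi) * ti for zi, si, ti in zip(z, s, t)]
    return z, c
```

With a strongly positive bias the gate saturates toward the source context, and with a strongly negative bias toward the target context, which is exactly the behavior the regularizer later supervises.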
Note that each component of z_i^l actually induces a probabilistic model indicating whether the next generated word y_i is mainly contributed by the source (x) or the target sentence (y_{<i}), as shown in Figure 1.
Remark It is worth mentioning that our proposed model boils down to the standard Transformer with a residual connection replaced by a highway connection (Srivastava et al., 2015): if we replace (1 - z_i^l) ⊗ t_i^l + z_i^l ⊗ s_i^l in equation 3 by t_i^l + s_i^l, the proposed model reduces to the Transformer.

Regularization of Context Gates
In our preliminary experiments, we found that learning context gates from scratch cannot effectively reduce context selection errors, as described in Section 3.3.
To address this issue, we propose a regularization method to guide the learning of context gates with external supervision z_i^*, a binary value representing whether y_i is contributed by the source (z_i^* = 1) or the target sentence (z_i^* = 0). Formally, the training objective is defined as follows:

    ℓ(θ) = - log P(y | x; θ) + λ Σ_{l=1}^{L} Σ_{i=1}^{|y|} || z_i^l - z_i^* ||^2,    (5)

where z_i^l is the context gate defined in equation 4 and λ is a hyperparameter to be tuned in experiments. Note that we only regularize the gates during training; the regularization is skipped during inference.
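The objective above can be sketched numerically. The following is a minimal illustration (not the training code), assuming per-position gate vectors and binary labels, and reading the regularizer as a squared distance between every gate component and z_i^*; names and the exact penalty form are assumptions for the sketch.

```python
def regularized_loss(nll, gates, z_star, lam=1.0):
    """Translation NLL plus lambda-weighted gate regularization.

    gates[l][i] is the gate vector z_i^l at layer l, position i;
    z_star[i] is the binary supervision z*_i for position i."""
    reg = 0.0
    for layer in gates:
        for z_vec, zs in zip(layer, z_star):
            # Pull every component of the gate toward the binary label.
            reg += sum((zj - zs) ** 2 for zj in z_vec)
    return nll + lam * reg
```

At inference time only the first term (the translation model) is used, matching the note that regularization is applied during training only.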
Because gold values of z_i^* are inaccessible for each word y_i in the training corpus, we would ideally have to annotate them manually. However, it is too costly for humans to label such a large-scale dataset. Instead, we propose an automatic method to generate these values in practice, described in the next subsection.

Generating Supervision z * i
To decide whether y_i is contributed by the source (x) or the target sentence (y_{<i}) (Li et al., 2019), a metric measuring the correlation between a pair of words (⟨y_i, x_j⟩, or ⟨y_i, y_k⟩ for k < i) is first required. This is closely related to a well-studied problem, i.e., word collocation (Liu et al., 2009), and we simply employ pointwise mutual information (PMI) to measure the correlation between a word pair ⟨µ, ν⟩, following Bouma (2009):

    pmi(µ, ν) = log [ P(µ, ν) / ( P(µ) P(ν) ) ],    (6)

where the probabilities are estimated from the word counts C(µ) and C(ν), the co-occurrence count C(µ, ν) of words µ and ν, and the normalizer Z, i.e., the total number of all possible ⟨µ, ν⟩ pairs. To obtain the context gates, we define two types of PMI according to how C(µ, ν) is counted in the following two scenarios.
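The PMI estimate can be computed directly from co-occurrence counts. Below is a minimal sketch over a list of observed word pairs, normalizing all counts by the total number of pairs Z (a simplification of the counting described above; the function name is illustrative).

```python
import math
from collections import Counter

def pmi_table(pairs):
    """pmi(mu, nu) = log( P(mu, nu) / (P(mu) P(nu)) ) with probabilities
    estimated as counts over the observed (mu, nu) pairs."""
    joint = Counter(pairs)              # C(mu, nu)
    left = Counter(m for m, _ in pairs) # C(mu)
    right = Counter(n for _, n in pairs)# C(nu)
    z = len(pairs)                      # normalizer Z
    return {(m, n): math.log((c / z) / ((left[m] / z) * (right[n] / z)))
            for (m, n), c in joint.items()}
```

Pairs that co-occur more often than their marginal frequencies predict get positive PMI; independent pairs get PMI near zero.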
PMI in the Bilingual Scenario For each parallel sentence pair ⟨x, y⟩ in the training set, C(y_i, x_j) is incremented by one if both y_i ∈ y and x_j ∈ x.

PMI in the Monolingual Scenario
In the translation scenario, only the words in the preceding context of a target word should be considered. So for any target sentence y in the training set, C(y_i, y_k) is incremented by one if both y_i ∈ y and y_k ∈ y_{<i}.
Given the two kinds of PMI for a bilingual sentence ⟨x, y⟩, each z_i^* for each y_i is defined as follows:

    z_i^* = 1[ max_j pmi(y_i, x_j) > max_{k<i} pmi(y_i, y_k) ],    (7)

where 1[b] is an indicator function valued 1 if b is true and 0 otherwise. In equation 7, we employ the max strategy to measure the correlation between y_i and a sentence (x or y_{<i}). An average strategy could be used similarly, but we did not find it to yield gains over max in our experiments.
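Equation 7 can be sketched as follows, given bilingual and monolingual PMI lookup tables. This is a minimal illustration with hypothetical names; unseen pairs default to negative infinity so a word with no preceding-context collocate is labeled source-driven.

```python
def supervision(y, x, pmi_bi, pmi_mono):
    """z*_i = 1 iff the best bilingual PMI of y_i against any source
    word beats its best monolingual PMI against any preceding target
    word (the max strategy of eq. 7)."""
    z_star = []
    for i, yi in enumerate(y):
        src = max((pmi_bi.get((yi, xj), float("-inf")) for xj in x),
                  default=float("-inf"))
        tgt = max((pmi_mono.get((yi, yk), float("-inf")) for yk in y[:i]),
                  default=float("-inf"))
        z_star.append(1 if src > tgt else 0)
    return z_star
```

Swapping `max` for an average over the generator expressions gives the alternative strategy mentioned above.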

Experiments
The proposed methods are evaluated on the NIST ZH⇒EN, WMT14 EN⇒DE, IWSLT14 DE⇒EN and IWSLT17 FR⇒EN tasks. To make our NMT models capable of open-vocabulary translation, all datasets are preprocessed with Byte Pair Encoding (Sennrich et al., 2015). All proposed methods are implemented on top of the Transformer (Vaswani et al., 2017), the state-of-the-art NMT system. Case-insensitive BLEU score (Papineni et al., 2002) is used as the evaluation metric.

Tuning Regularization Coefficient
At the beginning of our experiments, we tuned the regularization coefficient λ on the DE⇒EN task. Table 2 shows that performance is robust to the choice of λ, fluctuating only slightly over various values. In particular, the best performance is achieved with λ = 1, which is the default setting throughout this paper.

Translation Performance
Table 1 shows the translation quality of our methods in BLEU. Our observations are as follows: 1) The performance of our implementation of the Transformer is slightly higher than that of Vaswani et al. (2017), which indicates a fair comparison.
2) The proposed Context Gates achieve a modest improvement over the baseline. As mentioned in Section 2.1, the structure of RNN based NMT is quite different from the Transformer's. Therefore, naively introducing the gate mechanism to the Transformer without adaptation does not yield gains similar to those in RNN based NMT.
3) The proposed Regularized Context Gates improve nearly 1.0 BLEU score over the baseline and outperform all existing related work. This indicates that the regularization makes context gates more effective at relieving the context control problem, as discussed below.

Error Analysis
To explain the success of Regularized Context Gates, we analyze the error rates of translation and context selection. Given a sentence pair x and y, a forced decoding translation error occurs at position i if P(y_i | y_{<i}, x) < P(ŷ_i | y_{<i}, x), where ŷ_i = arg max_v P(v | y_{<i}, x) and v ranges over the vocabulary. A context selection error occurs if z_i^*(y_i) ≠ z_i^*(ŷ_i), where z_i^* is defined in equation 7. Note that a context selection error must also be a translation error, but the converse is not true. The example shown in Figure 1 also demonstrates a context selection error, indicating that the translation error is related to bad context selection.
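The two error rates above can be sketched given per-position model probabilities and z* labels. This is an illustrative reading of the analysis, not the paper's evaluation script; all names are hypothetical.

```python
def error_rates(gold_probs, best_probs, gold_ctx, best_ctx):
    """Forced decoding error rate and context selection error rate.

    gold_probs[i] = P(y_i | y_<i, x); best_probs[i] = P(y_hat_i | ...),
    where y_hat_i is the argmax token. gold_ctx / best_ctx are the z*
    labels (eq. 7) of y_i and y_hat_i respectively."""
    n = len(gold_probs)
    # Forced decoding error: the reference token is not the argmax.
    forced_err = [pg < pb for pg, pb in zip(gold_probs, best_probs)]
    # Context selection error: labels differ; this implies a forced
    # decoding error, since y_hat_i = y_i whenever y_i is the argmax.
    ctx_err = [g != b for g, b in zip(gold_ctx, best_ctx)]
    return sum(forced_err) / n, sum(ctx_err) / n
```

Dividing the second rate by the first gives the CE/FE proportion reported in Table 3.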

Conclusions
This paper transplants context gates from RNN based NMT to the Transformer to control the source and target contexts for translation. We find that context gates only modestly improve the translation quality of the Transformer, because learning context gates freely from scratch is more challenging for the Transformer, with its complicated structure, than for an RNN. Based on this observation, we propose a regularization method to guide the learning of context gates, together with an effective way to generate supervision from the training data. Experimental results show that regularized context gates significantly improve translation performance across different translation tasks, even though the context control problem is only partially relieved. In the future, we believe further work on alleviating the context control problem has the potential to improve translation performance, as quantified in Table 3.

A Details of Data and Implementation
The training data for the ZH⇒EN task consists of 1.8M sentence pairs. The development set is NIST02, and the test sets are NIST05, 06, and 08. For the EN⇒DE task, the training data contains 4.6M sentence pairs. Both the FR⇒EN and DE⇒EN tasks contain around 0.2M sentence pairs. For the ZH⇒EN and EN⇒DE tasks, a joint vocabulary is built with 32K BPE merge operations; for the DE⇒EN and FR⇒EN tasks, it is built with 16K merge operations.
Our implementation of context gates and the regularization is based on the Transformer as implemented in THUMT (Zhang et al., 2017). For the ZH⇒EN and EN⇒DE tasks, only sentences of up to 256 tokens are used, with no more than 2^15 tokens in a batch. The dimension of both the word embeddings and the hidden states is 512. Both the encoder and decoder have 6 layers and adopt multi-head attention with 8 heads. For the FR⇒EN and DE⇒EN tasks, we use a smaller model with 4 layers and 4 heads, where both the embedding size and the hidden size are 256, and a training batch contains no more than 2^12 tokens. For all tasks, the beam size for decoding is 4, and the loss function is optimized with Adam, where β_1 = 0.9, β_2 = 0.98 and ε = 10^{-9}.

C Regularization in Different Layers
To investigate the sensitivity of choosing different layers for regularization, we regularize the context gate in each single layer separately. Table 5 shows no significant performance differences, but all single-layer regularized models are slightly inferior to the model that regularizes all gates. Moreover, since nearly no computational overhead is introduced, and for design simplicity, we adopt regularizing all layers.

D Effects on Long Sentences
Tu et al. (2017) showed that context gates alleviate the problem of long-sentence translation in attentional RNN based systems (Bahdanau et al., 2014). Following Tu et al. (2017), we compare translation performance with respect to sentence length. As shown in Figure 2, we find that Context Gates do not improve the translation of long sentences, but translate short sentences better. Fortunately, the Regularized Context Gates significantly improve translation for both short and long sentences.

Figure 1 :
Figure 1: A running example illustrating the context control problem. Both the original and the context gated Transformer produce an unfaithful translation, wrongly translating "tī qiú" into "play golf" because they refer too much to the target context. By regularizing the context gates, the proposed method corrects the translation of "tī qiú" into "play soccer". The light font denotes target words to be translated in the future. For the original Transformer, the source and target contexts are added directly without any rebalancing.

* Results are measured on the DE⇒EN task.

Figure 2 :
Figure 2: Translation performance on NIST08 test set with respect to different lengths of source sentence.Regularized Context Gates significantly improves the translation of short and long sentences.
* Work done while X. Li was interning at Tencent AI Lab. L. Liu is the corresponding author.

Table 2 :
Translation performance over different regularization coefficient λ.
* Results are measured on NIST08 of ZH⇒EN task.

Table 3 :
Forced decoding translation error rate (FER), context selection error rate (CER), and the proportion of context selection errors over forced decoding translation errors (CE/FE) for the original and context gated Transformer, with or without regularization.

As shown in Table 3, the Regularized Context Gates significantly reduce the translation error rate by avoiding context selection errors. The Context Gates alone also avoid a few context selection errors, but do not make a notable improvement in translation performance. It is worth noting that approximately one third of translation errors are related to context selection errors. The Regularized Context Gates indeed alleviate this serious problem by effectively rebalancing the source and target contexts for translation.

Table 4 :
Mean and variance of context gates

Table 5 :
Regularizing context gates on different layers. "N/A" indicates that no regularization is added. "ALL" indicates that regularization is added to all layers.