From Credit Assignment to Entropy Regularization: Two New Algorithms for Neural Sequence Prediction

In this work, we study the credit assignment problem in reward augmented maximum likelihood (RAML) learning, and establish a theoretical equivalence between the token-level counterpart of RAML and entropy-regularized reinforcement learning. Inspired by this connection, we propose two sequence prediction algorithms, one extending RAML with fine-grained credit assignment and the other improving Actor-Critic with a systematic entropy regularization. On two benchmark datasets, we show that the proposed algorithms outperform RAML and Actor-Critic respectively, providing new alternatives for sequence prediction.

Despite the distinct evaluation metrics for the aforementioned tasks, the standard training algorithm has been the same for all of them. Specifically, the algorithm is based on maximum likelihood estimation (MLE), which maximizes the log-likelihood of the "ground-truth" sequences empirically observed. While largely effective, the MLE algorithm has two obvious weaknesses. Firstly, MLE training ignores the information of the task-specific metric. As a result, the potentially large discrepancy between the log-likelihood used during training and the task evaluation metric used at test time can lead to a suboptimal solution. Secondly, MLE can suffer from exposure bias, which refers to the phenomenon that the model is never exposed to its own failures during training, and thus cannot recover from an error at test time. Fundamentally, this issue is rooted in the difficulty of statistically modeling the exponentially large space of sequences, where most combinations cannot be covered by the observed data.
To tackle these two weaknesses, there have been various recent efforts, which we summarize into two broad categories:
• A widely explored idea is to directly optimize the task metric for sequences produced by the model, with the specific approaches ranging from minimum risk training (MRT) (Shen et al., 2015) and learning as search optimization (LaSO) (Daumé III and Marcu, 2005; Wiseman and Rush, 2016) to reinforcement learning (RL) (Ranzato et al., 2015; Bahdanau et al., 2016). In spite of the technical differences, the key component that makes these training algorithms practically efficient is often a delicate credit assignment scheme, which transforms the sequence-level signal into dedicated smaller units (e.g., token-level or chunk-level) and allocates them to specific decisions, allowing for efficient optimization with a much lower variance. For instance, beam search optimization (BSO) (Wiseman and Rush, 2016) utilizes the position of margin violations to produce signals for specific chunks, while the actor-critic (AC) algorithm (Bahdanau et al., 2016) trains a critic to enable token-level signals.
• Another alternative idea is to construct a task-metric-dependent target distribution, and train the model to match this task-specific target instead of the empirical data distribution. As a typical example, reward augmented maximum likelihood (RAML) (Norouzi et al., 2016) defines the target distribution as the exponentiated pay-off (sequence-level reward) distribution. This way, RAML can not only incorporate the task metric information into training but also alleviate the exposure bias by exposing imperfect outputs to the model. However, RAML operates only on the sequence-level training signal.
In this work, we are intrigued by the question of whether it is possible to incorporate the idea of fine-grained credit assignment into RAML. More specifically, inspired by the token-level signal used in AC, we aim to find the token-level counterpart of the sequence-level RAML, i.e., to define a token-level target distribution for each autoregressive conditional factor to match. Motivated by this question, we first formally define the desiderata the token-level counterpart needs to satisfy and derive the corresponding solution (§2). Then, we establish a theoretical connection between the derived token-level RAML and entropy-regularized RL (§3). Motivated by this connection, we propose two algorithms for neural sequence prediction, where one is the token-level extension of RAML and the other a RAML-inspired improvement to AC (§4). We empirically evaluate the two proposed algorithms and show different levels of improvement over the corresponding baselines. We further study the importance of various techniques used in our experiments, providing practical suggestions to readers (§6).

Token-level Equivalence of RAML
We first introduce the notation used throughout the paper. Capital letters denote random variables and lower-case letters the values they take. As we mainly focus on conditional sequence prediction, we use $x$ for the conditional input and $y$ for the target sequence. With $y$ denoting a sequence, $y_i^j$ denotes the subsequence from position $i$ to $j$ inclusive, while $y_t$ denotes the single token at position $t$. Also, we use $|y|$ to indicate the length of the sequence. To emphasize the ground-truth data used for training, we add a superscript $*$ to the input and target, i.e., $x^*$ and $y^*$. In addition, we use $\mathcal{Y}$ to denote the set of all possible sequences with one and only one eos symbol at the end, and $\mathcal{W}$ to denote the set of all possible symbols at a position. Finally, we assume the length of sequences in $\mathcal{Y}$ is bounded by $T$.

Background: RAML
As discussed in §1, given a ground-truth pair $(x^*, y^*)$, RAML defines the target distribution using the exponentiated pay-off of sequences, i.e.,
$$P_R(y \mid x^*, y^*) = \frac{\exp\left(R(y; y^*)/\tau\right)}{\sum_{y' \in \mathcal{Y}} \exp\left(R(y'; y^*)/\tau\right)}, \quad (1)$$
where $R(y; y^*)$ is the sequence-level reward, such as the BLEU score, and $\tau$ is a temperature hyper-parameter controlling the sharpness. With this definition, the RAML algorithm simply minimizes the cross entropy (CE) between the target distribution and the model distribution,
$$\mathcal{L}(\theta) = \mathrm{CE}\left(P_R(Y \mid x^*, y^*) \,\|\, P_\theta(Y \mid x^*)\right) = -\sum_{y \in \mathcal{Y}} P_R(y \mid x^*, y^*) \log P_\theta(y \mid x^*). \quad (2)$$
Note that this is quite similar to MLE training, except that the target distribution is different. With this particular choice of target distribution, RAML not only makes sure the ground-truth reference remains the mode, but also allows the model to explore sequences that are not exactly the same as the reference but have relatively high rewards. Compared to algorithms that try to directly optimize the task metric, RAML avoids the difficulty of tracking and sampling from a model distribution that is constantly changing. Hence, RAML enjoys a much more stable optimization without the need for pretraining. However, in order to optimize the RAML objective (Eqn. (2)), one needs to sample from the exponentiated pay-off distribution, which is quite challenging in practice. Thus, importance sampling is often used (Norouzi et al., 2016; Ma et al., 2017). We leave the details of the practical implementation to Appendix B.1.
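To make the exponentiated pay-off concrete, the following minimal Python sketch (the helper name `raml_target_weights` and the toy rewards are our own illustration, not from the paper) computes the normalized target weights of Eqn. (1) over a finite candidate set; the RAML objective in Eqn. (2) is then simply the cross entropy weighted by these values.

```python
import math

def raml_target_weights(rewards, tau=0.4):
    """Normalized exponentiated pay-off weights exp(R(y; y*)/tau) / Z
    over a finite set of candidate sequences (Eqn. (1))."""
    scores = [math.exp(r / tau) for r in rewards]
    z = sum(scores)
    return [s / z for s in scores]

# Toy example: three candidates with sequence-level rewards (e.g., sentence BLEU).
rewards = [1.0, 0.7, 0.2]          # R(y; y*) for each candidate
weights = raml_target_weights(rewards)
# The RAML objective is the weighted cross entropy
# -sum_i weights[i] * log P_theta(y_i | x*), so higher-reward candidates
# receive larger target probability while the reference stays the mode.
print(weights)
```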

Token-level Target Distribution
Despite these appealing properties, RAML only operates on the sequence-level reward. As a result, the reward gap between any two sequences cannot be attributed precisely to the responsible decisions, which often leads to low sample efficiency. Ideally, since we rely on the auto-regressive factorization $P_\theta(y \mid x^*) = \prod_{t=1}^{|y|} P_\theta(y_t \mid y_1^{t-1}, x^*)$, the optimization would be much more efficient if we had a target distribution for each token-level factor $P_\theta(Y_t \mid y_1^{t-1}, x^*)$ to match. Conceptually, this is exactly how the AC algorithm improves upon the vanilla sequence-level REINFORCE algorithm (Ranzato et al., 2015).
With this idea in mind, we set out to find such a token-level target. Firstly, we assume the token-level target shares the form of a Boltzmann distribution, parameterized by some unknown negative energy function $Q_R$, i.e.,
$$P_{Q_R}(Y_t = w \mid y_1^{t-1}; y^*) = \frac{\exp\left(Q_R(y_1^{t-1}, w; y^*)/\tau\right)}{\sum_{w' \in \mathcal{W}} \exp\left(Q_R(y_1^{t-1}, w'; y^*)/\tau\right)}. \quad (3)$$
Intuitively, $Q_R(y_1^{t-1}, w; y^*)$ measures how much future pay-off one can expect if $w$ is generated, given the current status $y_1^{t-1}$ and the reference $y^*$. This quantity highly resembles the action-value function (Q-function) in reinforcement learning. As we will show later, it is indeed the case.
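As a quick illustration of the Boltzmann form in Eqn. (3), the sketch below (hypothetical helper `token_target_from_q`, toy Q values) turns a vector of $Q_R$ scores over the vocabulary into a token-level target distribution; the max-shift in the code also makes the shift-invariance discussed next explicit.

```python
import numpy as np

def token_target_from_q(q_values, tau=0.4):
    """Token-level Boltzmann target P_{Q_R}(w | y_1^{t-1}; y*) from the
    negative energies Q_R(y_1^{t-1}, w; y*) over the vocabulary (Eqn. (3))."""
    q = np.asarray(q_values, dtype=np.float64) / tau
    q -= q.max()                      # shift-invariance of the Boltzmann form
    p = np.exp(q)
    return p / p.sum()

# Toy vocabulary of 4 symbols with assumed Q_R values at some prefix y_1^{t-1}.
print(token_target_from_q([2.0, 1.5, 0.1, -1.0]))
```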
Before we state the desiderata for $Q_R$, we need to extend the definition of $R$ in order to evaluate the goodness of an unfinished partial prediction, i.e., a sequence without an eos suffix. Let $\mathcal{Y}^-$ be the set of unfinished sequences. Following Bahdanau et al. (2016), we define the pay-off function $R$ for a partial sequence $\hat{y} \in \mathcal{Y}^-$, $|\hat{y}| < T$, as $R(\hat{y}; y^*) = R(\hat{y} + \text{eos}; y^*)$, where $+$ indicates string concatenation.
With this extension, we are ready to state two requirements for $Q_R$:
1. Marginal match: For $P_{Q_R}$ to be the token-level equivalent of $P_R$, the sequence-level marginal distribution induced by $P_{Q_R}$ must match $P_R$, i.e., for any $y \in \mathcal{Y}$,
$$\prod_{t=1}^{|y|} P_{Q_R}(y_t \mid y_1^{t-1}) = P_R(y). \quad (5)$$
Note that there are infinitely many $Q_R$'s satisfying Eqn. (5), because adding any constant to $Q_R$ does not change the Boltzmann distribution, a property known as shift-invariance w.r.t. the energy.

2. Terminal condition: Secondly, consider the value of $Q_R$ when emitting an eos symbol to immediately terminate the generation. As mentioned earlier, $Q_R$ measures the expected future pay-off. Since the emission of eos ends the generation, the future pay-off can only come from the immediate increase of the pay-off. Thus, we require $Q_R$ to be the incremental pay-off when producing eos, i.e.,
$$Q_R(y_1^{t-1}, \text{eos}; y^*) = R(y_1^{t-1} + \text{eos}; y^*) - R(y_1^{t-1}; y^*). \quad (6)$$
Based on the two requirements, we can derive the form of $Q_R$, which is summarized in Proposition 1.
Proposition 1. $P_{Q_R}$ and $Q_R$ satisfy requirements (5) and (6) if and only if for any ground-truth pair $(x^*, y^*)$ and any sequence prediction $y \in \mathcal{Y}$, when $t < |y|$,
$$Q_R(y_1^{t-1}, y_t; y^*) = R(y_1^t; y^*) - R(y_1^{t-1}; y^*) + \tau \log \sum_{w \in \mathcal{W}} \exp\left(Q_R(y_1^t, w; y^*)/\tau\right), \quad (7)$$
and otherwise, i.e., when $t = |y|$,
$$Q_R(y_1^{t-1}, y_t; y^*) = R(y_1^{t-1} + \text{eos}; y^*) - R(y_1^{t-1}; y^*). \quad (8)$$
Note that, instead of giving an explicit form for the token-level target distribution, Proposition 1 only provides an equivalent condition in the form of an implicit recursion. Thus, we have not yet obtained a practical algorithm. However, as we will discuss next, the recursion has a deep connection to entropy-regularized RL, which ultimately inspires our proposed algorithms.
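The recursion in Proposition 1 can be evaluated exactly on a toy problem by backward induction from the terminal condition. The sketch below is a minimal illustration with an assumed two-symbol vocabulary, a toy position-match pay-off standing in for BLEU, and a small length bound; none of these choices come from the paper.

```python
import math
from functools import lru_cache

VOCAB = ("a", "b")           # W without eos
EOS = "</s>"
REF = ("a", "b", "a")        # toy reference y*
T_MAX = 4                    # longest allowed sequence length (incl. eos)
TAU = 0.4

def payoff(seq):
    """Toy pay-off R(y; y*): number of matched positions against the reference.
    Partial sequences are scored as if eos were appended (the extension in the text)."""
    return sum(1.0 for s, r in zip(seq, REF) if s == r)

@lru_cache(maxsize=None)
def q_r(prefix, w):
    """Q_R(prefix, w) via the recursion of Proposition 1, by backward induction."""
    if w == EOS:
        # terminal condition (Eqn. (6)/(8)): incremental pay-off of emitting eos
        return payoff(prefix + (EOS,)) - payoff(prefix)
    nxt = prefix + (w,)
    inc = payoff(nxt) - payoff(prefix)        # incremental pay-off
    return inc + v_r(nxt)                     # non-terminal case (Eqn. (7))

def v_r(prefix):
    # only eos is allowed once the length bound T is reached
    choices = (EOS,) if len(prefix) >= T_MAX - 1 else (EOS,) + VOCAB
    return TAU * math.log(sum(math.exp(q_r(prefix, w) / TAU) for w in choices))

# Token-level target at the empty prefix: softmax of Q_R / tau over the vocabulary.
qs = {w: q_r((), w) for w in (EOS,) + VOCAB}
z = sum(math.exp(q / TAU) for q in qs.values())
print({w: round(math.exp(q / TAU) / z, 3) for w, q in qs.items()})
```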

Connection to Entropy-regularized RL
Before we dive into the connection, we first give a brief review of entropy-regularized RL. For an in-depth treatment, we refer readers to (Ziebart, 2010; Schulman et al., 2017).

Background: Entropy-regularized RL
Following the standard convention of RL, we denote a Markov decision process (MDP) by a tuple $M = (\mathcal{S}, \mathcal{A}, p_s, r, \gamma)$, where $\mathcal{S}, \mathcal{A}, p_s, r, \gamma$ are the state space, action space, transition probability, reward function and discounting factor respectively. Based on this notation, the goal of entropy-regularized RL is to learn a policy $\pi(a_t \mid s_t)$ which maximizes the discounted expected future return plus the causal entropy (Ziebart, 2010), i.e.,
$$\max_\pi \; \mathbb{E}_\pi \left[ \sum_{t} \gamma^t \left( r(s_t, a_t) + \alpha \mathcal{H}\left(\pi(\cdot \mid s_t)\right) \right) \right],$$
where $\mathcal{H}$ denotes the entropy and $\alpha$ is a hyper-parameter controlling the relative importance between the reward and the entropy. Intuitively, compared to standard RL, the extra entropy term encourages exploration and promotes multi-modal behaviors. Such properties are highly favorable in a complex environment. Given an entropy-regularized MDP, for any fixed policy $\pi$, the state-value function $V^\pi$ and the action-value function $Q^\pi$ can be defined as
$$V^\pi(s) = \mathbb{E}_{a \sim \pi}\left[ Q^\pi(s, a) \right] + \alpha \mathcal{H}\left(\pi(\cdot \mid s)\right), \qquad Q^\pi(s, a) = r(s, a) + \gamma \, \mathbb{E}_{s' \sim p_s}\left[ V^\pi(s') \right]. \quad (9)$$
With the definitions above, it can further be proved (Ziebart, 2010; Schulman et al., 2017) that the optimal state-value function $V^*$, the optimal action-value function $Q^*$ and the corresponding optimal policy $\pi^*$ satisfy the following equations:
$$V^*(s) = \alpha \log \sum_{a \in \mathcal{A}} \exp\left(Q^*(s, a)/\alpha\right), \quad (10)$$
$$Q^*(s, a) = r(s, a) + \gamma \, \mathbb{E}_{s' \sim p_s}\left[ V^*(s') \right], \quad (11)$$
$$\pi^*(a \mid s) = \exp\left( \left( Q^*(s, a) - V^*(s) \right)/\alpha \right). \quad (12)$$
Here, Eqn. (10) and (11) are essentially the entropy-regularized counterparts of the optimal Bellman equations in standard RL. Following previous literature, we will refer to Eqn. (10) and (11) as the optimal soft Bellman equations, and to $V^*$ and $Q^*$ as the optimal soft value functions.
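For intuition, the sketch below runs the optimal soft Bellman backup of Eqn. (10) and (11) on a tiny hand-made tabular MDP; the chain structure, rewards, and function name are illustrative assumptions rather than anything from the paper. Note that the resulting optimal policy is the softmax-shaped $\pi^*$ of Eqn. (12) instead of a deterministic argmax.

```python
import numpy as np

def soft_value_iteration(rewards, transitions, alpha=0.1, gamma=1.0, iters=200):
    """Optimal soft Bellman backup (Eqn. (10)-(11)) on a small tabular MDP.

    rewards[s, a]     : immediate reward r(s, a)
    transitions[s, a] : index of the (deterministic) next state, or -1 if terminal
    """
    n_states, n_actions = rewards.shape
    v = np.zeros(n_states)
    for _ in range(iters):
        q = rewards.copy()
        for s in range(n_states):
            for a in range(n_actions):
                nxt = transitions[s, a]
                if nxt >= 0:
                    q[s, a] += gamma * v[nxt]
        # V*(s) = alpha * log sum_a exp(Q*(s, a) / alpha)
        v = alpha * np.log(np.exp(q / alpha).sum(axis=1))
    # optimal policy: pi*(a | s) = exp((Q*(s, a) - V*(s)) / alpha)
    pi = np.exp((q - v[:, None]) / alpha)
    return q, v, pi

# Toy 3-state chain: action 0 stops (terminal), action 1 moves one state to the right.
rewards = np.array([[0.0, 1.0], [0.0, 1.0], [2.0, 0.0]])
transitions = np.array([[-1, 1], [-1, 2], [-1, -1]])
q, v, pi = soft_value_iteration(rewards, transitions)
print(np.round(pi, 3))
```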

An RL Equivalence of the Token-level RAML
To reveal the connection, it is convenient to define the incremental pay-off
$$r(y_1^{t-1}, y_t; y^*) = R(y_1^t; y^*) - R(y_1^{t-1}; y^*) \quad (13)$$
and the last term of Eqn. (7) as
$$V_R(y_1^t; y^*) = \tau \log \sum_{w \in \mathcal{W}} \exp\left(Q_R(y_1^t, w; y^*)/\tau\right). \quad (14)$$
Substituting the two definitions into Eqn. (7), the recursion simplifies to
$$Q_R(y_1^{t-1}, y_t; y^*) = r(y_1^{t-1}, y_t; y^*) + V_R(y_1^t; y^*). \quad (15)$$
Now, it is easy to see that Eqn. (14) and (15), which are derived from the token-level RAML, highly resemble the optimal soft Bellman equations (10) and (11) in entropy-regularized RL. The following corollary formalizes the connection.
Corollary 1. For any ground-truth pair $(x^*, y^*)$, the recursion specified by Eqn. (13), (14) and (15) is equivalent to the optimal soft Bellman equation of a "deterministic" MDP in entropy-regularized reinforcement learning, denoted as $M_R$, where
• the action space $\mathcal{A}$ corresponds to $\mathcal{W}$,
• the transition probability $p_s$ is a deterministic process defined by string concatenation,
• the reward function $r$ corresponds to the incremental pay-off defined in Eqn. (13),
• the discounting factor $\gamma = 1$,
• the entropy hyper-parameter $\alpha = \tau$,
• and an episode terminates either when eos is emitted or when its length reaches $T$, at which point we enforce the generation of eos.
Moreover, the optimal soft value functions $V^*$ and $Q^*$ of this MDP exactly match the $V_R$ and $Q_R$ defined by Eqn. (14) and (15) respectively. The optimal policy $\pi^*$ is hence equivalent to the token-level target distribution $P_{Q_R}$.
The connection established by Corollary 1 is quite inspiring:
• Firstly, it provides a rigorous and generalized view of the connection between RAML and entropy-regularized RL. In the original work, Norouzi et al. (2016) point out that RAML can be seen as reversing the direction of the KL divergence between $P_\theta$ and $P_R$, which is a sequence-level view of the connection. Now, with the equivalence between the token-level target $P_{Q_R}$ and the optimal $Q^*$, it generalizes to matching the future action values consisting of both the reward and the entropy.
• Secondly, due to the equivalence, if we solve the optimal soft Q-function of the corresponding MDP, we directly obtain the token-level target distribution. This hints at a practical algorithm with token-level credit assignment.
• Moreover, since RAML is able to improve upon MLE by injecting entropy, the entropy-regularized RL counterpart of the standard AC algorithm should also lead to an improvement in a similar manner.

Proposed Algorithms
In this section, we explore the insights gained from Corollary 1 and present two new algorithms for sequence prediction.

Value Augmented Maximum Likelihood
The first algorithm we consider is the token-level extension of RAML, which we have been discussing since §2. As mentioned at the end of §2.2, Proposition 1 only gives an implicit form of $Q_R$, and hence of the token-level target distribution $P_{Q_R}$ (Eqn. (3)). However, thanks to Corollary 1, we now know that $Q_R$ is the same as the optimal soft action-value function $Q^*$ of the entropy-regularized MDP $M_R$. Hence, by finding $Q^*$, we gain access to $P_{Q_R}$. At first sight, it seems that recovering $Q^*$ is as difficult as solving the original sequence prediction problem, because solving for $Q^*$ in the MDP is essentially the same as learning the optimal policy for sequence prediction. However, this is not the case, because $Q_R$ (i.e., $P_{Q_R}$) can condition on the correct reference $y^*$, whereas the model distribution $P_\theta$ can only depend on $x^*$. Therefore, the function approximator trained to recover $Q^*$ can take $y^*$ as input, making the estimation task much easier. Intuitively, when recovering $Q^*$, we are trying to train an ideal "oracle", which has access to the ground-truth reference output, to decide the best behavior (policy) in any arbitrary (good or bad) state.
Thus, following the reasoning above, we first train a parametric function approximator $Q_\phi$ to search for the optimal soft action value. In this work, for simplicity, we employ the Soft Q-Learning algorithm (Schulman et al., 2017) to perform the policy optimization. In a nutshell, Soft Q-Learning is the entropy-regularized version of Q-Learning, an off-policy algorithm which minimizes the mean squared soft Bellman residual according to Eqn. (11). Specifically, given a ground-truth pair $(x^*, y^*)$, for any trajectory $y \in \mathcal{Y}$, the training objective is
$$\mathcal{L}(\phi) = \sum_{t=1}^{|y|} \left( Q_\phi(y_1^{t-1}, y_t; y^*) - \hat{Q}(y_1^{t-1}, y_t; y^*) \right)^2, \quad (16)$$
where $\hat{Q}(y_1^{t-1}, y_t; y^*) = r(y_1^{t-1}, y_t; y^*) + V_\phi(y_1^{t}; y^*)$ is the one-step look-ahead target Q-value, and $V_\phi(y_1^t; y^*) = \tau \log \sum_{w \in \mathcal{W}} \exp\left(Q_\phi(y_1^t, w; y^*)/\tau\right)$ as defined in Eqn. (10). In the recent instantiation of Q-Learning (Mnih et al., 2015), to stabilize training, the target Q-value is often estimated by a separate, slowly updated target network. In our case, as we have access to a significant amount of reference sequences, we find the target network unnecessary. Thus, we directly optimize Eqn. (16) using gradient descent, and let the gradient flow through both $Q_\phi(y_1^{t-1}, y_t; y^*)$ and $V_\phi(y_1^t; y^*)$ (Baird, 1995). After the training of $Q_\phi$ converges, we fix the parameters of $Q_\phi$ and optimize the cross entropy $\mathrm{CE}\left(P_{Q_\phi}(Y_t \mid y_1^{t-1}; y^*) \,\|\, P_\theta(Y_t \mid y_1^{t-1}, x^*)\right)$ between the token-level target distribution and each conditional factor of the model, summed over positions $t$. Compared to the objective of RAML in Eqn. (2), having access to $P_{Q_\phi}(Y_t \mid y_1^{t-1})$ allows us to provide a distinct token-level target for each conditional factor $P_\theta(Y_t \mid y_1^{t-1})$ of the model. While directly sampling from $P_R$ is practically infeasible (§2.1), having a parametric target distribution $P_{Q_\phi}$ makes it theoretically possible to sample from $P_{Q_\phi}$ and perform the optimization. However, empirically, we find the samples from $P_{Q_\phi}$ are not diverse enough (§6). Hence, we fall back to the same importance sampling approach (see Appendix B.2) as used in RAML.
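The soft Bellman residual of Eqn. (16) is straightforward to express with standard tensor operations. The PyTorch sketch below is a minimal, shape-only illustration with random tensors and a hypothetical function name; as in the text, no target network is used and the gradient flows through both the Q term and the log-sum-exp value term.

```python
import torch

def soft_q_learning_loss(q_logits, tokens, rewards, tau):
    """Mean squared soft Bellman residual of Eqn. (16), a minimal sketch.

    q_logits : [B, T, V]  Q_phi(y_1^{t-1}, w; y*) for every position and symbol
    tokens   : [B, T]     trajectory tokens y_t
    rewards  : [B, T]     incremental pay-offs r(y_1^{t-1}, y_t; y*)
    """
    q_taken = q_logits.gather(-1, tokens.unsqueeze(-1)).squeeze(-1)   # Q of taken tokens
    # V_phi(y_1^t) = tau * logsumexp_w Q_phi(y_1^t, w) / tau; slot t holds Q(y_1^{t-1}, .),
    # so the value of the next prefix is the same quantity shifted left by one step,
    # with 0 after the final (eos) position.
    v = tau * torch.logsumexp(q_logits / tau, dim=-1)                 # [B, T]
    v_next = torch.cat([v[:, 1:], torch.zeros_like(v[:, :1])], dim=1)
    target = rewards + v_next                                         # one-step look-ahead target
    # no target network: gradients flow through both q_taken and v_next (residual gradient)
    return ((q_taken - target) ** 2).mean()

B, T, V = 2, 5, 7
q = torch.randn(B, T, V, requires_grad=True)
loss = soft_q_learning_loss(q, torch.randint(0, V, (B, T)), torch.randn(B, T), tau=0.4)
loss.backward()
```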
Finally, since the algorithm utilizes the optimal soft action-value function to construct the tokenlevel target, we will refer to it as value augmented maximum likelihood (VAML) in the sequel.

Entropy-regularized Actor Critic
The second algorithm follows the discussion at the end of §3.2, and is essentially an actor-critic algorithm based on the entropy-regularized MDP in Corollary 1. For this reason, we name the algorithm entropy-regularized actor critic (ERAC). As with the standard AC algorithm, the training process interleaves the evaluation of the current policy using the parametric critic $Q_\phi$ and the optimization of the actor policy $\pi_\theta$ given the current critic.
Critic Training. The critic is trained to perform policy evaluation using temporal difference (TD) learning, which minimizes the TD error
$$\mathcal{L}_{\text{TD}}(\phi) = \mathbb{E}_{y \sim \pi_\theta}\left[ \sum_{t=1}^{|y|} \left( Q_\phi(y_1^{t-1}, y_t) - \bar{Q}_{\bar\phi}(y_1^{t-1}, y_t) \right)^2 \right], \quad (18)$$
where the TD target $\bar{Q}_{\bar\phi}$ is constructed based on the fixed-policy iteration in Eqn. (9), i.e.,
$$\bar{Q}_{\bar\phi}(y_1^{t-1}, y_t) = r(y_1^{t-1}, y_t) + \sum_{w \in \mathcal{W}} \pi_\theta(w \mid y_1^{t}) \left[ Q_{\bar\phi}(y_1^{t}, w) - \tau \log \pi_\theta(w \mid y_1^{t}) \right]. \quad (19)$$
It is worthwhile to emphasize that the objective (18) trains the critic $Q_\phi$ to evaluate the current policy. Hence, it is entirely different from the objective (16), which performs policy optimization by Soft Q-Learning. Also, the trajectories $y$ used in (18) are sequences drawn from the actor policy $\pi_\theta$, while objective (16) theoretically accepts any trajectory since Soft Q-Learning can be fully off-policy. Finally, following Bahdanau et al. (2016), the TD target $\bar{Q}_{\bar\phi}$ in Eqn. (19) is evaluated using a target network, which is indicated by the bar sign above the parameters, i.e., $\bar\phi$. The target network is slowly updated by linearly interpolating with the up-to-date network, i.e., $\bar\phi \leftarrow \beta \phi + (1 - \beta) \bar\phi$. We also adopt another technique proposed by Bahdanau et al. (2016), which smooths the critic by minimizing the "variance" of Q-values, i.e.,
$$\mathcal{L}_{\text{var}}(\phi) = \mathbb{E}_{y \sim \pi_\theta}\left[ \sum_{t} \sum_{w \in \mathcal{W}} \left( Q_\phi(y_1^{t-1}, w) - \bar{Q}_\phi(y_1^{t-1}) \right)^2 \right], \quad \text{where } \bar{Q}_\phi(y_1^{t-1}) = \frac{1}{|\mathcal{W}|} \sum_{w \in \mathcal{W}} Q_\phi(y_1^{t-1}, w)$$
is the mean Q-value, and $\lambda_{\text{var}}$ is a hyper-parameter controlling the relative weight between the TD loss and the smoothing loss.
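A compact sketch of the critic update may help. The PyTorch snippet below combines the TD loss of Eqn. (18) with the soft policy-evaluation target of Eqn. (19) and the smoothing penalty, using random tensors and hypothetical names purely for shape illustration; the commented lines at the end show the linear-interpolation update of the target network.

```python
import torch

def erac_critic_loss(q, q_target, log_pi, tokens, rewards, tau, lam_var):
    """TD + smoothing loss for the ERAC critic, a minimal sketch of Eqn. (18)-(19).

    q        : [B, T, V]  Q_phi(y_1^{t-1}, w) for the sampled trajectories
    q_target : [B, T, V]  same quantity from the slowly-updated target network
    log_pi   : [B, T, V]  log pi_theta(w | y_1^{t-1}) of the current actor
    tokens   : [B, T]     sampled tokens y_t;  rewards : [B, T] incremental pay-offs
    """
    q_taken = q.gather(-1, tokens.unsqueeze(-1)).squeeze(-1)
    # soft policy evaluation: V(next prefix) = E_pi[Q_target - tau * log pi]
    v = (log_pi.exp() * (q_target - tau * log_pi)).sum(-1)          # [B, T]
    v_next = torch.cat([v[:, 1:], torch.zeros_like(v[:, :1])], 1)
    td = ((q_taken - (rewards + v_next).detach()) ** 2).mean()
    # smoothing penalty: keep Q-values at each step close to their mean over the vocab
    smooth = ((q - q.mean(dim=-1, keepdim=True)) ** 2).mean()
    return td + lam_var * smooth

B, T, V = 2, 5, 7
loss = erac_critic_loss(
    q=torch.randn(B, T, V, requires_grad=True),
    q_target=torch.randn(B, T, V),
    log_pi=torch.log_softmax(torch.randn(B, T, V), -1),
    tokens=torch.randint(0, V, (B, T)),
    rewards=torch.randn(B, T), tau=0.6, lam_var=1e-3)
loss.backward()

# target network update (phi_bar <- beta * phi + (1 - beta) * phi_bar):
# for p, p_bar in zip(critic.parameters(), target_critic.parameters()):
#     p_bar.data.mul_(1 - beta).add_(beta * p.data)
```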
Actor Training. Given the critic $Q_\phi$, the actor gradient (to maximize the expected return) is given by the policy gradient theorem of entropy-regularized RL (Schulman et al., 2017), which has the form
$$\nabla_\theta J(\theta) = \mathbb{E}_{y \sim \pi_\theta}\left[ \sum_{t} \left( \sum_{w \in \mathcal{W}} \nabla_\theta \pi_\theta(w \mid y_1^{t-1}) \, Q_\phi(y_1^{t-1}, w) + \tau \nabla_\theta \mathcal{H}\left(\pi_\theta(\cdot \mid y_1^{t-1})\right) \right) \right]. \quad (20)$$
Here, for each step $t$, the summation over $\mathcal{W}$ follows Bahdanau et al. (2016), who compute the expectation over the full vocabulary exactly rather than approximating it with samples. As in Bahdanau et al. (2016), we find it necessary to first pretrain the actor using MLE and then pretrain the critic before the actor-critic training. Also, to prevent divergence during actor-critic training, it is helpful to continue performing MLE training along with Eqn. (20), though using a smaller weight $\lambda_{\text{mle}}$.
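The actor update can likewise be written in a few lines. The PyTorch sketch below is a minimal illustration (hypothetical names, random tensors) of Eqn. (20): the expected critic value is summed over the full vocabulary, the step-wise entropy is scaled by tau, and a small-weight MLE term on the reference tokens is added as described above.

```python
import torch

def erac_actor_loss(logits, q_values, gold_tokens, tau, lam_mle):
    """Actor objective for ERAC, a minimal sketch of Eqn. (20) plus the MLE term.

    logits      : [B, T, V] actor scores for pi_theta(w | y_1^{t-1})
    q_values    : [B, T, V] critic estimates Q_phi(y_1^{t-1}, w), treated as constants
    gold_tokens : [B, T]    reference tokens for the auxiliary MLE loss
    """
    log_pi = torch.log_softmax(logits, dim=-1)
    pi = log_pi.exp()
    # expected return under the current policy, summed over the full vocabulary,
    # plus tau times the policy entropy at each step
    expected_q = (pi * q_values.detach()).sum(-1)
    entropy = -(pi * log_pi).sum(-1)
    pg_loss = -(expected_q + tau * entropy).mean()
    # small-weight MLE term on the references to keep training from diverging
    mle = torch.nn.functional.nll_loss(
        log_pi.reshape(-1, log_pi.size(-1)), gold_tokens.reshape(-1))
    return pg_loss + lam_mle * mle

B, T, V = 2, 5, 7
loss = erac_actor_loss(torch.randn(B, T, V, requires_grad=True),
                       torch.randn(B, T, V), torch.randint(0, V, (B, T)),
                       tau=0.6, lam_mle=0.1)
loss.backward()
```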

Related Work
Task Loss Optimization and Exposure Bias. Apart from the previously introduced RAML, BSO and Actor-Critic (§1), MIXER (Ranzato et al., 2015) also utilizes chunk-level signals, where the length of the chunk grows as training proceeds. In contrast, minimum risk training (Shen et al., 2015) directly optimizes sentence-level BLEU. As a result, it requires a large number (100) of samples per sentence to estimate the expected risk.

Experiment Settings
In this work, we focus on two sequence prediction tasks: machine translation and image captioning. Due to the space limit, we only present the information necessary to compare the empirical results here; for a more detailed description, we refer readers to Appendix B and the released code. For all algorithms, the sequence-level BLEU score is employed as the pay-off function $R$, while the corpus-level BLEU score (Papineni et al., 2002) is used for the final evaluation. The sequence-level BLEU score is scaled up by the sentence length so that the scale of the immediate reward at each step is invariant to the length. For image captioning, each image-caption pair is treated as an i.i.d. sample during training, and the sequence-level BLEU score is used as the pay-off; for testing, the standard multi-reference BLEU-4 is used.
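To illustrate how a length-scaled sequence-level pay-off turns into per-step rewards, the sketch below uses a toy precision-style score as a stand-in for sentence BLEU (the scoring function is an assumption for illustration, not the exact metric used in the experiments) and computes the incremental reward r_t = R(y_1^t) - R(y_1^{t-1}) for each position.

```python
def length_scaled_reward(prefix, reference):
    """Toy stand-in for the length-scaled sentence BLEU pay-off: a precision-like
    score (matched positions / prefix length) multiplied by the prefix length,
    so the immediate reward at each step keeps a length-invariant scale."""
    if not prefix:
        return 0.0
    precision = sum(1.0 for p, r in zip(prefix, reference) if p == r) / len(prefix)
    return len(prefix) * precision

def incremental_rewards(hypothesis, reference):
    """r_t = R(y_1^t; y*) - R(y_1^{t-1}; y*): the per-step credit each token receives."""
    rs, prev = [], 0.0
    for t in range(1, len(hypothesis) + 1):
        cur = length_scaled_reward(hypothesis[:t], reference)
        rs.append(cur - prev)
        prev = cur
    return rs

print(incremental_rewards("the cat sad".split(), "the cat sat down".split()))
```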

Comparison with the Direct Baseline
Firstly, we compare ERAC and VAML with their corresponding direct baselines, namely AC (Bahdanau et al., 2016) and RAML (Norouzi et al., 2016) respectively. As a reference, the performance of MLE is also provided.
Due to the non-negligible performance variance observed across different runs, we run each algorithm 9 times with different random seeds, and report the average performance, the standard deviation and the performance range (min, max).

Machine Translation
The results on MT are summarized in the left half of Tab. 1. Firstly, all four advanced algorithms significantly outperform the MLE baseline. More importantly, both VAML and ERAC improve upon their direct baselines, RAML and AC, by a clear margin on average. The results suggest that both proposed algorithms successfully combine the benefits of a delicate credit assignment scheme and entropy regularization, achieving improved performance.

Image Captioning
The results on image captioning are shown in the right half of Tab. 1. Despite the similar overall trend, the improvement of VAML over RAML is smaller compared to that in MT. Meanwhile, the improvement from AC to ERAC becomes larger in comparison. We suspect this is due to the multi-reference nature of the MSCOCO dataset, where a larger entropy is preferred. As a result, the explicit entropy regularization in ERAC becomes immediately fruitful. On the other hand, with multiple references, it can be more difficult to learn a good oracle $Q^*$ (Eqn. (15)). Hence, the token-level target can be less accurate, resulting in a smaller improvement.

Comparison with Existing Work
To further evaluate the proposed algorithms, we compare ERAC and VAML with the large body of existing algorithms evaluated on IWSLT 2014. As a note of caution, previous works do not employ exactly the same architectures (e.g., number of layers, hidden size, attention type). Despite that, for VAML and ERAC, we use an architecture that is most similar to that of the majority of previous works, namely the one described in §6.1 with input feeding. Based on this setting, the comparison is summarized in Table 2.

Ablation Study
Due to the overall excellence of ERAC, we study the importance of its various components, hopefully offering a practical guide for readers. As the input feeding technique largely slows down training, we conduct the ablation based on the model variant without input feeding. Firstly, we study the importance of two techniques aimed at training stability, namely the target network and the smoothing technique (§4.2). Based on the MT task, we vary the update speed $\beta$ of the target critic and $\lambda_{\text{var}}$, which controls the strength of the smoothing term. The results are summarized in Tab. 3.
• Comparing the two rows of Tab. 3, the smoothing technique consistently leads to performance improvement across all values of $\beta$. In fact, removing the smoothing objective often causes training to diverge, especially when $\beta = 0.01$ and $1$. Interestingly, we find the divergence does not happen if we update the target network a little faster ($\beta = 0.1$) or quite slowly ($\beta = 0.001$).
• In addition, even with the smoothing technique, the target network is still necessary. When the target network is not used ($\beta = 1$), the performance drops below the MLE baseline. However, as long as a target network is employed to ensure training stability, the specific choice of the update rate does not matter much. Empirically, a slower update rate ($\beta = 0.001$) seems to yield the best result.
Next, we investigate the effect of enforcing different levels of entropy by varying the entropy hyper-parameter $\tau$. As shown in Fig. 1, there is always a sweet spot for the level of entropy. On the one hand, imposing an overly strong entropy regularization can easily cause the actor to diverge. Specifically, the model diverges when $\tau$ reaches 0.03 on the image captioning task or 0.06 on the machine translation task. On the other hand, as we decrease $\tau$ from the best value towards 0, the performance monotonically decreases as well. This observation further verifies the effectiveness of entropy regularization in ERAC, which matches our theoretical analysis well.
Finally, as discussed in §4.2, ERAC takes the effect of future entropy into consideration, and is thus different from simply adding an entropy term to the standard policy gradient as in A3C (Mnih et al., 2016). To verify the importance of explicitly modeling the entropy from future steps, we compare ERAC with a variant that applies the entropy regularization only to the actor but not to the critic. In other words, $\tau$ is set to 0 when performing policy evaluation according to Eqn. (19), while the $\tau$ for the entropy gradient in Eqn. (20) remains. The comparison result, based on 9 runs on the test set of IWSLT 2014, is shown in Table 4. As we can see, simply adding a local entropy gradient does not even improve upon AC. This further verifies the difference between ERAC and A3C, and shows the importance of taking future entropy into consideration.

[Table 4: mean and max test BLEU on IWSLT 2014 for each algorithm (columns: Algorithm, Mean, Max).]

Discussion
In this work, motivated by the intriguing connection between the token-level RAML and entropy-regularized RL, we propose two algorithms for neural sequence prediction. Despite the distinct training procedures, both algorithms combine the idea of fine-grained credit assignment with entropy regularization, leading to positive empirical results. However, many problems remain wide open. In particular, the oracle Q-function $Q_\phi$ we obtain is far from perfect. We believe the ground-truth reference contains sufficient information for such an oracle, and the current bottleneck lies in the RL algorithm. Given the numerous potential applications of such an oracle, we believe improving its accuracy will be a promising future direction.

A Proofs
A.1 Main Proofs
Proposition 1. For any ground-truth pair $(x^*, y^*)$, $P_{Q_R}$ and $Q_R$ satisfy the following marginal match condition and terminal condition:
$$\prod_{t=1}^{|y|} P_{Q_R}(y_t \mid y_1^{t-1}) = P_R(y), \quad \forall y \in \mathcal{Y}, \quad (21)$$
$$Q_R(y_1^{t-1}, \text{eos}) = R(y_1^{t-1} + \text{eos}) - R(y_1^{t-1}), \quad (22)$$
if and only if for any $y \in \mathcal{Y}$,
$$Q_R(y_1^{t-1}, y_t) = \begin{cases} R(y_1^{t}) - R(y_1^{t-1}) + \tau \log \sum_{w \in \mathcal{W}} \exp\left(Q_R(y_1^{t}, w)/\tau\right), & t < |y|, \\ R(y_1^{t-1} + \text{eos}) - R(y_1^{t-1}), & t = |y|. \end{cases} \quad (23)$$
Proof. To avoid clutter, we drop the dependency on $x^*$ and $y^*$. The following proof holds for each possible pair $(x^*, y^*)$. Firstly, it is easy to see that the terminal condition in Eqn. (22) exactly corresponds to the $t = |y|$ case of Eqn. (23), since $y_t = \text{eos}$ for $y \in \mathcal{Y}$. So, we will focus on the non-terminal case next.
Sufficiency. For convenience, define $V_R(y_1^t) = \tau \log \sum_{w \in \mathcal{W}} \exp\left(Q_R(y_1^t, w)/\tau\right)$, so that $P_{Q_R}(y_t \mid y_1^{t-1}) = \exp\left(\left(Q_R(y_1^{t-1}, y_t) - V_R(y_1^{t-1})\right)/\tau\right)$. Suppose Eqn. (23) is true. Then for any $y \in \mathcal{Y}$, substituting Eqn. (23) into the product and telescoping both the pay-off terms and the $V_R$ terms,
$$\prod_{t=1}^{|y|} P_{Q_R}(y_t \mid y_1^{t-1}) = \exp\left( \frac{1}{\tau} \sum_{t=1}^{|y|} \left[ Q_R(y_1^{t-1}, y_t) - V_R(y_1^{t-1}) \right] \right) = \exp\left( \left( R(y) - R(\emptyset) - V_R(\emptyset) \right)/\tau \right),$$
where $V_R(\emptyset)$ denotes $V_R(y_1^t)$ when $t = 0$ and $y_1^t$ is the empty sequence, and $R(\emptyset)$ is a constant independent of $y$. Since $P_{Q_R}(y)$ is a valid distribution by construction, we have
$$\sum_{y \in \mathcal{Y}} \exp\left( \left( R(y) - R(\emptyset) - V_R(\emptyset) \right)/\tau \right) = 1 \;\Longrightarrow\; V_R(\emptyset) = \tau \log \sum_{y' \in \mathcal{Y}} \exp\left(R(y')/\tau\right) - R(\emptyset).$$
Hence,
$$\prod_{t=1}^{|y|} P_{Q_R}(y_t \mid y_1^{t-1}) = \frac{\exp\left(R(y)/\tau\right)}{\sum_{y' \in \mathcal{Y}} \exp\left(R(y')/\tau\right)} = P_R(y),$$
which satisfies the marginal match requirement.
Necessity. Now, we show that the specific formulation of $Q_R$ (Eqn. (23)) is also a necessary condition of the marginal match condition (Eqn. (21)). The token-level target distribution can be simplified as $P_{Q_R}(y_t \mid y_1^{t-1}) = \exp\left(\left(Q_R(y_1^{t-1}, y_t) - V_R(y_1^{t-1})\right)/\tau\right)$. Suppose Eqn. (21) is true. For any $y \in \mathcal{Y}^-$ and $t \leq |y|$, define $y' = y_1^{t} + \text{eos}$ and $y'' = y_1^{t-1} + \text{eos}$. Obviously, $y', y'' \in \mathcal{Y}$. Also, by Eqn. (21),
$$P_R(y') = P_{Q_R}(\text{eos} \mid y_1^{t}) \times P_{Q_R}(y_t \mid y_1^{t-1}) \times \prod_{s=1}^{t-1} P_{Q_R}(y_s \mid y_1^{s-1}), \qquad P_R(y'') = P_{Q_R}(\text{eos} \mid y_1^{t-1}) \times \prod_{s=1}^{t-1} P_{Q_R}(y_s \mid y_1^{s-1}).$$
Then, consider the ratio
$$\frac{P_R(y')}{P_R(y'')} = \exp\left( \left( R(y_1^{t}) - R(y_1^{t-1}) \right)/\tau \right) = \frac{P_{Q_R}(\text{eos} \mid y_1^{t}) \, P_{Q_R}(y_t \mid y_1^{t-1})}{P_{Q_R}(\text{eos} \mid y_1^{t-1})},$$
where the first equality uses the definition of $P_R$ together with the pay-off extension $R(\hat{y}) = R(\hat{y} + \text{eos})$ for partial sequences. Writing each conditional in its Boltzmann form and solving for $Q_R(y_1^{t-1}, y_t)$ gives
$$Q_R(y_1^{t-1}, y_t) = R(y_1^{t}) - R(y_1^{t-1}) + V_R(y_1^{t}) - Q_R(y_1^{t}, \text{eos}) + Q_R(y_1^{t-1}, \text{eos}).$$
Now, by the terminal condition (Eqn. (22)) together with the pay-off extension, both eos terms vanish, so we essentially have
$$Q_R(y_1^{t-1}, y_t) = R(y_1^{t}) - R(y_1^{t-1}) + \tau \log \sum_{w \in \mathcal{W}} \exp\left(Q_R(y_1^{t}, w)/\tau\right),$$
which completes the proof.
Corollary 1. Please refer to §3.2 for the Corollary.
Proof. Similarly, we drop the dependency on $x^*$ and $y^*$ to avoid clutter. We first prove the equivalence of $Q^*(y_1^{t-1}, y_t)$ with $Q_R(y_1^{t-1}, y_t)$ by backward induction on the prefix length. In $M_R$, a state is a prefix $y_1^{t-1}$, an action $a \in \mathcal{W}$ appends a symbol via deterministic concatenation, the reward is the incremental pay-off (Eqn. (13)), $\gamma = 1$ and $\alpha = \tau$. Two cases need to be distinguished. For the first case, where emitting a non-eos symbol leads to the next state $y_1^t$, the optimal soft Bellman equations (10) and (11) instantiate to exactly Eqn. (14) and (15), so it directly follows that $Q^*$ satisfies the same recursion as $Q_R$. For the second case, where either eos is emitted or the length bound $T$ is reached so that only eos is allowed to be generated, the episode terminates and $Q^*$ equals the immediate reward, i.e., the incremental pay-off of producing eos, matching the terminal case; moreover, when only eos is allowed, the target distribution $P_{Q_R}$ should be a single-point distribution at eos, which is equivalent to restricting the action set at that state to $\{\text{eos}\}$, exactly as enforced in $M_R$. This proves the second case. Combining the two cases, it concludes that $Q^*(y_1^{t-1}, a) = Q_R(y_1^{t-1}, a)$, $\forall y \in \mathcal{Y}, a \in \mathcal{W}$. Consequently, $V^* = V_R$ by Eqn. (10) and (14), and the optimal policy $\pi^*$ in Eqn. (12) coincides with the token-level target distribution $P_{Q_R}$ in Eqn. (3).
B.1 RAML
In RAML, we want to optimize the cross entropy $\mathrm{CE}\left(P_R(Y \mid x^*, y^*) \,\|\, P_\theta(Y \mid x^*)\right)$. As discussed in §2.1, directly sampling from the exponentiated pay-off distribution $P_R(Y \mid x^*, y^*)$ is impractical. Hence, normalized importance sampling has been exploited in previous work (Norouzi et al., 2016; Ma et al., 2017). Define the proposal distribution to be $P_S(Y \mid x^*, y^*)$. Then, the objective can be rewritten as
$$\mathrm{CE}\left(P_R \,\|\, P_\theta\right) = -\mathbb{E}_{y \sim P_S}\left[ \frac{P_R(y \mid x^*, y^*)}{P_S(y \mid x^*, y^*)} \log P_\theta(y \mid x^*) \right] \approx -\sum_{i=1}^{M} \frac{w(y^{(i)}, y^*)}{\sum_{j=1}^{M} w(y^{(j)}, y^*)} \log P_\theta(y^{(i)} \mid x^*),$$
where $w(y, y^*) = \frac{\exp\left(R(y, y^*)/\tau\right)}{\tilde{P}_S(y \mid x^*, y^*)}$ is the unnormalized importance weight, $\tilde{P}_S$ denotes the unnormalized probability of $P_S = \tilde{P}_S / Z$, $M$ is the number of samples used, and $y^{(i)}$ is the $i$-th sample drawn from the proposal distribution $P_S(Y \mid x^*, y^*)$.
With importance sampling, the problem becomes what proposal distribution to use. In the original work (Norouzi et al., 2016), a proposal distribution defined based on the Hamming distance to the reference is used. Ma et al. (2017) find that it suffices to perform N-gram replacement on the reference sentence. Specifically, $P_S(Y \mid x^*, y^*)$ can be a uniform distribution defined on the set $\mathcal{Y}_{\text{ngram}}$, where $\mathcal{Y}_{\text{ngram}}$ is obtained by randomly replacing an $n$-gram of $y^*$ ($n \leq 4$).
In this work, we adopt the simple $n$-gram replacement distribution, denoted as $P_{\text{ngram}}(Y \mid x^*, y^*)$. Since the proposal is uniform over the sampled candidates, the unnormalized weight reduces to $w(y, y^*) \propto \exp\left(R(y, y^*)/\tau\right)$, which simplifies the RAML objective into
$$-\sum_{i=1}^{M} \frac{\exp\left(R(y^{(i)}, y^*)/\tau\right)}{\sum_{j=1}^{M} \exp\left(R(y^{(j)}, y^*)/\tau\right)} \log P_\theta(y^{(i)} \mid x^*), \qquad y^{(i)} \sim P_{\text{ngram}}(Y \mid x^*, y^*).$$
Following Ma et al. (2017), we make sure the reference sequence is always among the $M$ samples used.
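A minimal sketch of this sampling scheme is given below; the helper names, the toy overlap reward, and the tiny vocabulary are illustrative assumptions, not the paper's implementation. With the uniform n-gram replacement proposal, the normalized weights reduce to a softmax of the rewards over the drawn candidates, and the reference itself is kept in the sample set.

```python
import math
import random

def ngram_replace(reference, vocab, max_n=4):
    """Draw one sample from the n-gram replacement proposal P_ngram(Y | x*, y*):
    pick a span of length n <= 4 in the reference and replace it with random tokens."""
    y = list(reference)
    n = random.randint(1, min(max_n, len(y)))
    start = random.randint(0, len(y) - n)
    y[start:start + n] = [random.choice(vocab) for _ in range(n)]
    return y

def raml_sample_weights(samples, reference, reward_fn, tau=0.4):
    """Normalized importance weights w_i / sum_j w_j with w = exp(R(y, y*)/tau);
    under the uniform n-gram proposal the proposal term cancels out."""
    ws = [math.exp(reward_fn(s, reference) / tau) for s in samples]
    z = sum(ws)
    return [w / z for w in ws]

reference = "the cat sat on the mat".split()
vocab = ["the", "a", "cat", "dog", "sat", "on", "mat", "rug"]
# keep the reference itself among the M samples, then add perturbed candidates
samples = [list(reference)] + [ngram_replace(reference, vocab) for _ in range(5)]
overlap = lambda y, ref: sum(1.0 for a, b in zip(y, ref) if a == b)  # toy reward
print(raml_sample_weights(samples, reference, overlap))
```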

B.2 VAML
As discussed in §4, the VAML training consists of two phases: • In the first phase, Soft Q-Learning is used to train Q φ based on Eqn. (16). Since Soft Q-Learning accepts off-policy trajectories, in this work, we use two types of off-policy sequences: