Pervasive Attention: 2D Convolutional Neural Networks for Sequence-to-Sequence Prediction

Current state-of-the-art machine translation systems are based on encoder-decoder architectures that first encode the input sequence and then generate an output sequence based on the input encoding. Both are interfaced with an attention mechanism that recombines a fixed encoding of the source tokens based on the decoder state. We propose an alternative approach which instead relies on a single 2D convolutional neural network across both sequences. Each layer of our network re-codes source tokens on the basis of the output sequence produced so far. Attention-like properties are therefore pervasive throughout the network. Our model yields excellent results, outperforming state-of-the-art encoder-decoder systems, while being conceptually simpler and having fewer parameters.


Introduction
Deep neural networks have made a profound impact on natural language processing technology in general, and machine translation in particular (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014; Jean et al., 2015; LeCun et al., 2015). Machine translation (MT) can be seen as a sequence-to-sequence prediction problem, where the source and target sequences are of different and variable length. Current state-of-the-art approaches are based on encoder-decoder architectures (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014; Bahdanau et al., 2015). The encoder "reads" the variable-length source sequence and maps it into a vector representation. The decoder takes this vector as input and "writes" the target sequence, updating its state each step with the most recent word that it generated. The basic encoder-decoder model is generally equipped with an attention model (Bahdanau et al., 2015), which repetitively re-accesses the source sequence during the decoding process. Given the current state of the decoder, a probability distribution over the elements in the source sequence is computed, which is then used to select or aggregate features of these elements into a single "context" vector that is used by the decoder. Rather than relying on the global representation of the source sequence, the attention mechanism allows the decoder to "look back" into the source sequence and focus on salient positions. Besides this inductive bias, the attention mechanism bypasses the problem of vanishing gradients that most recurrent architectures encounter.
However, current attention mechanisms have limited modeling ability: they generally compute a simple weighted sum of the source representations (Bahdanau et al., 2015; Luong et al., 2015), where the weights are the result of a shallow matching between source and target elements. The attention module re-combines the same source token codes and is unable to re-encode or re-interpret the source sequence while decoding.
To address these limitations, we propose an alternative neural MT architecture, based on deep 2D convolutional neural networks (CNNs). The product space of the positions in the source and target sequences defines the 2D grid over which the network is defined. The convolutional filters are masked to prohibit accessing information derived from future tokens in the target sequence, obtaining an autoregressive model akin to generative models for images and audio waveforms (Oord et al., 2016a,b). See Figure 1 for an illustration.
This approach allows us to learn deep feature hierarchies based on a stack of 2D convolutional layers, and benefit from parallel computation during training. Every layer of our network computes features of the source tokens, based on the target sequence produced so far, and uses these to predict the next output token. Our model therefore has attention-like capabilities by construction, that are pervasive throughout the layers of the network, rather than using an "add-on" attention model.
We validate our model with experiments on the IWSLT 2014 German-to-English (De-En) and English-to-German (En-De) tasks. We improve on state-of-the-art encoder-decoder models with attention, while being conceptually simpler and having fewer parameters.
In the next section we will discuss related work, before presenting our approach in detail in Section 3. We present our experimental evaluation results in Section 4, and conclude in Section 5.

Related work
The predominant neural architectures in machine translation are recurrent encoder-decoder networks (Graves, 2012; Sutskever et al., 2014; Cho et al., 2014). The encoder is a recurrent neural network (RNN) based on gated recurrent units (Hochreiter and Schmidhuber, 1997; Cho et al., 2014) that maps the input sequence into a vector representation. Often a bi-directional RNN (Schuster and Paliwal, 1997) is used, which consists of two RNNs that process the input in opposite directions, and the final states of both RNNs are concatenated as the input encoding. The decoder consists of a second RNN, which takes the input encoding and sequentially samples the output sequence one token at a time whilst updating its state.
While convolutional networks are best known for their use in visual recognition models, they have also proven effective for autoregressive generative modeling of images and audio (Oord et al., 2016a; Salimans et al., 2017; Reed et al., 2017; Oord et al., 2016c). Recent works have also introduced convolutional networks to natural language processing. The first convolutional approaches to encoding variable-length sequences consist of stacking word vectors, applying 1D convolutions, and then aggregating with a max-pooling operator over time (Collobert and Weston, 2008; Kalchbrenner et al., 2014; Kim, 2014). For sequence generation, the works of Ranzato et al. (2016), Bahdanau et al. (2017), and Gehring et al. (2017a) mix a convolutional encoder with an RNN decoder. The first entirely convolutional encoder-decoder models were introduced by Kalchbrenner et al. (2016b), but they did not improve over state-of-the-art recurrent architectures. Gehring et al. (2017b) outperformed deep LSTMs for machine translation using 1D CNNs with gated linear units (Meng et al., 2015; Oord et al., 2016c; Dauphin et al., 2017) in both the encoder and decoder modules.
Such CNN-based models differ from their RNN-based counterparts in that temporal connections are placed between layers of the network, rather than within layers. See Figure 2 for a conceptual illustration. This apparently small difference in connectivity has two important consequences. First, it makes the field of view grow linearly across layers in the convolutional network, while it is unbounded within layers in the recurrent network. Second, while the activations in the RNN can only be computed in a sequential manner, they can be computed in parallel across the temporal dimension in the convolutional case.
In all the recurrent or convolutional models mentioned above, the input and output sequences are each processed separately as a one-dimensional sequence by the encoder and decoder, respectively. Attention mechanisms (Bahdanau et al., 2015; Luong et al., 2015; Xu et al., 2015) were introduced as an interface between the encoder and decoder modules. During decoding, the attention model finds which hidden states from the source encoding are the most salient for generating the next target token. This is achieved by computing a "context vector" which, in its most basic form, is a weighted average of the source features. The weights of the summation are predicted by a small neural network that scores these features conditioned on the current decoder state. Vaswani et al. (2017) propose an architecture relying entirely on attention. Positional input coding together with self-attention (Parikh et al., 2016; Lin et al., 2017) replaces recurrent and convolutional layers. Huang et al. (2018) use an attention-like gating mechanism to alleviate an assumption of monotonic alignment in the phrase-based translation model of Wang et al. (2017). Deng et al. (2018) treat the sentence alignment as a latent variable which they infer using a variational inference network during training to optimize a variational lower bound on the log-likelihood.
Beyond uni-dimensional encoding/decoding. The idea of building a 2D grid from parallel sequences (as in Figure 1) is used in different NLP tasks, especially for scoring parallel texts. This includes works on semantic matching, paraphrase identification, and machine translation. In the ARC-II model of Hu et al. (2014), 1D convolutions are applied to each sequence separately, before a series of 2D convolutions and max-poolings followed by an MLP estimates the matching score. Interestingly, they highlighted the desirable property of letting the sequences 'meet' before their representations mature. He and Lin (2016) and Wan et al. (2016) first encode the sequences with Bi-LSTMs, then evaluate pairwise similarities between the words of the two sequences to build an interaction grid. While He and Lin (2016) process the grid with a two-dimensional CNN, Wan et al. (2016) directly use k-max pooling to aggregate and then score the pair. Similarly, for sequence alignment, Levy and Wolf (2017) use LSTM hidden states as token representations and, similar to our work, concatenate pairwise representations and feed the resulting grid to a 2D convolutional network followed by a soft-max to estimate soft-alignment probabilities. Recently in question answering, Raison et al. (2018) weaved two Bi-LSTMs, one along the context dimension and the other along the question dimension, in order to identify a response span in the context.
More related to our work on machine translation, Kalchbrenner et al. (2016a) proposed the 're-encoder' network, where a Grid LSTM processes both sequences along its first and second dimensions, allowing the model to re-encode the source sequence as it advances along the target dimension. They also observed that such a structure implements an implicit form of attention. Wu et al. (2017) used a CNN over the 2D source-target representation, but only as a discriminator in an adversarial training setup. Similar to semantic matching models, they do not use masked convolutions, since their CNN is used to predict whether a given source-target pair is a human or machine translation. Concurrently with our work, Bahar et al. (2018) used a 2D LSTM layer to jointly process the source and target sequences with a similar two-dimensional layout.

Translation by 2D Convolution
In this section we present our 2D CNN translation model in detail.
Input source-target tensor. Given the source and target pair (s, t) of lengths |s| and |t| respectively, we first embed the tokens in d_s and d_t dimensional spaces via look-up tables. The word embeddings {x_1, ..., x_|s|} and {y_1, ..., y_|t|} are then concatenated to form a 3D tensor X ∈ R^{|t| × |s| × f_0}, with f_0 = d_t + d_s, where

    X_ij = [y_i ; x_j].    (1)

This joint unigram encoding is the input to our convolutional network.
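To make the construction concrete, the following sketch builds the joint input tensor of Eq. (1) in PyTorch by broadcasting the two embedding sequences over the |t| × |s| grid. The tensor layout, variable names and vocabulary sizes are illustrative assumptions and are not taken from the released implementation.

    import torch
    import torch.nn as nn

    d_s, d_t = 128, 128                    # source / target embedding sizes
    src_embed = nn.Embedding(12000, d_s)   # hypothetical vocabulary sizes
    tgt_embed = nn.Embedding(8800, d_t)

    def joint_input_tensor(src_tokens, tgt_tokens):
        # src_tokens: (batch, |s|) token ids, tgt_tokens: (batch, |t|) token ids
        x = src_embed(src_tokens)          # (batch, |s|, d_s)
        y = tgt_embed(tgt_tokens)          # (batch, |t|, d_t)
        # broadcast both sequences over the |t| x |s| grid and concatenate channels
        x = x.unsqueeze(1).expand(-1, y.size(1), -1, -1)   # (batch, |t|, |s|, d_s)
        y = y.unsqueeze(2).expand(-1, -1, x.size(2), -1)   # (batch, |t|, |s|, d_t)
        return torch.cat([y, x], dim=-1)   # (batch, |t|, |s|, d_t + d_s) = f_0 channels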
Convolutional layers. We use the densely connected convolutional network (DenseNet) architecture of Huang et al. (2017), in which the feature maps produced by each layer are concatenated with those of all preceding layers along the channel dimension. Each layer first applies batch normalization and a ReLU (Nair and Hinton, 2010) non-linearity. To reduce the computation cost, each layer then computes 4g channels using a 1×1 convolution from the f_0 + (l − 1)g input channels of layer l ∈ {1, ..., L}, where g is the growth rate of the network. This is followed by a second batch normalization and ReLU non-linearity. The second convolution uses k×k kernels that are masked as illustrated in Figure 1, so that only the current and preceding target positions are accessed (an effective size of k × ⌈k/2⌉), and generates the g output feature maps to which we apply dropout (Srivastava et al., 2014). The architecture of the densely connected network is illustrated in Figure 3.
We optionally use gated linear units (Dauphin et al., 2017) in both convolutions; these double the number of output channels, and we use one half to gate the other half.
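The layer structure described above can be sketched as follows. This is a minimal illustration, not the released implementation: the masking is realized here by causal padding along the target axis (one of several equivalent options), and the optional gated linear units are omitted.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MaskedDenseLayer(nn.Module):
        # One densely connected layer: BN-ReLU-1x1 conv (bottleneck to 4g channels),
        # then BN-ReLU and a convolution that is causal along the target axis.
        def __init__(self, in_channels, growth_rate=32, k=3, dropout=0.2):
            super().__init__()
            self.bn1 = nn.BatchNorm2d(in_channels)
            self.conv1 = nn.Conv2d(in_channels, 4 * growth_rate, kernel_size=1)
            self.bn2 = nn.BatchNorm2d(4 * growth_rate)
            self.k_t = (k + 1) // 2      # target rows kept after masking future positions
            self.k_s = k                 # full filter width along the source axis
            self.conv2 = nn.Conv2d(4 * growth_rate, growth_rate,
                                   kernel_size=(self.k_t, self.k_s))
            self.dropout = nn.Dropout(dropout)

        def forward(self, h):
            # h: (batch, channels, |t|, |s|); the row index grows with the target position
            out = self.conv1(F.relu(self.bn1(h)))
            out = F.relu(self.bn2(out))
            # pad the target axis only on the "past" side (causal), the source axis symmetrically
            out = F.pad(out, (self.k_s // 2, self.k_s // 2, self.k_t - 1, 0))
            out = self.dropout(self.conv2(out))
            return torch.cat([h, out], dim=1)   # dense connectivity: concatenate with the input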
Target sequence prediction. Starting from the initial f_0 feature maps, each layer l ∈ {1, ..., L} of our DenseNet produces a tensor H^l of size |t| × |s| × f_l, where f_l is the number of output channels of that layer. To compute a distribution over the tokens in the output vocabulary, we need to collapse the second dimension of the tensor, which is given by the variable length of the input sequence, to retrieve a unique encoding for each target position.
The simplest aggregation approach is to apply max-pooling over the input sequence to obtain a tensor H^pool ∈ R^{|t| × f_L}, i.e.

    H^pool_i(c) = max_{j ∈ {1, ..., |s|}} H^L_ij(c).    (2)
Alternatively, we can use average-pooling over the input sequence:

    H^pool_i = (1 / √|s|) Σ_{j=1}^{|s|} H^L_ij.    (3)

The scaling with the inverse square root of the source length acts as a variance stabilization term, which we find to be more effective in practice than a simple average.
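A minimal sketch of the two pooling operators of Eqs. (2) and (3), assuming the last-layer feature tensor is laid out as (batch, |t|, |s|, f_L):

    def max_pool_source(h):
        # Eq. (2): channel-wise max over source positions -> (batch, |t|, f_L)
        return h.max(dim=2).values

    def scaled_sum_pool_source(h):
        # Eq. (3): sum over source positions scaled by 1/sqrt(|s|) -> (batch, |t|, f_L)
        return h.sum(dim=2) / (h.size(2) ** 0.5)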
The pooled features are then transformed into predictions over the output vocabulary V by linearly mapping them with a matrix E ∈ R^{|V| × f_L} to the vocabulary dimension |V|, and then applying a soft-max. Thus the probability distribution over V for the i-th output token is obtained as

    p_i = SoftMax(E H^pool_i).    (4)

Alternatively, we can use E to project to dimension d_t, and then multiply with the target word embedding matrix used to define the input tensor. This reduces the number of parameters and generally improves the performance.
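The two output variants can be sketched as follows; the class and argument names are ours, and the weight-tying branch is our reading of the description above rather than the released code.

    import torch
    import torch.nn as nn

    class OutputLayer(nn.Module):
        def __init__(self, f_L, d_t, tgt_embedding, tie_weights=True):
            super().__init__()
            self.tie_weights = tie_weights
            if tie_weights:
                # project to d_t and reuse the target embedding matrix (variant of Eq. 4)
                self.proj = nn.Linear(f_L, d_t)
                self.embedding = tgt_embedding
            else:
                # direct projection to the vocabulary dimension (Eq. 4)
                self.proj = nn.Linear(f_L, tgt_embedding.num_embeddings)

        def forward(self, h_pool):
            # h_pool: (batch, |t|, f_L) pooled features
            logits = self.proj(h_pool)
            if self.tie_weights:
                logits = logits @ self.embedding.weight.t()   # (batch, |t|, |V|)
            return torch.log_softmax(logits, dim=-1)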
Implicit sentence alignment. For a given output token position i, the max-pooling operator of Eq. (2) partitions the f_L channels by assigning them across the source tokens j. Let us define

    B_ij = { c ∈ {1, ..., f_L} | j = argmax_{j'} H^L_{ij'}(c) }    (5)

as the channels assigned to source token j for output token i. The energy that enters into the soft-max to predict token w ∈ V for the i-th output position is given by

    e_iw = Σ_j Σ_{c ∈ B_ij} E_wc H^L_ij(c).    (6)

The total contribution of the j-th input token is thus given by

    α_ij = Σ_{c ∈ B_ij} E_wc H^L_ij(c),    (7)

where we dropped the dependence on w for simplicity. As we will show experimentally in the next section, by visualizing the values α_ij for the ground-truth output tokens, we can recover an implicit sentence alignment used by the model.
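Under the same tensor layout as above, the alignment weights of Eq. (7) can be extracted from a trained model roughly as follows; this is a sketch with names of our choosing, not the procedure of the released code.

    import torch

    def implicit_alignment(h, E, targets):
        # h: (|t|, |s|, f_L) last-layer features for one sentence pair
        # E: (|V|, f_L) output projection matrix, targets: (|t|,) ground-truth token ids
        pooled, argmax_src = h.max(dim=1)        # per-channel argmax over source positions
        contrib = E[targets].unsqueeze(1) * h    # (|t|, |s|, f_L): E_wc * H^L_ij(c)
        # keep only the channels that max-pooling assigned to source position j (the sets B_ij)
        mask = torch.zeros_like(h)
        mask.scatter_(1, argmax_src.unsqueeze(1), 1.0)
        alpha = (contrib * mask).sum(dim=-1)     # (|t|, |s|) alignment weights of Eq. (7)
        return alpha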
Self-attention. Besides pooling, we can collapse the source dimension of the feature tensor with an attention mechanism. This mechanism generates a tensor H^att that can be used instead of, or concatenated with, H^pool. We use the self-attention approach of Lin et al. (2017), which for output token i computes the attention vector ρ_i ∈ R^{|s|} from the activations H^L_i:

    ρ_i = SoftMax(H^L_i w + b),    (8)
    H^att_i = √|s| Σ_{j=1}^{|s|} ρ_ij H^L_ij,    (9)

where w ∈ R^{f_L} and b ∈ R are parameters of the attention mechanism. Scaling of attention vectors with the square root of the source length was also used by Gehring et al. (2017b), and we found it effective here as well as in the average-pooling case.
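A sketch of this self-attention aggregation, with Eqs. (8) and (9) implemented as reconstructed above; the module and parameter names are ours.

    import torch
    import torch.nn as nn

    class SelfAttentionPool(nn.Module):
        def __init__(self, f_L):
            super().__init__()
            self.score = nn.Linear(f_L, 1)   # holds w and b of Eq. (8)

        def forward(self, h):
            # h: (batch, |t|, |s|, f_L)
            rho = torch.softmax(self.score(h).squeeze(-1), dim=-1)   # (batch, |t|, |s|), Eq. (8)
            h_att = torch.einsum('bts,btsf->btf', rho, h)             # weighted average over source
            return h_att * (h.size(2) ** 0.5)                         # length scaling, Eq. (9)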

Experimental evaluation
In this section, we present our experimental setup, followed by quantitative results, qualitative examples of implicit sentence alignments from our model, and a comparison to the state of the art.

Experimental setup
Data and pre-processing. We experiment with the IWSLT 2014 bilingual dataset (Cettolo et al., 2014), which contains transcripts of TED talks aligned at the sentence level, and translate between German (De) and English (En) in both directions. Following the setup of Edunov et al. (2018), sentences longer than 175 words and pairs with a length ratio exceeding 1.5 were removed from the original data. There are 160K+7K training sentence pairs, 7K of which are held out and used for validation/development. We report results on a test set of 6,578 pairs obtained by concatenating TED.dev2010, TEDX.dev2012 and TED.tst2010-2012. We tokenized and lower-cased all data using the standard scripts from the Moses toolkit (Koehn et al., 2007).
For open-vocabulary translation, we segment sequences using byte pair encoding (Sennrich et al., 2016) with 14K merge operations, following two approaches. The first (V1), similar to Edunov et al. (2018) and Deng et al. (2018), is a joint encoding, i.e. applied to the concatenation of the source and target texts. This results in German and English vocabularies of around 12K and 8.8K types, respectively. The second approach (V2) encodes each language independently, resulting in German and English vocabularies of 13.3K and 13.8K types, respectively.
Implementation details. Unless stated otherwise, we use DenseNets with masked convolutional filters of size 5×3, as given by the light blue area in Figure 1. To train our models for the ablation study, we use maximum likelihood estimation (MLE) with Adam (β_1 = 0.9, β_2 = 0.999, ε = 1e−8), starting with a learning rate of 5e−4 that we scale by a factor of 0.8 if no improvement is noticed on the validation loss after three evaluations; we evaluate every 8K updates. For faster training, and because the computational requirements increase from O(|s| + |t|) for encoder-decoder models to O(|s| · |t|) for our model, we only read sequences up to 80 positions. We also downsample the initial grid channels by half to reduce the number of input channels to every dense block, thus requiring less memory. After training all models for 40 epochs, the best-performing model on the validation set is used to decode with a beam search of width 5. We measure translation quality using the BLEU metric (Papineni et al., 2002).
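In PyTorch, this optimization setup roughly corresponds to the following sketch. Treating "no improvement after three evaluations" as a patience of three is our interpretation, and model stands for the network defined in Section 3.

    import torch

    def build_optimizer(model):
        optimizer = torch.optim.Adam(model.parameters(), lr=5e-4,
                                     betas=(0.9, 0.999), eps=1e-8)
        # scale the learning rate by 0.8 when the validation loss stops improving
        scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
            optimizer, mode='min', factor=0.8, patience=3)
        # scheduler.step(val_loss) is then called once every 8K updates
        return optimizer, scheduler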
For the Bi-LSTM encoder-decoder, the encoder is a single-layer bidirectional LSTM with input embeddings of size 128 and a hidden state of size 256 (128 in each direction). The decoder is a single-layer LSTM with the same input size and a hidden size of 256; the target input embeddings are also used in the pre-softmax projection. For regularization, we apply dropout with rate 0.2 to the inputs of both the encoder and decoder, and to the output of the decoder prior to the softmax. As in Bahdanau et al. (2015), we refer to this model as RNNsearch.
The ConvS2S model we trained has embeddings of dimension 256, a 16-layer encoder and a 12-layer decoder. Each convolution uses 3×1 filters and is followed by a gated linear unit with a total of 2 × 256 channels. Residual connections link the input of a convolutional block to its output. We first trained the default architecture for this dataset as suggested in FairSeq (Gehring et al., 2017b), which has only 4 layers in the encoder and 3 in the decoder, but achieved better results with the deeper version described above.
The model is trained with label-smoothed cross-entropy (ε = 0.1) using Nesterov accelerated gradient with a momentum of 0.99 and an initial learning rate of 0.25, decayed by a factor of 0.1 every epoch. ConvS2S is also regularized with a dropout rate of 0.2. For the Transformer model, we use token embeddings of dimension 512, and the encoder and decoder have 6 layers and 4 attention heads. For the inner layer in the per-position feed-forward network we use d_ff = 1024. We optimize the label-smoothed (ε = 0.1) cross-entropy loss with Adam (β_1 = 0.9, β_2 = 0.98, ε = 1e−8) (Kingma and Ba, 2015). The learning rate starts from 1e−7 and is increased during 4,000 warm-up steps. Afterwards, the learning rate is set to 5e−4 and follows an inverse square-root schedule (Vaswani et al., 2017). For the Transformer we set the dropout to 0.3.
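The Transformer learning-rate schedule described here can be sketched as follows; the linear form of the warm-up is our assumption, and the exact implementation in the toolkit we used may differ slightly.

    import math

    def transformer_lr(step, warmup=4000, init_lr=1e-7, peak_lr=5e-4):
        # warm up from init_lr to peak_lr over `warmup` steps,
        # then decay proportionally to the inverse square root of the step number
        if step < warmup:
            return init_lr + (peak_lr - init_lr) * step / warmup
        return peak_lr * math.sqrt(warmup / step)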

Experimental results
Architecture evaluation. In this section we explore the impact of several parameters of our model: the token embedding dimension, depth, growth rate and filter sizes. We also evaluate different aggregation mechanisms across the source dimension: max-pooling, average-pooling, and attention.
In each chosen setting, we train five models with different initializations and report the mean and standard deviation of the validation set BLEU scores. We also state the number of parameters of each model and the computational cost of training, estimated in a similar way as Vaswani et al. (2017), based on the wall-clock time of training and the GPU single-precision specs.

Table 1: BLEU scores of our model (L = 24, g = 32, d_s = d_t = 128) on the validation set with different pooling operators and using gated convolutional units.
In Table 1 we see that using max-pooling instead of average-pooling across the source dimension increases performance by around 2.3 BLEU points. Scaling the averaged representation with √|s|, as in Eq. (3), helps, but it is still largely outperformed by max-pooling. Adding gated linear units on top of each convolutional layer does not improve the BLEU scores, but increases the variance due to the additional parameters. Stand-alone self-attention, i.e. weighted average-pooling, is slightly better than uniform average-pooling, but it is still outperformed by max-pooling. Concatenating the max-pooled features (Eq. (2)) with the representation obtained with self-attention (Eq. (9)) leads to a small increase in performance, from 33.25 to 33.29. In the remainder of our experiments we only use max-pooling for simplicity, unless stated otherwise.
In Figure 4 we consider the effect of the token embedding size, the growth rate of the network, and its depth. The token embedding size together with the growth rate g control the dimension of the final feature used for estimating the emission probability. We generally use the same embedding dimension for both languages, i.e. d = d_t = d_s; thus the final representation is of size f_L = 2d + gL.
In Figure 4 we see that a minimal dimension, in this case d = 128, is required for the model to be complex enough to capture the training data statistics. For embedding sizes between 128 and 256, the BLEU score slowly increases from 33 to 33.6. The depth of the network has a similar impact. Training deeper networks (from 4 to 24 layers) increases the BLEU score by about 5 points, and a saturation argument similar to the one for the growth rate can be made for networks with more than 24 layers.
The receptive field of our model is controlled by its depth and the filter size. In Table 2 we note that, at equivalent complexities, narrower receptive fields obtained with smaller filters and more layers are better than larger ones obtained with fewer layers.

Comparison to the state of the art. We compare our results to the state of the art in Table 3, for both directions German-English (De-En) and English-German (En-De). In this section, the parameters of our models are trained using label-smoothed cross-entropy (ε = 0.1), similarly to the ConvS2S and Transformer baselines. To successfully train our models with large embeddings (d = 512), we increase the dropout rate (p = 0.4) and normalize the initial 2D grid. For decoding we use a beam search of width 5, enhanced with length and coverage penalties (Wu et al., 2016).
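For reference, the length and coverage penalties of Wu et al. (2016) rescore a beam hypothesis roughly as follows; the penalty weights alpha and beta shown here are illustrative defaults, not the values used in our experiments.

    import math

    def gnmt_score(log_prob, hyp_len, attn_sums, alpha=0.6, beta=0.2):
        # log_prob: model log-probability of the hypothesis
        # hyp_len: length of the hypothesis
        # attn_sums: total attention mass received by each source position during decoding
        lp = ((5.0 + hyp_len) ** alpha) / ((5.0 + 1.0) ** alpha)          # length penalty
        cp = beta * sum(math.log(min(s, 1.0)) for s in attn_sums)          # coverage penalty
        return log_prob / lp + cp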
Our model has about the same number of parameters as RNNsearch (with the V1 vocabularies), yet improves performance by 3.88 BLEU points. It is also better than the recent work of Deng et al. (2018) on recurrent architectures with variational attention.
Our model outperforms its 1D convolutional counterpart (Gehring et al., 2017b) in both translation directions and is competitive with the Transformer (0.3 BLEU points behind) while having about 2 to 4 times fewer parameters.
Performance across sequence lengths. In Figure 5 we consider translation quality as a function of sentence length, and compare our model to RNNsearch, ConvS2S and the Transformer. Our model gives the best results across all sentence lengths, except for the longest ones, where ConvS2S and the Transformer are better. Overall, our model combines the strong performance of RNNsearch on short sentences with the good performance of ConvS2S and the Transformer on longer ones.

Implicit sentence alignments. Following the method described in Section 3, we illustrate in Figure 6 the implicit sentence alignments that the max-pooling operator produces in our model. For reference we also show the alignment produced by our model using self-attention. We see that with both max-pooling and attention, qualitatively similar implicit sentence alignments emerge. Notice in the first example how the max-pool model, when writing I've been working, looks at arbeite but also at seit, which indicates the past tense of the former. Also notice some cases of non-monotonic alignment. In the first example, for some time occurs at the end of the English sentence, but seit einiger zeit appears earlier in the German source. For the second example there is non-monotonic alignment around the negation at the start of the sentence. The first example illustrates the ability of the model to translate proper names by breaking them down into BPE units. In the second example the German word Karriereweg is broken into the four BPE units karri, er, e, weg. The first and the fourth are mainly used to produce the English a career, while for the subsequent path the model looks at weg. Finally, we can observe an interesting pattern in the alignment map for several phrases across the three examples. A rough lower-triangular pattern is observed for the English phrases for some time, and it's fantastic, and it's not, a little step, and in that direction. In all these cases the phrase seems to be decoded as a unit, where features are first taken across the entire corresponding source phrase, and progressively from the part of the source phrase that remains to be decoded.

Conclusion
We presented a novel neural machine translation architecture that departs from the encoder-decoder paradigm. Our model jointly encodes the source and target sequences into a deep feature hierarchy in which the source tokens are embedded in the context of a partial target sequence. Max-pooling over this joint encoding along the source dimension maps the features to a prediction for the next target token. The model is implemented as a 2D CNN based on DenseNet, with masked convolutions to ensure a proper autoregressive factorization of the conditional probabilities.
Since each layer of our model re-encodes the input tokens in the context of the target sequence generated so far, the model has attention-like properties in every layer of the network by construction. Adding an explicit self-attention module therefore has a very limited, but positive, effect. Nevertheless, the max-pooling operator in our model generates implicit sentence alignments that are qualitatively similar to the ones generated by attention mechanisms. We evaluate our model on the IWSLT'14 dataset, translating German to English and vice versa. We obtain excellent BLEU scores that compare favorably with the state of the art, while using a conceptually simpler model with fewer parameters.
We hope that our alternative joint source-target encoding sparks interest in other alternatives to the encoder-decoder model. In the future, we plan to explore hybrid approaches in which the input to our joint encoding model is not provided by token-embedding vectors, but by the output of 1D source and target embedding networks, e.g. (bi-)LSTM or 1D convolutional. We also want to explore how our model can be used to translate across multiple language pairs. Our PyTorch-based implementation is available at https://github.com/elbayadm/attn2d.

Figure 1: Convolutional layers in our model use masked 3×3 filters so that features are only computed from previous output symbols. Illustration of the receptive fields after one (dark blue) and two layers (light blue), together with the masked part of the field of view of a normal 3×3 filter (gray).

Figure 4: The impact of token embedding size, number of layers (L), and growth rate (g) on the validation set BLEU scores. In blue, the results with beam search (width 5); in gray, with greedy decoding. The bars show the total number of parameters (in millions) for each setup.

Figure 6: Implicit BPE token-level alignments produced by our Pervasive Attention model. For the max-pooling aggregation we visualize α obtained with Eq. (7), and for self-attention the weights ρ of Eq. (8).

Table 2: Performance of our model (g = 32, d_s = d_t = 128) for different filter sizes k and depths L on the validation set.

Table 3: Comparison to state-of-the-art results on IWSLT German-English translation. (*): results obtained using our implementation. (**): results obtained using FairSeq.