WMT 2016 Multimodal Translation System Description based on Bidirectional Recurrent Neural Networks with Double-Embeddings

Bidirectional Recurrent Neural Networks (BiRNNs) have shown outstanding results on sequence-to-sequence learning tasks. This architecture becomes especially interesting for the multimodal machine translation task, since BiRNNs can deal with images and text. In most translation systems, the same word embedding is fed to both BiRNN units. In this paper, we present several experiments to enhance a baseline sequence-to-sequence system (Elliott et al., 2015), for example, by using double embeddings. These embeddings are trained on the forward and backward directions of the input sequence. Our system is trained, validated and tested on the Multi30K dataset (Elliott et al., 2016) in the context of the WMT 2016 Multimodal Translation Task. The obtained results show that the double-embedding approach performs significantly better than the traditional single-embedding one.


Introduction
Sequence-to-sequence learning has become a common approach to translation problems (Sutskever et al., 2014). The basic idea consists in mapping the input sentence to a vector of fixed dimensionality with a Recurrent Neural Network (RNN) and then performing the reverse step to map the vector to the target sequence. From this new perspective, multimodal translation (Elliott et al., 2015) has become a feasible task. In particular, we are referring to the WMT 2016 multimodal task, which consists in translating English sentences into German, given the English sentence itself and the image that it describes. This paper describes our participation in this task using a translation scheme based on Bidirectional RNNs (BiRNNs), which allows combining information from both image and text.
In this paper, we take as baseline the system of Elliott et al. (2015) and focus on experimenting with the word embeddings and the encoding techniques.
The rest of the paper is organised as follows. Section 2 briefly describes related work on image captioning and machine translation. Section 3 gives details about the architecture of the multimodal translation system. Section 4 reports details on the experimental framework including the parameters of our model and the results obtained. Finally, Section 5 concludes and comments on further work.

Related work
Image captioning has gained interest in the community, and deep learning has been applied in this area. The two most common caption-related problems are caption generation and caption translation (Elliott et al., 2015).
Similarly, machine translation approaches based on neural networks (Sutskever et al., 2014) are competing with standard phrase-based systems (Koehn et al., 2003). Neural machine translation uses an encoder-decoder structure. The implementation of an attention-based mechanism (Bahdanau et al., 2015) has made it possible to achieve state-of-the-art results. The community is actively investigating this approach, and there have been enhancements related to addressing unknown words (Luong et al., 2015), integrating language modeling (Gülçehre et al., 2015), using character information in addition to words (Costa-jussà and Fonollosa, 2016) or even combining different languages (Firat et al., 2016), among others.

System description
This section describes the main architectures that have been tested to build the final system.

Baseline approach
The baseline system is an RNN model over word sequences (Elliott et al., 2015), which can use visual and linguistic modalities. The core model is an RNN over word sequences, trained to predict the next word in the sequence given the sequence so far. The input sequence is encoded as 1-of-K vectors, which are embedded into a high-dimensional space. Then, a unidirectional RNN is used. Finally, in the output layer, the softmax function is used to predict the next word. This model is extended to a multimodal language model, where sequence generation is conditioned on image features in addition to the previously seen words. The translation model simply adds features from the source language model, following work from (Sutskever et al., 2014) and calling the source language model the encoder and the target language model the decoder.
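As an illustration (this is not the authors' released code), a minimal Keras-style sketch of such a recurrent language model could look as follows. The vocabulary size, sequence length and layer widths are assumptions; word indices are fed to an Embedding layer, which is equivalent to multiplying a 1-of-K vector by an embedding matrix:

import tensorflow as tf

V, N = 10000, 40   # vocabulary size and maximum length (illustrative assumptions)

# Words enter as integer indices (equivalent to 1-of-K vectors), are embedded
# into a high-dimensional space, run through a unidirectional RNN, and the
# output layer applies softmax over the vocabulary to predict the next word.
inputs = tf.keras.Input(shape=(N,))
x = tf.keras.layers.Embedding(V, 256)(inputs)
x = tf.keras.layers.SimpleRNN(256, return_sequences=True)(x)
outputs = tf.keras.layers.Dense(V, activation="softmax")(x)

lm = tf.keras.Model(inputs, outputs)
lm.compile(optimizer="adam", loss="categorical_crossentropy")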

Sequence-to-sequence approach and enhancements
Inspired by the architecture presented in (Sutskever et al., 2014), we train a system based on the many-to-many encoder-decoder architecture. It accepts a sequence x_1, ..., x_N as input and returns a sequence y_1, ..., y_N, where N is the maximum sequence length allowed.
The architectures that we have tested start with a unidirectional encoder-decoder; then we use a bidirectional encoder-decoder, a bidirectional encoder-decoder with double embeddings, and a final architecture that accepts a combination of input text and image. See Figure 1.

Architecture (A)
The model receives as input the 1-of-K codifications of the source sequence x_1 ... x_n; then the word embedding is computed, obtaining a new representation E(x_1) ... E(x_n). This new sequence is processed by an RNN L, obtaining the vectors L_1 ... L_n. These vectors are processed by another RNN D, obtaining the sequence D_1 ... D_n, which is processed by a conventional neural network to obtain the target vectors, which are normalised using softmax.
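A minimal sketch of Architecture (A), under the same illustrative assumptions as above (sizes and layer widths are ours, not taken from the paper):

import tensorflow as tf

V_src, V_tgt, N = 10000, 10000, 40   # illustrative sizes (assumptions)

inputs = tf.keras.Input(shape=(N,))                            # x_1 ... x_n as indices
e = tf.keras.layers.Embedding(V_src, 256)(inputs)              # E(x_1) ... E(x_n)
l = tf.keras.layers.SimpleRNN(256, return_sequences=True)(e)   # RNN L -> L_1 ... L_n
d = tf.keras.layers.SimpleRNN(256, return_sequences=True)(l)   # RNN D -> D_1 ... D_n
y = tf.keras.layers.Dense(V_tgt, activation="softmax")(d)      # per-timestep softmax

model_a = tf.keras.Model(inputs, y)
model_a.compile(optimizer="adam", loss="categorical_crossentropy")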

Architecture (B)
The main difference is that we use BiRNNs, processing the input sentence forward and backward. The BiRNN is implemented with LSTM (Long Short-Term Memory) units for better handling of long-term dependencies (Hochreiter and Schmidhuber, 1997; Chung et al., 2014). The BiRNN is represented by unit L, but in this case there is one in each direction, generating two vectors Lf_i and Lb_i for each input x_i.

Architecture (C)
In addition to using BiRNNs, each input codification is processed by two different feed-forward neural networks E_f and E_b, generating two sequences of vectors E_f(x_1) ... E_f(x_n) and E_b(x_1) ... E_b(x_n). At each timestep, the pair of vectors is fed to the BiRNN units Lf and Lb.
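A possible sketch of Architecture (C): two independent Embedding layers play the role of E_f and E_b, each feeding one direction of the BiRNN (again, all sizes are illustrative assumptions, not the paper's settings):

import tensorflow as tf

V_src, V_tgt, N = 10000, 10000, 40   # illustrative sizes (assumptions)

inputs = tf.keras.Input(shape=(N,))
e_f = tf.keras.layers.Embedding(V_src, 256)(inputs)  # forward embedding E_f
e_b = tf.keras.layers.Embedding(V_src, 256)(inputs)  # backward embedding E_b

l_f = tf.keras.layers.LSTM(256, return_sequences=True)(e_f)      # Lf, left-to-right
l_b = tf.keras.layers.LSTM(256, return_sequences=True,
                           go_backwards=True)(e_b)               # Lb, right-to-left
# go_backwards returns the sequence in reverse order; flip it back so both
# directions are aligned timestep by timestep before concatenation.
l_b = tf.keras.layers.Lambda(lambda t: tf.reverse(t, axis=[1]))(l_b)

h = tf.keras.layers.Concatenate()([l_f, l_b])
dec = tf.keras.layers.LSTM(256, return_sequences=True)(h)
y = tf.keras.layers.Dense(V_tgt, activation="softmax")(dec)

model_c = tf.keras.Model(inputs, y)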
Architecture (D)
Finally, the last architecture introduces an image. See Figure 3.2. This is the main advantage of using a machine translation system based on neural networks: we can use multimodal inputs, in this case image and text. The model has two inputs: the input text sequence x_1 ... x_n and the image vector, which is the result of an intermediate layer of a pretrained convolutional neural network (Simonyan and Zisserman, 2014).
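One plausible way to realise Architecture (D) is to project the image vector and repeat it at every timestep before the decoder. The paper does not specify the exact fusion point, so this is an assumption, as is the 4096-dimensional VGG-style feature:

import tensorflow as tf

V_src, V_tgt, N = 10000, 10000, 40   # illustrative sizes (assumptions)

txt_in = tf.keras.Input(shape=(N,))
h = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(256, return_sequences=True))(
        tf.keras.layers.Embedding(V_src, 256)(txt_in))

img_in = tf.keras.Input(shape=(4096,))          # e.g. a VGG fc7 vector (assumption)
img_seq = tf.keras.layers.RepeatVector(N)(
        tf.keras.layers.Dense(256)(img_in))     # project, then repeat per timestep

# Concatenate the image representation with the text representation at
# every timestep, and decode the merged sequence as before.
merged = tf.keras.layers.Concatenate()([h, img_seq])
dec = tf.keras.layers.LSTM(256, return_sequences=True)(merged)
y = tf.keras.layers.Dense(V_tgt, activation="softmax")(dec)

model_d = tf.keras.Model([txt_in, img_in], y)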

Data
The system is developed, trained and tested with the Multi30K dataset provided by the WMT organisation. In our experiments, all characters are converted to lower case. The chosen vocabulary consists of all the training source words and all the training target words that appear more than once. This choice is made to minimise the number of unknown tokens in the source sentences and to avoid an excessive model size and training time.
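A small sketch of this vocabulary selection rule (the function name is ours):

from collections import Counter

def build_vocab(src_sentences, tgt_sentences):
    """Keep every lowercased source word, but only target words that
    appear more than once, to limit model size and training time."""
    src_vocab = sorted({w for s in src_sentences for w in s.lower().split()})
    tgt_counts = Counter(w for s in tgt_sentences for w in s.lower().split())
    tgt_vocab = sorted(w for w, c in tgt_counts.items() if c > 1)
    return src_vocab, tgt_vocab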

Model training
Each source sentence is encoded into an N × V matrix M, where each row represents a 1-of-K encoding of a word over a source vocabulary with V words. An unknown word is replaced by a special <U> token, and an <E> token is appended at the end of the sequence. If the sequence length (including <E>) is less than N, the remaining rows are zeros. If the sequence is too long, it is truncated to fit the input size restrictions. During the training phase, target sentences also have a <B> token before the first word. For a given example, the generated prediction is considered to be all the words generated between the <B> and <E> tokens. Unknown tokens are replaced by the word with the second highest probability.
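The encoding just described can be sketched as follows (the helper name and signature are ours):

import numpy as np

def encode(sentence, word2idx, N, V, target=False):
    """Encode a lowercased sentence as an N x V matrix of 1-of-K rows.
    Unknown words map to <U>; <E> closes the sequence; target sentences
    are additionally prefixed with <B>; short sequences are zero-padded
    and long ones truncated to N rows."""
    tokens = sentence.lower().split() + ["<E>"]
    if target:
        tokens = ["<B>"] + tokens
    tokens = tokens[:N]
    m = np.zeros((N, V), dtype=np.float32)
    for i, w in enumerate(tokens):
        m[i, word2idx.get(w, word2idx["<U>"])] = 1.0
    return m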
Training is performed on batches of size 10000 and on mini-batches of size 128. The target metric is the categorical cross-entropy and the optimiser used is Adam (Kingma and Ba, 2014). Results are validated at each epoch on the validation split of the dataset using the BLEU metric (Papineni et al., 2002), along with the model perplexity. BLEU scores during validation are also used as an early stopping criterion: training stops if the maximum score so far is not surpassed in the following 10 epochs. In order to evaluate the performance of our system, the obtained results are compared against a single-embedding system trained under the same conditions and parameters. The monitored BLEU scores can be observed in Figure 3 and the chosen parameter set is summarised in Table 1.
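The early stopping rule can be sketched as a simple check over the validation BLEU history (an illustrative helper, not the authors' code):

def should_stop(val_bleu_history, patience=10):
    """Return True when the best validation BLEU so far has not been
    surpassed in the last `patience` epochs."""
    best = max(range(len(val_bleu_history)), key=val_bleu_history.__getitem__)
    return len(val_bleu_history) - 1 - best >= patience

Calling should_stop after each epoch with the list of per-epoch BLEU scores reproduces the 10-epoch patience criterion described above.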

Results
Table 2 shows the BLEU and METEOR (Lavie and Denkowski, 2009) results for the main architectures described in Section 3 on the official test set of the WMT 2016 Multimodal Translation Task. We see that using BiRNNs improves over RNNs, and that double embeddings improve over single embeddings. Finally, adding the image information does not improve the results. Therefore, the best architecture (C) is the one that participated in the WMT 2016 Multimodal Translation Task. The official results ranked our system 14th out of 16. We prioritised participating with a purely multimodal, extensible architecture; however, we know that our ranking would have improved simply by rescoring our system output with a standard Moses phrase-based system (Koehn et al., 2007). Compared to using a single embedding, the best architecture (C) is capable of solving problems such as unknown words or choosing the appropriate word; Table 3 shows an example of the word fixation problem.

However, our generated translations often contain repeated words or end prematurely, mainly due to the differences in length and alignment between source and target sentences and to the lack of feedback from previous timesteps. In any case, our system is still capable of generating readable translations and of replacing unknown words with similar ones.
Source     a man sleeping in a green room on a couch
Generated  ein mann schläft in einem grünen grünen auf einem sofa
Reference  ein mann schläft in einem grünen raum auf einem sofa

Table 3: An example of the word fixation problem.

Also, our system performance decreases drastically on long sentences, or on sentences where the lengths of the source and target sentences differ too much.

Conclusions
Our system is not competitive with standard phrase-based systems (Koehn et al., 2003) or with the attention-based encoder-decoder neural machine translation system (Bahdanau et al., 2015), as shown by our ranking in the official evaluation (14th out of 16). However, the architecture of our system makes it feasible to introduce image information. With a larger corpus we might obtain competitive results.
All software is freely available on GitHub1. The main contribution of this paper is to show that double embeddings (trained on the forward and backward directions of the input sequence) provide a significant improvement over single embeddings.
As further work, we are considering replacing the word-based encoder with a character-based embedding (Costa-jussà and Fonollosa, 2016), or introducing attention-based decoders (Bahdanau et al., 2015). Due to the system's modularity, it is also possible to reuse intermediate outputs to train additional models. For example, it is possible to extract the BiRNN intermediate outputs and feed them to another decoder model, thus reducing training time.