Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation

We present an easy and efficient method to extend existing sentence embedding models to new languages. This allows us to create multilingual versions of previously monolingual models. The training is based on the idea that a translated sentence should be mapped to the same location in the vector space as the original sentence. We use the original (monolingual) model to generate sentence embeddings for the source language and then train a new system on translated sentences to mimic the original model. Compared to other methods for training multilingual sentence embeddings, this approach has several advantages: it is easy to extend existing models to new languages with relatively few samples, it is easier to ensure desired properties of the vector space, and the hardware requirements for training are lower. We demonstrate the effectiveness of our approach for 10 languages from various language families. Code to extend sentence embedding models to more than 400 languages is publicly available.


Introduction
Mapping sentences or short text paragraphs to a dense vector space, such that similar sentences are close, has wide applications in NLP. It can be used for information retrieval, clustering, automatic essay scoring, and semantic textual similarity.
However, most existing sentence embedding models are monolingual, usually trained only for English, as suitable training data for other languages is scarce. For multi- and cross-lingual scenarios, only a few sentence embedding models exist.
In this publication, we present a new method that allows us to extend existing sentence embedding models to new languages. We require a teacher model M for a source language s and a set of parallel (translated) sentences ((s₁, t₁), ..., (sₙ, tₙ)) with tᵢ the translation of sᵢ. Note that the tᵢ can be in different languages. We train a new student model M̂ such that M̂(sᵢ) ≈ M(sᵢ) and M̂(tᵢ) ≈ M(sᵢ) using mean-squared loss. We call this approach multilingual knowledge distillation learning, as the student M̂ distills the knowledge of the teacher M in a multilingual setup. We demonstrate that this type of training works for various language combinations as well as for multilingual setups.
The student model M̂ learns a multilingual sentence embedding space with two important properties: 1) vector spaces are aligned across languages, i.e., identical sentences in different languages are mapped to the same point; 2) vector space properties of the original source language from the teacher model M are adopted and transferred to other languages.
The presented approach has various advantages compared to other training approaches for multilingual sentence embeddings. LASER (Artetxe and Schwenk, 2018) trains an encoder-decoder LSTM model using a translation task; the output of the encoder is used as the sentence embedding. While LASER works well for identifying exact translations in different languages, it works less well for assessing the similarity of sentences that are not exact translations. When using the training method of LASER, we are not able to influence the properties of the vector space; for example, we cannot design a vector space to work well for a specific clustering task. With our approach, we can first create a vector space suited for clustering in some high-resource language, and then transfer it to other languages.
Multilingual Universal Sentence Encoder (mUSE) (Chidambaram et al., 2018; Yang et al., 2019) was trained in a multi-task setup on SNLI (Bowman et al., 2015) and on over a billion question-answer pairs from popular online forums and QA websites. In order to align the vector spaces cross-lingually, mUSE used a translation ranking task: given a translation pair (sᵢ, tᵢ) and various alternative (incorrect) translations, identify the correct translation. First, multi-task learning can be difficult, as it can suffer from catastrophic forgetting, and balancing multiple tasks is not straightforward. Further, running the translation ranking task is complex and results in a huge computational overhead. Selecting random alternative translations usually leads to mediocre results. Instead, hard negatives (Guo et al., 2018) are required, i.e., alternative incorrect translations that have a high similarity to the correct translation. Getting these hard negative samples is non-trivial: mUSE first trained the network with random negative samples, then used this preliminary sentence encoder to identify for each translation pair five hard negative examples (incorrect, but similar translations). It then re-trained the network. Our proposed method does not require balancing multi-task learning, nor does it require hard negative samples, making training simpler and faster.

Figure 1: Given parallel data (e.g. English and German), train the student model such that the produced vectors for the English and German sentences are close to the teacher's English sentence vector.
In this publication, we use Sentence-BERT (SBERT) (Reimers and Gurevych, 2019), which achieves state-of-the-art performance on various sentence embedding tasks. SBERT is based on transformer models like BERT (Devlin et al., 2018) and applies mean pooling on the output to derive a fixed-sized sentence embedding. In our experiments we use XLM-RoBERTa (XLM-R) (Conneau et al., 2019), a transformer network pre-trained on 100 languages, as the student model. Note that the described approach is not limited to transformer models and should also work with other network architectures.

Training
We require a teacher model M that maps sentences in one or more source languages s to a dense vector space. Further, we need parallel (translated) sentences ((s₁, t₁), ..., (sₙ, tₙ)) with sᵢ a sentence in one of the source languages and tᵢ a sentence in one of the target languages.
We train a student model M̂ such that M̂(sᵢ) ≈ M(sᵢ) and M̂(tᵢ) ≈ M(sᵢ). For a given mini-batch B, we minimize the mean-squared loss:

    1/|B| Σ_{j∈B} [ ||M(sⱼ) − M̂(sⱼ)||² + ||M(sⱼ) − M̂(tⱼ)||² ]

M̂ could have the structure and the weights of M, or it can be a different network architecture with completely different weights. This training procedure is illustrated in Figure 1. We denote trained models with M̂ ← M, as the student model M̂ learns the representation of the teacher model M.
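The mini-batch loss described above can be sketched in numpy as follows. This is a toy illustration with random embeddings, not the actual SBERT/XLM-R training code; the function name and shapes are our own choices:

```python
import numpy as np

def distillation_loss(teacher_src, student_src, student_tgt):
    """Mean-squared distillation loss for one mini-batch B:

        1/|B| * sum_j ( ||M(s_j) - M^(s_j)||^2 + ||M(s_j) - M^(t_j)||^2 )

    All arrays have shape (batch_size, embedding_dim).
    """
    src_term = np.sum((teacher_src - student_src) ** 2, axis=1)
    tgt_term = np.sum((teacher_src - student_tgt) ** 2, axis=1)
    return float(np.mean(src_term + tgt_term))

# Toy mini-batch of 4 "sentences" with 8-dimensional embeddings
rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, 8))
student_src = teacher + 0.1   # student slightly off on the source sentences
student_tgt = teacher + 0.2   # and somewhat more off on the translations
loss = distillation_loss(teacher, student_src, student_tgt)
```

A student that reproduces the teacher exactly yields zero loss; in actual training the gradient of this loss is backpropagated through the student network only.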
In our experiments, we mainly use an English SBERT model as the teacher model M and XLM-RoBERTa (XLM-R) as the student model M̂. The English BERT models have a wordpiece vocabulary of size 30k, consisting mainly of English tokens.
Using the English SBERT model as initialization for M̂ would be suboptimal, as most words in other Latin-based languages would be broken down into short character sequences, and words in non-Latin alphabets would be mapped to the UNK token. In contrast, XLM-R uses SentencePiece, which avoids language-specific pre-processing. Further, it uses a vocabulary with 250k entries from 100 different languages. This makes XLM-R much more suitable for the initialization of the multilingual student model.

Training Data
In this section, we evaluate the importance of training data for making the sentence embedding model multilingual. The OPUS website (Tiedemann, 2012) provides parallel data for hundreds of language pairs. In our experiments, we use the following datasets:
• GlobalVoices: A parallel corpus of news stories from the website Global Voices.
• TED2020: We crawled the translated subtitles of about 4,000 TED talks, available in over 100 languages. The resource is available in our repository.
• NewsCommentary: Political and economic commentary crawled from the web site Project Syndicate, provided by WMT.
• WikiMatrix: Mined parallel sentences from Wikipedia in different languages. We only used pairs with scores above 1.05, as pairs below this threshold were often of bad quality.
• Tatoeba: Tatoeba is a large database of example sentences and translations to support language learning.
Getting parallel sentence data can be challenging for some low-resource language pairs. Hence, we also experiment with bilingual dictionaries:
• MUSE: MUSE provides 110 large-scale ground-truth bilingual dictionaries created by an internal translation tool (Conneau et al., 2017b).
• Wikititles: We use the Wikipedia database dumps to extract article titles from cross-language links between Wikipedia articles. For example, the page "United States" links to the German page "Vereinigte Staaten". This gives a dictionary covering a wide range of topics.
The dataset sizes for English-German (EN-DE) and English-Arabic (EN-AR) are depicted in Table 4. For training, we balance the dataset sizes by drawing roughly the same number of samples from each dataset for a mini-batch. Data from smaller datasets is repeated.
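The balancing scheme described above can be sketched as a round-robin sampler that cycles over each dataset independently, repeating smaller ones. This is a hypothetical simplification of the batching logic, with made-up toy sentence pairs; the actual implementation may differ:

```python
import itertools

def balanced_batches(datasets, batch_size):
    """Yield mini-batches drawing roughly equal numbers of pairs per dataset.

    datasets: list of lists of (source, target) sentence pairs.
    Smaller datasets are repeated via itertools.cycle.
    """
    iters = [itertools.cycle(d) for d in datasets]
    per_set = max(1, batch_size // len(datasets))
    while True:
        yield [next(it) for it in iters for _ in range(per_set)]

# Two toy "datasets" of very different sizes
sets = [[("hello", "hallo")],
        [("cat", "Katze"), ("dog", "Hund")]]
first = next(balanced_batches(sets, batch_size=4))
# The single-pair dataset contributes as many samples as the larger one,
# so its pair appears twice in the batch.
```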
We trained XLM-R as our student model and used SBERT fine-tuned on English NLI and STS data as our teacher model. We trained for a maximum of 20 epochs with batch size 64, 10,000 warm-up steps, and a learning rate of 2e-5. As a development set, we measured the MSE loss on held-out parallel sentences.
In previous work (Reimers and Gurevych, 2017, 2018), we showed that the random seed can have a large impact on the performance of trained models, especially for small datasets. In the following experiments, we have quite large datasets of up to several million parallel sentences, and we observed rather minor differences (∼0.3 score points) between random seeds.

Experiments
In this section, we conduct experiments on two tasks: multi- and cross-lingual semantic textual similarity (STS) and bitext retrieval. STS assigns a score to a pair of sentences, while bitext retrieval identifies parallel (translated) sentences in two large monolingual corpora.
Note that evaluating the capability of different strategies to align vector spaces across languages is non-trivial. The performance on cross-lingual tasks depends on the ability to map sentences across languages to one vector space (usually the vector space for English) as well as on the properties of this source vector space. Differences in performance can then be due to a better or worse alignment between the languages, or due to different properties of the (source) vector space.
We evaluate the following systems: SBERT-nli-stsb: The output of the BERT-base model is combined with mean pooling to create a fixed-sized sentence representation (Reimers and Gurevych, 2019). It was fine-tuned on the English AllNLI dataset (SNLI (Bowman et al., 2015) and Multi-NLI) and on the English training set of the STS benchmark (Cer et al., 2017) using a siamese network structure.
mBERT / XLM-R mean: Mean pooling of the outputs of the pre-trained multilingual BERT (mBERT) and XLM-R models. These models are pre-trained on multilingual data and have a multilingual vocabulary. However, no parallel data was used.
mBERT-/ XLM-R-nli-stsb: We fine-tuned XLM-R and mBERT on the (English) AllNLI and the (English) training set of the STS benchmark.
LASER: LASER (Artetxe and Schwenk, 2018) uses max-pooling over the output of a stacked LSTM encoder. The encoder was trained in an encoder-decoder setup (machine translation setup) on parallel corpora over 93 languages. mUSE: Multilingual Universal Sentence Encoder (Chidambaram et al., 2018) uses a dual-encoder transformer architecture and was trained on mined question-answer pairs, SNLI data, translated SNLI data, and parallel corpora over 16 languages.

Multilingual Semantic Textual Similarity
The goal of semantic textual similarity (STS) is to assign a score to a pair of sentences indicating their semantic similarity. For example, a score of 0 indicates that the sentences are not related, while 5 indicates that they are semantically equivalent.
The multilingual STS 2017 dataset (Cer et al., 2017) contains annotated pairs for EN-EN, AR-AR, ES-ES, EN-AR, EN-ES, and EN-TR. We extend this dataset by translating one sentence of each pair in the EN-EN dataset to German. Further, we use Google Translate to create the EN-FR, EN-IT, and EN-NL datasets. Samples of these machine-translated versions were checked by humans fluent in the respective language.
We generate sentence embeddings with the described systems and compute their similarity using cosine similarity. We then compute Spearman's rank correlation ρ between the computed scores and the gold scores.
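This evaluation step can be sketched as follows. The embeddings here are toy vectors, and Spearman's ρ is computed directly from ranks (ignoring ties, which suffices for this illustration); the function names are our own:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def spearman_rho(x, y):
    """Spearman's rank correlation, assuming no ties: Pearson correlation
    of the rank vectors."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return float(np.corrcoef(rx, ry)[0, 1])

def sts_eval(embedding_pairs, gold_scores):
    """Correlation between cosine similarities and gold STS scores."""
    sims = [cosine(a, b) for a, b in embedding_pairs]
    return spearman_rho(sims, gold_scores)

# Toy embeddings whose similarities perfectly rank-match the gold scores
pairs = [(np.array([1.0, 0.0]), np.array([1.0, 0.0])),  # identical
         (np.array([1.0, 0.0]), np.array([1.0, 1.0])),  # somewhat similar
         (np.array([1.0, 0.0]), np.array([0.0, 1.0]))]  # orthogonal
rho = sts_eval(pairs, [5.0, 3.0, 0.0])
```

Because Spearman's ρ only compares rankings, a system is rewarded for ordering pairs correctly rather than for matching the gold scores' absolute scale.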
We trained a single model for 10 languages on all the available training datasets. Table 1 shows the performance of this model on the extended STS 2017 dataset for the same-language setup, while Table 2 shows the results for the cross-lingual setup.
SBERT-nli-stsb works surprisingly well for the ES-ES data. For the AR-AR data, we see a strong performance drop. This is likely because Arabic uses a non-Latin alphabet, which is mapped by BERT to out-of-vocabulary tokens. The English SBERT-nli-stsb does not work well for any of the cross-lingual experiments (Table 2).
Using mBERT / XLM-R out-of-the-box with mean pooling yields rather poor performance. In the same-language setup (Table 1), it achieves an average correlation of only 54.0 and 42.7. For the cross-lingual setup (Table 2), the performance drops to 27.2 and 17.8. Mean pooling of out-of-the-box BERT embeddings yields vector spaces that are unsuitable for comparison with cosine similarity.
Training mBERT / XLM-R on English NLI and STS data significantly improves the performance for the same-language setup and achieves a performance on par with LASER, which was trained on cross-lingual data. However, for the cross-lingual setup (Table 2), we see a strong performance drop and results more than 10 points worse than LASER. These models can create meaningful vector spaces for sentences in the same language. However, these vector spaces are not well aligned across languages, as we see in Table 2.
Using our multilingual knowledge distillation approach, we observe a slight performance drop between SBERT-nli-stsb and XLM-R ← SBERT-nli-stsb for the EN-EN task. However, for the ES-ES and AR-AR tasks, we observe a significant improvement, and the model achieves a performance similar to that on the EN-EN dataset. For cross-lingual data (Table 2), we observe significantly improved performance compared to our baselines. Further, we observe a significant improvement of about 10 points in comparison to LASER. We omitted the Multilingual Universal Sentence Encoder (mUSE) in this experiment, as mUSE was trained on the underlying data of STS 2017.
In our experiments, XLM-R is slightly ahead of mBERT and DistilmBERT. mBERT and DistilmBERT use different language-specific tokenization tools, making those models more difficult to use on raw text. In contrast, XLM-R uses a SentencePiece model that can be applied directly to raw text data for all languages. Hence, in the following experiments we only report results for XLM-R.

BUCC: bitext retrieval
Given two corpora in different languages, the task is to identify sentence pairs that are translations of each other. A straightforward approach is to compute the cosine similarity of the respective sentence embeddings and to use nearest-neighbor retrieval with a threshold to find translation pairs. However, it was shown that this approach has certain issues (Guo et al., 2018).
For our experiments, we use the BUCC bitext retrieval code from LASER (https://github.com/facebookresearch/LASER/). It implements the scoring function from Artetxe and Schwenk (2019):

    score(x, y) = margin(cos(x, y), Σ_{z∈NNₖ(x)} cos(x, z)/(2k) + Σ_{z∈NNₖ(y)} cos(y, z)/(2k))

with x, y the two sentence embeddings and NNₖ(x) denoting the k nearest neighbors of x in the other language, which are retrieved using faiss. As margin function, we use margin(a, b) = a/b.
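The margin-based scoring can be sketched with a brute-force nearest-neighbor search in numpy. LASER uses faiss for the kNN step; this O(n²) toy version, with function names of our own choosing, is for illustration only:

```python
import numpy as np

def cos_matrix(X, Y):
    """Pairwise cosine similarities between rows of X and rows of Y."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    return Xn @ Yn.T

def margin_scores(X, Y, k=4):
    """Ratio margin of Artetxe and Schwenk (2019):
    margin(a, b) = a / b, where b is the average cosine similarity of each
    side to its k nearest neighbors in the other language, divided by 2:

        b = sum_{z in NN_k(x)} cos(x, z)/(2k) + sum_{z in NN_k(y)} cos(y, z)/(2k)
    """
    sims = cos_matrix(X, Y)                   # shape (|X|, |Y|)
    k = min(k, sims.shape[0], sims.shape[1])
    # average similarity of each x (resp. y) to its k nearest neighbors
    nn_x = np.sort(sims, axis=1)[:, -k:].mean(axis=1)
    nn_y = np.sort(sims, axis=0)[-k:, :].mean(axis=0)
    denom = nn_x[:, None] / 2 + nn_y[None, :] / 2
    return sims / denom

# Two toy "languages" where row i of X translates row i of Y
X = np.array([[1.0, 0.0], [0.0, 1.0]])
Y = np.array([[0.9, 0.1], [0.1, 0.9]])
S = margin_scores(X, Y, k=2)
```

The margin normalizes each raw cosine similarity by how similar the two sentences are to their neighborhoods, which penalizes "hub" sentences that are close to everything.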
We use the dataset from the BUCC mining task (Zweigenbaum et al., 2017, 2018), with the goal of extracting parallel sentences between an English corpus and four other languages: German, French, Russian, and Chinese. The corpora consist of 150K to 1.2M sentences for each language, with about 2-3% of the sentences being parallel. The data is split into training and test sets. The training set is used to find a threshold for the score function. Pairs above the threshold are returned as parallel sentences. Performance is measured using the F1 score.
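Selecting the threshold on the training split can be sketched as a simple sweep over candidate score values. This is a minimal illustration with made-up scores, not the actual LASER tooling:

```python
def best_threshold(scores, labels):
    """Pick the score threshold that maximizes F1.

    scores: candidate-pair scores; labels: True if the pair is parallel.
    Pairs with score >= threshold are predicted as parallel.
    """
    best_f1, best_t = 0.0, None
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and not y)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y)
        if tp == 0:
            continue
        prec, rec = tp / (tp + fp), tp / (tp + fn)
        f1 = 2 * prec * rec / (prec + rec)
        if f1 > best_f1:
            best_f1, best_t = f1, t
    return best_t, best_f1

# Toy data: the two parallel pairs score high, the non-parallel pairs low
t, f1 = best_threshold([0.9, 0.8, 0.3, 0.2], [True, True, False, False])
```

The threshold found on the training split is then applied unchanged to the test split.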
Results are shown in Table 3 (F1 score on the BUCC bitext mining task). Using mean pooling directly on mBERT / XLM-R produces low scores; XLM-R mean achieves an F1 of only 11.6.
While training on English NLI and STS data improves the performance for XLM-R (XLM-R-nli-stsb), it reduces the performance for mBERT. It is unclear why mBERT mean and XLM-R mean produce vastly different scores, and why training on NLI data improves the cross-lingual performance for XLM-R while reducing it for mBERT. In conclusion, we see that mBERT / XLM-R do not have well-aligned vector spaces, and training only on English data is not sufficient for cross-lingual tasks. Using our multilingual knowledge distillation method (XLM-R ← SBERT-nli-stsb), we were able to significantly improve the performance compared to the mBERT / XLM-R models trained only on English data. However, mUSE outperforms our models, and LASER significantly outperforms the mUSE models.
The imitated SBERT-nli-stsb model creates a vector space such that semantically similar sentences are close. However, sentences with similar meanings are not necessarily translations of each other. For example, the BUCC data contains pairs that are semantically similar but not labeled as parallel text: our model assigns such a pair a high similarity score, but the pair is not a translation, as some details (such as exact dates and location) differ.
These results stress the point that there is no single sentence vector space that is universally suitable for every application. LASER was trained on translation data; hence, it works well to identify exact translations. However, it performs less well on the STS task, where it has to score sentence pairs that are only similar to some degree. In contrast, SBERT-nli-stsb works well to judge the semantic similarity of sentences, but it has difficulty distinguishing between translations and non-translation pairs with high similarity. In general, it is important to use a sentence embedding method with the right properties for the desired downstream task.
We noticed that several positive pairs are missing from the BUCC dataset. For each of SBERT, mUSE, and LASER, we analyzed 20 false positive DE-EN pairs, i.e., pairs with high similarities according to the embedding method but which are not translations according to the dataset. For 57 out of 60 pairs, we would judge them as valid, high-quality translations. This issue comes from the way BUCC was constructed: it consists of a parallel part, drawn from the News Commentary dataset, and sentences drawn from Wikipedia, which are judged as non-parallel. However, it is not ensured that the sentences from Wikipedia are in fact non-parallel. The systems successfully returned parallel pairs from the Wikipedia part of the dataset. Results based on the BUCC dataset should therefore be judged with care. It is unclear how many parallel sentences are in the Wikipedia part of the dataset and how this affects the scores.

Evaluation of Training Datasets
To evaluate the suitability of the different training sets, we trained bilingual XLM-R models for EN-DE and EN-AR on the described training datasets. English and German are fairly similar languages and have a large overlap in their alphabets, while English and Arabic are dissimilar languages with distinct alphabets. We evaluate the performance on the STS 2017 dataset.
The results for training on the full datasets are depicted in Table 4. In Table 5, we trained the models only on the first k sentences of the TED2020 dataset. First, we observe that the bilingual models are slightly better than the model trained for 10 languages (section 4.1): a 2.2-point improvement for EN-DE and a 1.2-point improvement for EN-AR. Conneau et al. (2019) call this the curse of multilinguality: adding more languages to a model can degrade the performance, as the capacity of the model remains the same.

For the similar language pair EN-DE, we observe only minor differences between the training datasets. It appears that the domain of the training data (news, subtitles, parliamentary debates, magazines) is not that important. As shown in Table 5, only little training data is necessary for similar languages. With only 1,000 parallel sentences, we already achieve a score of 71.8. With 25,000 sentences, we achieve a performance nearly on par with the full German training set of 25 million parallel sentences.
For the dissimilar languages English and Arabic, the results are less conclusive. Table 4 shows that more data does not necessarily lead to better results. With the Tatoeba dataset (only 27,000 parallel sentences), we achieve a score of 76.7, while with the UNPC dataset (over 8 million sentences), we achieve only a score of 66.1. The domain and complexity of the parallel sentences are of higher importance for dissimilar languages. The results on the reduced TED2020 dataset (Table 5) show that the score improves more slowly for EN-AR than for EN-DE as more data is added.
Our experiments with bilingual dictionaries show that a significant improvement over an English-only baseline can be achieved: about 94% of the full-dataset model's performance for EN-DE, and about 87% for EN-AR.
We conclude that for similar languages, like English and German, the training data is of minor importance. Small datasets, or even bilingual dictionaries alone, are sufficient to achieve quite high performance. For dissimilar languages, like English and Arabic, the type of training data is of higher importance, and more data is necessary to achieve good results.

Related Work
Sentence embeddings are a well-studied area with dozens of proposed methods. Skip-Thought (Kiros et al., 2015) trains an encoder-decoder architecture to predict the surrounding sentences. InferSent (Conneau et al., 2017a) uses labeled data from the Stanford Natural Language Inference dataset (Bowman et al., 2015) and the Multi-Genre NLI dataset to train a siamese BiLSTM network with max-pooling over the output. Conneau et al. showed that InferSent consistently outperforms unsupervised methods like Skip-Thought. Universal Sentence Encoder trains a transformer network and augments unsupervised learning with training on SNLI. Hill et al. (2016) showed that the task on which sentence embeddings are trained significantly impacts their quality. Previous work (Conneau et al., 2017a) found that the SNLI datasets are suitable for training sentence embeddings. Later work presented a method to train on conversations from Reddit using siamese DAN and siamese transformer networks, which yielded good results on the STS benchmark dataset. These methods have in common that they were only trained on English.

Multilingual representations have attracted significant attention in recent times. Most of this work focuses on cross-lingual word embeddings (Ruder, 2017). A common approach is to train word embeddings for each language separately and to learn a linear transformation that maps them to a shared space based on a bilingual dictionary. This mapping can also be learned without parallel data (Conneau et al., 2017b; Lample et al., 2017). Average word embeddings can further be improved by using a concatenation of different power means (Rücklé et al., 2019). A straightforward approach for creating cross-lingual sentence embeddings is to use a bag-of-words representation of cross-lingual word embeddings. However, Conneau et al. (2018) showed that this approach works poorly in practical cross-lingual transfer settings.
LASER (Artetxe and Schwenk, 2018) uses a sequence-to-sequence encoder-decoder architecture (Sutskever et al., 2014) based on LSTM networks. It trains on parallel corpora akin to multilingual neural machine translation (Johnson et al., 2017). To create a fixed-sized sentence representation, they apply max-pooling over the output of the encoder. LASER was trained for 93 languages on 16 NVIDIA V100 GPUs for about 5 days. In contrast, our models are trained on a single V100 GPU: the bilingual models for about 4-8 hours, the multilingual model for about 2 days.
Multilingual Universal Sentence Encoder (mUSE) 10 (Chidambaram et al., 2018; Yang et al., 2019) is based on a dual-encoder architecture and uses either a CNN network or a transformer network. It was trained in a multi-task setup on SNLI (Bowman et al., 2015) and over one billion crawled question-answer pairs from various communities. To align the vector spaces for different languages, they applied a translation ranking task: given a sentence in the source language and a set of sentences in the target languages, identify the correct translation pair. To work well, hard negative examples (similar, but incorrect translations) must be included in the ranking task. mUSE was trained for 16 languages with 30 million steps.
In this publication, we extended Sentence-BERT (SBERT) (Reimers and Gurevych, 2019). SBERT is based on transformer models like BERT (Devlin et al., 2018) and fine-tunes those using a siamese network structure to create a sentence vector space with desired properties. By using the pre-trained weights from BERT, suitable sentence embedding methods can be trained efficiently. Multilingual BERT (mBERT) was trained on 104 languages using Wikipedia, while XLM-R (Conneau et al., 2019) was trained on 100 languages using CommonCrawl. mBERT and XLM-R were not trained on any parallel data; hence, their vector spaces are not aligned: a sentence in different languages will be mapped to different points in the vector space when these approaches are used out of the box.

10 https://tfhub.dev/google/universal-sentence-encoder

Conclusion
In this publication, we presented a method to make a monolingual sentence embedding method multilingual, with aligned vector spaces between the languages. This was achieved by using multilingual knowledge distillation: given parallel data (sᵢ, tᵢ) and a teacher model M, we train a student model M̂ such that M̂(sᵢ) ≈ M(sᵢ) and M̂(tᵢ) ≈ M(sᵢ). We demonstrated that this approach successfully transfers properties from the source language vector space (in our case English) to various target languages. Models can be extended to multiple languages in the same training process. The approach can also be applied to multilingual teacher models M to extend those to further languages.
This stepwise training approach has the advantage that an embedding model with desired properties, for example for clustering, can first be created for a high-resource language. Then, in an independent step, it can be extended to support further languages. This decoupling significantly simplifies the training procedure compared to previous approaches.