A Multilingual Neural Machine Translation Model for Biomedical Data

We release a multilingual neural machine translation model that can be used to translate text in the biomedical domain. The model can translate from 5 languages (French, German, Italian, Korean and Spanish) into English. It is trained with large amounts of generic and biomedical data, using domain tags. Our benchmarks show that it performs near state-of-the-art on both news (generic domain) and biomedical test sets, and that it outperforms existing publicly released models. We believe that this release will help large-scale multilingual analysis of the digital content of the COVID-19 crisis and of its effects on society, the economy, and healthcare policies. We also release a test set of biomedical text for Korean-English: it consists of 758 sentences about COVID-19, drawn from official guidelines and recent papers.


Motivation
The 2019-2020 coronavirus pandemic has disrupted lives, societies and economies across the globe. Its classification as a pandemic highlights its global impact, touching people of all languages. Digital content of all types (social media, news articles, videos) has focused for many weeks predominantly on the health crisis and its effects on infected people, their families, healthcare workers, and society and the economy at large. This calls not only for a large set of tools to help during the pandemic (as evidenced by the submissions to this workshop), but also for tools to help digest and analyze this data after it ends. By analyzing the representation of and reaction to the crisis across countries with different guidelines or global trends, it might be possible to inform policies for preventing and responding to future epidemics. Several institutions and groups have already started to take snapshots of the digital content shared during these weeks (Croquet, 2020; Banda et al., 2020).
However, because of its global scale, all this digital content is accessible in a variety of different languages, and most existing NLP tools remain English-centric (Anastasopoulos and Neubig, 2019). In this paper we describe the release of a multilingual neural machine translation model (MNMT) that can be used to translate biomedical text. The model is both multi-domain and multilingual, covering translation from French, German, Spanish, Italian and Korean to English.
Our contributions consist of the release of:
• An MNMT model, and benchmark results on standard test sets;
• A new biomedical Korean-English test set.
This paper is structured as follows: in Section 2 we review previous work upon which we build; Section 3 details the model and data settings, and the released test set; and Section 4 compares our model to other public models and to state-of-the-art results in academic competitions.
The model can be downloaded at https://github.com/naver/covid19-nmt: the repository consists of a model checkpoint compatible with Fairseq, and a script to preprocess the input text.
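A typical way to use such a Fairseq-compatible checkpoint is via the command-line tools; the sketch below is illustrative only (the preprocessing script name, checkpoint file name, and flags are assumptions, not the repository's documented usage, which should be taken from its README):

```shell
# Hypothetical usage sketch: clone the repository, preprocess the input with
# the provided script, then translate with fairseq-interactive.
# File names below are illustrative assumptions.
git clone https://github.com/naver/covid19-nmt
cd covid19-nmt
cat input.fr | ./preprocess.sh | fairseq-interactive data-bin \
    --path checkpoint.pt --source-lang src --target-lang en
```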

Related Work
In order to serve its purpose, our model should be able to process multilingual input sentences and generate tailored translations for COVID-19-related sentences. As far as NMT models are concerned, both multilingual and domain-specific sentences are just sequences of plain tokens, which the model must distinguish internally and handle differently depending on the language or domain. Because MNMT and domain adaptation of NMT models share this commonality, approaches in both fields can be broadly categorized into two groups: 1) data-centric and 2) model-centric (Chu and Wang, 2018). The former focuses on the preparation of the training data, such as handling and selecting from multi-domain (Kobus et al., 2016; Tars and Fishel, 2018) or multilingual parallel corpora (Aharoni et al., 2019; Tan et al., 2019a), and generating synthetic parallel data from monolingual corpora (Sennrich et al., 2015; Edunov et al., 2018).
While the two types of approaches are orthogonal and can be utilized in tandem, our released model is trained using data-centric approaches. One frequently used data-centric method for handling sentences from multiple languages and domains is simply prepending a special token that indicates the target language or domain the sentence should be translated into (Kobus et al., 2016; Aharoni et al., 2019). By feeding this task-specific meta-information via reserved tags, we signal the model to treat the following input tokens accordingly. Recent works show that this method is also applicable to generating diverse translations (Shu et al., 2019) and translations in specific styles (Madaan et al., 2020).
In addition, back-translation of target monolingual or domain-specific sentences is often conducted in order to augment low-resource data (Edunov et al., 2018; Hu et al., 2019). The back-translated data (and existing parallel data) can then be filtered (Xu et al., 2019) or weighted by importance (Wang et al., 2019) using data selection methods. Back-translated sentences can also be tagged to achieve even better results (Caswell et al., 2019).
While myriads of research works on MNMT and domain adaptation exist, the number of publicly available pre-trained NMT models is still low. For example, Fairseq, a popular sequence-to-sequence toolkit maintained by Facebook AI Research, has released ten uni-directional models for translating English, French, German, and Russian sentences (https://github.com/pytorch/fairseq/blob/master/examples/translation/README.md). Owing to its widespread usage, we trained our model using this toolkit.
A large number of public MT models are available thanks to OPUS-MT, 2 created by the Helsinki-NLP group. Utilizing the OPUS corpora (Tiedemann, 2012), more than a thousand MT models have been trained and released, including several multilingual models, which we use for comparison with our model.
To the best of our knowledge, we release the first public MNMT model that is capable of producing tailored translations for the biomedical domain.
The COVID-19 pandemic has shown the need for multilingual access to hygiene and safety guidelines and policies (McCulloch, 2020). As an example of crowd-sourced translation, we point to "The COVID Translate Project", 3 which allowed 75 pages of guidelines for public agents and healthcare workers to be translated from Korean into English in a matter of days. Although our model could assist in furthering such initiatives, we do not recommend relying solely on it for translating such guidelines, where quality is of the utmost importance. However, the huge amount of digital content created in recent months around the pandemic makes professional translation of all that content not only infeasible, but sometimes unnecessary, depending on the objective. For instance, we believe that the release of this model can unlock large-scale translation aimed at analyzing the reaction of the media and society to the crisis.

Model Settings and Training Data
The model uses a variant of the Transformer Big architecture (Vaswani et al., 2017) with a shallower decoder: 16 attention heads, 6 encoder layers, 3 decoder layers, an embedding size of 1024, and a feed-forward dimension of 8192 in the encoder and 4096 in the decoder.
As all language pairs have English as their target language, no special token for target language was used (language detection can be performed internally by the model).
As the model performs many-to-English translation, its encoder should be able to hold most of the complexity. Thus, we increased the capacity of the encoder by doubling the default size of the feed-forward layer.
On the other hand, previous works (Clinchant et al., 2019; Kasai et al., 2020) have shown that it is possible to reduce the number of decoder layers without sacrificing much performance, allowing both faster inference and a smaller network size.
During training, regularization was done with a dropout of 0.1 and label smoothing of 0.1. For optimization, we used Adam (Kingma and Ba, 2014) with warm-up and a maximum learning rate of 0.001. The model was trained for 10 epochs, and the best checkpoint was selected based on perplexity on the validation set.
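The architecture and optimization settings above roughly correspond to the following fairseq-train flags. This is a sketch, not the authors' actual command: the data path, learning-rate scheduler, and warm-up length are assumptions.

```shell
# Hypothetical fairseq-train invocation matching the reported settings
# (Transformer Big variant: 6 encoder / 3 decoder layers, 16 heads,
# 1024-dim embeddings, 8192/4096 feed-forward dims, dropout 0.1,
# label smoothing 0.1, Adam with warm-up, max LR 0.001, 10 epochs).
fairseq-train data-bin \
    --arch transformer_vaswani_wmt_en_de_big \
    --encoder-layers 6 --decoder-layers 3 \
    --encoder-attention-heads 16 --decoder-attention-heads 16 \
    --encoder-embed-dim 1024 --decoder-embed-dim 1024 \
    --encoder-ffn-embed-dim 8192 --decoder-ffn-embed-dim 4096 \
    --dropout 0.1 --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
    --optimizer adam --lr 0.001 --lr-scheduler inverse_sqrt --warmup-updates 4000 \
    --max-epoch 10
```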
As training data, we used standard openly accessible datasets, including biomedical data whenever available, for example the "Corona Crisis Corpora" (TAUS, 2020). Following our past success in domain adaptation (Berard et al., 2019), we used domain tokens (Kobus et al., 2016) to differentiate between domains, allowing multi-domain translation with a single model. We initially experimented with more tags, and with combinations of tags (e.g., medical → patent or medical → political), to allow for more fine-grained control of the resulting translation. However, the results were inconclusive and often under-performed. An exception worth noting was transcribed data such as TED talks and OpenSubtitles, which are not the main targets of this work. Therefore, for simplicity, we used only two tags: medical and back-translation. No tag was used for training data that does not belong to one of these two categories.
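The tagging scheme can be sketched as follows. Only the two domains (medical and back-translation) and the untagged default come from the paper; the exact tag token spellings below are illustrative assumptions, not the released model's actual tokens.

```python
from typing import Optional

# Hypothetical sketch of domain-tag preprocessing: a reserved tag token is
# prepended to each source sentence so the model can condition on the domain.
# Tag spellings are illustrative assumptions.
DOMAIN_TAGS = {"medical": "<medical>", "back-translation": "<bt>"}

def tag_source(sentence: str, domain: Optional[str]) -> str:
    """Prepend the domain tag; generic training data is left untagged."""
    if domain is None:
        return sentence
    return DOMAIN_TAGS[domain] + " " + sentence
```

At inference time, prepending the medical tag requests a biomedical-style translation, while leaving the input untagged yields a generic one.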
In addition to biomedical data, we also used back-translated data, although only for Korean, the language with the smallest amount of training data (13.8M sentences). Like Arivazhagan et al. (2019), we used a temperature parameter of 5 to give Korean a better chance of being sampled. Additionally, the biomedical data was oversampled by a factor of 2. Table 1 details the number of training sentences used for each language and each domain tag.
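Temperature-based sampling upweights low-resource languages: with temperature T, a language whose data fraction is p is sampled with probability proportional to p^(1/T). A minimal sketch (the per-language sentence counts below are illustrative assumptions; only Korean's 13.8M figure comes from the paper):

```python
# Temperature-based sampling over languages: probability proportional to
# (data fraction) ** (1 / T). With T = 5, low-resource languages such as
# Korean are sampled far more often than their raw share of the data.
def sampling_probs(counts, temperature=5.0):
    total = sum(counts.values())
    weights = {lang: (n / total) ** (1.0 / temperature)
               for lang, n in counts.items()}
    z = sum(weights.values())
    return {lang: w / z for lang, w in weights.items()}

# Illustrative counts (assumptions, except Korean's 13.8M from the paper).
counts = {"fr": 100_000_000, "de": 80_000_000, "es": 70_000_000,
          "it": 40_000_000, "ko": 13_800_000}
probs = sampling_probs(counts)
```

With these numbers, Korean's sampling probability rises from under 5% of the data to roughly 15% of the sampled batches.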
As for pre-processing, we cleaned the available data by applying white-space normalization and NFKC normalization. We filtered out noisy sentence pairs based on length (min. 1 token, max. 200 tokens) and automatic language identification with langid.py (Lui and Baldwin, 2012).
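A minimal sketch of these cleaning steps, using only the standard library (the language-identification step with langid.py is omitted here to keep the sketch dependency-free):

```python
import unicodedata
from typing import Optional

# Sketch of the described cleaning: white-space normalization, NFKC
# normalization, and a length filter (min. 1 token, max. 200 tokens).
# Language-ID filtering with langid.py would be a separate step.
def clean(sentence: str) -> Optional[str]:
    sentence = " ".join(sentence.split())              # collapse white-space
    sentence = unicodedata.normalize("NFKC", sentence)
    n_tokens = len(sentence.split())
    if not 1 <= n_tokens <= 200:
        return None                                    # drop the sentence
    return sentence
```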
We trained a lower-cased shared BPE model using SentencePiece (Kudo and Richardson, 2018) on 6M random lines for each language (including English). We filtered out single characters with fewer than 20 occurrences from the vocabulary, resulting in a shared vocabulary of size 76k. To speed up training and inference, we reduced the English vocabulary size by setting a BPE frequency threshold of 20, which gives a target vocabulary of size 38k. To get the benefits of a shared vocabulary (i.e., tied source/target embeddings), we sorted the source Fairseq dictionary to put the 38k English tokens at the beginning, which lets us easily share the embedding matrix between the encoder and the decoder. 4 The BPE segmentation is followed by inline casing (Berard et al., 2019), where each token is lower-cased and directly followed by a special token specifying its case (<T> for title case, <U> for all caps, no token for lower-case). Word-pieces whose original case is undefined (e.g., "MacDonalds") are split again into word-pieces with defined case ("mac" and "donalds").
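The inline-casing step can be sketched as below. The <T> and <U> tokens follow the paper; the handling of mixed-case pieces is simplified here (the paper re-applies BPE to split them into pieces with defined case), so this is an illustrative assumption rather than the exact released pipeline.

```python
# Sketch of inline casing: each word-piece is lower-cased and followed by a
# case token (<T> for title case, <U> for all caps, none for lower-case).
def inline_case(tokens):
    out = []
    for tok in tokens:
        if tok.islower() or not tok.isalpha():
            out.append(tok)                      # lower-case or non-alphabetic
        elif tok.isupper():
            out.extend([tok.lower(), "<U>"])     # all caps
        elif tok.istitle():
            out.extend([tok.lower(), "<T>"])     # title case
        else:
            # Mixed case ("MacDonalds"): the paper re-applies BPE to split it
            # into pieces with defined case; simplified here to lower-casing.
            out.append(tok.lower())
    return out
```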

New Korean-English Test Set
To benchmark the performance on the COVID-19 domain, we built an in-domain test set for Korean-English, as it is the only language pair that is not included in the Corona Crisis Corpora.
The test set contains 758 Korean-English sentence pairs, obtained by having English sentences translated into Korean by four professional Korean translators. Any acronym that appears without its full form in the source sentence is kept as-is in the translation, unless it is very widely used. The English sentences were distributed among the translators along with shared guidelines, to ensure a consistent tone and manner.
We gathered English sentences from two sources: 1) official English guidelines and reports from the Korea Centers for Disease Control and Prevention (KCDC) 5 under the Ministry of Health and Welfare of South Korea (258 sentences); and 2) abstracts of biomedical papers on SARS-CoV-2 and COVID-19 from arXiv, 6 medRxiv 7 and bioRxiv 8 (500 sentences). The sentences were hand-picked, focusing on covering diverse aspects of the pandemic, including safety guidelines, government briefings, clinical tests, and biomedical experimentation.

Benchmarks
We benchmarked the released multilingual model against: 1) reported numbers in the literature, and 2) other publicly released models. For the latter, we used OPUS-MT, a large collection (1000+) of pre-trained models released by the Helsinki-NLP group at the University of Helsinki. Note that these models were trained with much smaller amounts of training data.
We note that the biomedical test sets (Medline) are very small (around 600 lines). We do not report a comparison on Spanish-English newstest2013, as the latest reported numbers are outdated (the best WMT entry achieved 30.4).
Our single model obtains competitive results on "generic" test sets (News and IWSLT), on par with the state of the art. We also obtain strong results on the biomedical test sets. Note that the SOTA models were trained to maximize performance on the very specific Medline domain, for which training data is provided; while we included this data in our tagged biomedical data, we did not fine-tune aggressively on it.

Table 2 shows the BLEU scores for the Korean-English COVID-19 test set. The results greatly outperform existing public Korean-English models, even more so than on the IWSLT test sets (Table 3).

Conclusion
We describe the release of a multilingual translation model that supports translation from five languages (French, German, Italian, Korean and Spanish) into English, in both the generic and biomedical domains. Our aim is to support research studying the international impact that this crisis is causing at the societal, economic, and healthcare levels.

Table notes: * NLE @ WMT19 Robustness Task (Berard et al., 2019); † FAIR @ WMT19 News Task; ‡ Reported results in the WMT19 Biomedical Task (Bawden et al., 2019).