mT5: A massively multilingual pre-trained text-to-text transformer

The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. All of the code and model checkpoints used in this work are publicly available.


Introduction
Current natural language processing (NLP) pipelines often make use of transfer learning, where a model is pre-trained on a data-rich task before being fine-tuned on a downstream task of interest (Ruder et al., 2019). The success of this paradigm is partially thanks to the release of parameter checkpoints for pre-trained models. These checkpoints allow members of the NLP community to quickly attain strong performance on many tasks without needing to perform expensive pre-training themselves. As one example, the pre-trained checkpoints for the "Text-to-Text Transfer Transformer" (T5) model released by Raffel et al. (2019) have been used to achieve state-of-the-art results on many benchmarks (Khashabi et al., 2020; Roberts et al., 2020; Kale, 2020; Izacard and Grave, 2020; Nogueira et al., 2020; Narang et al., 2020, etc.).
Unfortunately, many of these language models were pre-trained solely on English-language text. This significantly limits their use given that roughly 80% of the world population does not speak English (Crystal, 2008). One way the community has addressed this English-centricity has been to release dozens of models that have instead been pre-trained on a single non-English language (Carmo et al., 2020; de Vries et al., 2019; Le et al., 2019; Martin et al., 2019; Delobelle et al., 2020; Malmsten et al., 2020; Nguyen and Nguyen, 2020; Polignano et al., 2019, etc.). A more general solution is to produce multilingual models that have been pre-trained on a mixture of many languages. Popular models of this type are mBERT (Devlin, 2018), mBART (Liu et al., 2020), and XLM-R (Conneau et al., 2019), which are multilingual variants of BERT (Devlin et al., 2018), BART (Lewis et al., 2019a), and RoBERTa (Liu et al., 2019), respectively. In this paper, we continue this tradition by releasing mT5, a multilingual variant of T5. Our goal with mT5 is to produce a massively multilingual model that deviates as little as possible from the recipe used to create T5. As such, mT5 inherits all of the benefits of T5 (described in section 2), such as its general-purpose text-to-text format, its design based on insights from a large-scale empirical study, and its scale. To train mT5, we introduce a multilingual variant of the C4 dataset called mC4. mC4 comprises natural text in 101 languages drawn from the public Common Crawl web scrape. To validate the performance of mT5, we include results on several benchmark datasets, showing state-of-the-art performance in many cases. We release our pre-trained models and code so that the community can leverage our work.

Background on T5 and C4
In this section, we provide a short overview of T5 and the C4 pre-training dataset. Further details are available in Raffel et al. (2019). T5 is a pre-trained language model whose primary distinction is its use of a unified "text-to-text" format for all text-based NLP problems. This approach is natural for generative tasks (such as machine translation or abstractive summarization) where the task format requires the model to generate text conditioned on some input. It is more unusual for classification tasks, where T5 is trained to output the literal text of the label (e.g. "positive" or "negative" for sentiment analysis) instead of a class index. The primary advantage of this approach is that it allows the use of exactly the same training objective (teacher-forced maximum-likelihood) for every task, which in practice means that a single set of hyperparameters can be used for effective fine-tuning on any downstream task. Similar unifying frameworks were proposed by Keskar et al. (2019) and McCann et al. (2018). Given the sequence-to-sequence structure of this task format, T5 uses a basic encoder-decoder Transformer architecture as originally proposed by Vaswani et al. (2017). T5 is pre-trained on a masked language modeling "span-corruption" objective, where consecutive spans of input tokens are replaced with a mask token and the model is trained to reconstruct the masked-out tokens.
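The span-corruption objective above can be illustrated with a small sketch. This is a toy illustration of the input/target construction only (the sentinel naming follows T5's released vocabulary; the real objective samples span positions and lengths randomly, which is omitted here):

```python
def span_corrupt(tokens, spans):
    """Toy T5-style span corruption: each masked span of consecutive
    input tokens is replaced by a unique sentinel in the input, and the
    target reconstructs the masked tokens, span by span. `spans` is a
    list of (start, end) index pairs (end exclusive), assumed sorted and
    non-overlapping; real pre-training samples them randomly."""
    inp, tgt, prev = [], [], 0
    for i, (start, end) in enumerate(spans):
        sentinel = f"<extra_id_{i}>"
        inp.extend(tokens[prev:start])
        inp.append(sentinel)          # span replaced by sentinel in input
        tgt.append(sentinel)          # target: sentinel, then masked tokens
        tgt.extend(tokens[start:end])
        prev = end
    inp.extend(tokens[prev:])
    tgt.append(f"<extra_id_{len(spans)}>")  # final sentinel closes the target
    return inp, tgt

toks = "Thank you for inviting me to your party last week".split()
inp, tgt = span_corrupt(toks, [(1, 3), (6, 7)])
print(" ".join(inp))  # Thank <extra_id_0> inviting me to <extra_id_1> party last week
print(" ".join(tgt))  # <extra_id_0> you for <extra_id_1> your <extra_id_2>
```

The model is trained with teacher-forced maximum likelihood to produce the target sequence given the corrupted input.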
An additional distinguishing factor of T5 is its scale, with pre-trained model sizes available from 60 million to 11 billion parameters. These models were pre-trained on around 1 trillion tokens of data. Unlabeled data comes from the C4 dataset, which is a collection of about 750GB of English-language text sourced from the public Common Crawl web scrape. C4 includes heuristics to extract only natural language (as opposed to boilerplate and other gibberish) in addition to extensive deduplication. The pre-training objective, model architecture, scaling strategy, and many other design choices for T5 were chosen based on a large-scale empirical study described in detail in Raffel et al. (2019).

mC4 and mT5
Our goal in this paper is to create a massively multilingual model that follows T5's recipe as closely as possible. Towards this end, we develop an extended version of the C4 pre-training dataset that covers 101 languages and introduce changes to T5 to better suit this multilinguality.

mC4
The C4 dataset was explicitly designed to be English only: any page that was not given a probability of at least 99% of being English by langdetect 2 was discarded. In contrast, for mC4 we use cld3 3 to identify over 100 languages. Since some of these languages are relatively scarce on the internet, we make use of all of the 71 monthly web scrapes released so far by Common Crawl. This is dramatically more source data than was used for C4, for which the April 2019 web scrape alone was enough to provide plenty of English-language data.
An important heuristic filtering step in C4 was the removal of lines that did not end in an English terminal punctuation mark. As this is inappropriate for many languages, we instead apply a "line length filter" that requires pages to contain at least three lines of text with 200 or more characters. Otherwise, we follow C4's filtering by deduplicating lines across documents and filtering pages containing bad words. 4 Finally, we detect each page's primary language using cld3 and remove pages where the confidence is below 70%.
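The page-level heuristics above can be sketched as a single predicate. This is an assumed simplification for illustration, not the actual mC4 pipeline code; the function name and argument layout are hypothetical:

```python
def keep_page(lines, lang_confidence, min_lines=3, min_chars=200, min_conf=0.7):
    """Sketch of the mC4 page-level filters described above: keep a page
    only if it contains at least three lines of 200+ characters ("line
    length filter") and its detected-language confidence (from cld3) is
    at least 70%."""
    long_lines = sum(1 for line in lines if len(line) >= min_chars)
    return long_lines >= min_lines and lang_confidence >= min_conf
```

Line deduplication across documents and bad-word filtering, inherited from C4, would be applied separately.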
After these filters are applied, we group the remaining pages by language and include in the corpus all languages with 10,000 or more pages. This produces text in 107 "languages" as defined by cld3. However, we note that six of these are just script variants of the same spoken language (e.g. ru is Russian in Cyrillic script and ru-Latn is Russian in Latin script). A histogram of the page counts for each language is shown in fig. 1. Detailed dataset statistics including per-language token counts are shown in table 5 (appendix).

mT5
The model architecture and training procedure that we used for mT5 closely follow those of T5. Specifically, we based mT5 on the "T5.1.1" recipe, 5 which improves upon T5 by using GeGLU nonlinearities (Shazeer, 2020), scaling d_model instead of d_ff in the larger models, and pre-training on unlabeled data only with no dropout. For brevity, we refer to Raffel et al. (2019) for further details on T5. A major factor in pre-training multilingual models is how to sample data from each language. Ultimately, this choice is a zero-sum game: if low-resource languages are sampled too often, the model may overfit; if high-resource languages are not trained on enough, the model will underfit. We therefore take the approach used in (Devlin, 2018; Conneau et al., 2019; Arivazhagan et al., 2019) and boost lower-resource languages by sampling examples according to the probability p(L) ∝ |L|^α, where p(L) is the probability of sampling text from a given language during pre-training and |L| is the number of examples in the language. The hyperparameter α (typically with α < 1) allows us to control how much to "boost" the probability of training on low-resource languages. Prior work has used α = 0.7 (mBERT; Devlin, 2018), α = 0.3 (XLM-R; Conneau et al., 2019), and α = 0.2 (MMNMT; Arivazhagan et al., 2019). We tried all three of these values and found α = 0.3 to give a reasonable compromise between performance on high- and low-resource languages.
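The exponentially smoothed sampling rule p(L) ∝ |L|^α can be computed directly from per-language example counts. A minimal sketch (the function name and the toy counts are illustrative, not from the paper):

```python
def sampling_probs(example_counts, alpha=0.3):
    """Language sampling rates p(L) ∝ |L|^alpha as described above.
    With alpha < 1, low-resource languages are boosted relative to their
    raw share of the corpus; alpha = 1 recovers proportional sampling."""
    weights = {lang: count ** alpha for lang, count in example_counts.items()}
    total = sum(weights.values())
    return {lang: w / total for lang, w in weights.items()}

# Hypothetical counts: a high-resource and a low-resource language.
counts = {"en": 3_000_000, "sw": 10_000}
print(sampling_probs(counts, alpha=0.3))
# en is sampled far less often than its raw ~99.7% share of examples
```

At α = 0.3, the low-resource language's sampling rate rises well above its raw corpus share, at the cost of repeating its data more often during pre-training.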
The fact that our model covers over 100 languages necessitates a larger vocabulary. Following XLM-R (Conneau et al., 2019), we increase the vocabulary size to 250,000 wordpieces. As in T5, we use SentencePiece (Kudo and Richardson, 2018; Kudo, 2018) wordpiece models that are trained with the same language sampling rates used during training. To accommodate languages with large character sets like Chinese, we use a character coverage of 0.99999, but also enable SentencePiece's "byte-fallback" feature to ensure that any string can be uniquely encoded.
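The byte-fallback idea can be illustrated in a few lines. This is a toy sketch of the mechanism, not SentencePiece's actual implementation: pieces in the vocabulary are emitted directly, and anything else falls back to 256 reserved byte tokens, so every string has an encoding:

```python
# Toy wordpiece vocab; "▁" marks a word boundary as in SentencePiece.
VOCAB = {"▁the", "▁multi", "lingual", "▁model"}
# 256 reserved byte pieces, one per possible byte value.
BYTE_TOKENS = {b: f"<0x{b:02X}>" for b in range(256)}

def encode(pieces):
    """Encode a pre-segmented piece sequence, falling back to UTF-8 byte
    tokens for any piece missing from the vocabulary."""
    out = []
    for piece in pieces:
        if piece in VOCAB:
            out.append(piece)
        else:
            # Unknown piece: emit one byte token per UTF-8 byte.
            out.extend(BYTE_TOKENS[b] for b in piece.encode("utf-8"))
    return out

print(encode(["▁multi", "lingual", "日"]))
# ['▁multi', 'lingual', '<0xE6>', '<0x97>', '<0xA5>']
```

With byte fallback, the model never sees an out-of-vocabulary token, even for scripts that are rare or absent in the tokenizer's training data.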

Comparison to related models
To contextualize our new model, we provide a brief comparison with existing massively multilingual pretrained language models. For brevity, we focus on models that support more than a few dozen languages. Table 1 gives a high-level comparison of mT5 to the most similar models.
mBERT (Devlin, 2018) is a multilingual version of BERT (Devlin et al., 2018). Similar to our approach with mT5, mBERT follows the BERT recipe as closely as possible (same architecture, objective, etc.).

Experiments
To validate the performance of mT5, we evaluate our models on six tasks from the xtreme multilingual benchmark (Hu et al., 2020). We cast all tasks into the text-to-text format, i.e. generating the label text (XNLI and PAWS-X), entity tags and labels (WikiAnn NER), or answer (XQuAD, MLQA, and TyDi QA) directly. For NER, if there are multiple entities then they are concatenated in the order they appear, and if there are no entities then the target text is 'None'. We consider variants of these tasks where the model is fine-tuned only on English data ("zero-shot") or on data that has been machine-translated from English into each target language ("translate-train"). For brevity, we refer to Hu et al.
(2020) for further details on these benchmarks. Following the original T5 recipe, we consider five model sizes: Small (≈ 300M parameters), Base (600M), Large (1B), XL (4B), and XXL (13B). The increase in parameter counts compared to the corresponding T5 model variants comes from the larger vocabulary used in mT5. We pre-train our models for 1 million steps on batches of 1024 length-1024 input sequences, corresponding to roughly 1 trillion input tokens total. This is the same amount of pre-training as T5 and about 1/6 as much as XLM-R. Due to time constraints, we report results with mT5-XXL trained for only 750 thousand steps. Final results and further experiments will be updated on our public codebase. 1 We use the same inverse square-root learning rate schedule used by T5 during pre-training, with the learning rate set to 1/√(max(n, k)), where n is the current training iteration and k = 10^4 is the number of warm-up steps. Following the T5.1.1 recipe, we do not apply dropout during pre-training. We use the same self-supervised objective as T5, with 15% of tokens masked and an average noise span length of 3. We ablate some of these experimental details in section 4.2.
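The inverse square-root schedule above is simple to state in code (a minimal sketch; the function name is ours):

```python
import math

def learning_rate(step, warmup=10_000):
    """Inverse square-root schedule described above: the rate is held
    constant at 1/sqrt(warmup) for the first `warmup` steps, then decays
    as 1/sqrt(step)."""
    return 1.0 / math.sqrt(max(step, warmup))

print(learning_rate(1))          # 0.01 during warm-up (1/sqrt(10^4))
print(learning_rate(1_000_000))  # 0.001 at the end of pre-training
```

Note the warm-up here is a constant plateau rather than a linear ramp, matching the 1/√(max(n, k)) formula.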
For fine-tuning, we use a constant learning rate of 0.001 and dropout rate of 0.1 for all tasks. In the zero-shot setting, we use a batch size of 2^20 for XNLI and 2^16 for PAWS-X, NER, XQuAD, MLQA, and TyDi QA. For early stopping, we save checkpoints every 200 steps and choose the checkpoint with the highest performance on the validation set.
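The casting of WikiAnn NER into the text-to-text format described earlier (entities concatenated in the order they appear, 'None' when a sentence has no entities) might look like the following sketch. The exact surface format of each entity (here "TAG: span") is a hypothetical choice for illustration; the paper specifies only the concatenation order and the 'None' target:

```python
def ner_target(entities):
    """Build a text target from (tag, span) pairs, assumed to be listed
    in order of appearance. An empty entity list maps to the literal
    string 'None', as described in the experiments section."""
    if not entities:
        return "None"
    return " ".join(f"{tag}: {span}" for tag, span in entities)

print(ner_target([("PER", "John"), ("LOC", "Paris")]))  # PER: John LOC: Paris
print(ner_target([]))                                   # None
```

The model is then fine-tuned to generate this target string directly, and predictions are parsed back into tagged spans for evaluation.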

Results
Table 2 presents our main results, with per-language breakdowns for each task given in tables 6 to 11 (appendix). Our largest model mT5-XXL reaches state-of-the-art on all of the tasks we consider. Note that unlike our model, InfoXLM (Chi et al., 2020) benefits from parallel training data, while X-STILTs (Phang et al., 2020) leverages labeled data from tasks similar to the target task. Overall, our results highlight the importance of model capacity in cross-lingual representation learning and suggest that scaling up a simple pre-training recipe can be a viable alternative to more complex techniques relying on LM filtering, parallel data, or intermediate tasks.
In the "translate-train" setting, we also match or exceed state-of-the-art on all xtreme classification and QA tasks. For these tasks, we fine-tune on the combination of the labeled English data and machine translations thereof. 6 This allows direct comparison with both Filter (Fang et al., 2020) as well as the XLM-R baseline of Fang et al. (2020). Note however that this setup differs from the xtreme "translate-train" (Hu et al., 2020), which excludes the English data.

Massively multilingual models have been observed to underperform on a given language when compared to a similarly-sized "dedicated" model trained specifically for that language (Arivazhagan et al., 2019). To quantify this effect, we compare the performance of mT5 and T5 when fine-tuned on the SQuAD reading comprehension benchmark (Rajpurkar et al., 2016). The results are shown in table 3, with results for T5 reproduced from Raffel et al. (2019). While the Small and Base mT5 models fall short of their English T5 counterparts, we find that the larger models close the gap. This suggests there may be a turning point past which the model has enough capacity to effectively learn 101 languages without significant interference effects.

Ablation
We run six ablations, modifying various settings, using our Large model as a baseline: (i) increase dropout to 0.1 in hopes of mitigating overfitting to low-resource languages, (ii) decrease sequence length to 512 as was used in T5, (iii) increase the average noise span length in the pre-training objective to 10 since we observe fewer characters per token than T5, (iv) adjust language sampling exponent α to {0.2, 0.7} as used in MMNMT (Arivazhagan et al., 2019) and mBERT (Devlin, 2018), respectively, (v) turn off "line length filter" in the mC4 data pipeline, and (vi) supplement mC4 with Wikipedia data 7 from 103 languages.
The effect of these ablations on XNLI zero-shot accuracy is shown in table 4. In each case, the average XNLI score is lower than the mT5-Large baseline, justifying our chosen settings.

Conclusion
In this paper, we introduced mT5 and mC4: massively multilingual variants of the T5 model and C4 dataset. We demonstrated that the T5 recipe is straightforwardly applicable to the multilingual setting, and achieved strong performance on a diverse set of benchmarks. We release all of the code and pre-trained datasets used in this paper to facilitate future work on multilingual language understanding. 8

Table 5: Statistics of the mC4 corpus, totaling 6.6B pages and 6.3T tokens. The "mT5" column indicates the percentage of mT5 training data coming from a given language, using the default exponential smoothing value of α=0.3. We list 107 "languages" as detected by cld3, but note six of these (marked "Latin") are just Romanized variants of existing languages.