A Neural Grammatical Error Correction System Built On Better Pre-training and Sequential Transfer Learning

Grammatical error correction can be viewed as a low-resource sequence-to-sequence task, because publicly available parallel corpora are limited. To tackle this challenge, we first generate erroneous versions of large unannotated corpora using a realistic noising function. The resulting parallel corpora are subsequently used to pre-train Transformer models. Then, by sequentially applying transfer learning, we adapt these models to the domain and style of the test set. Combined with a context-aware neural spellchecker, our system achieves competitive results in both the Restricted and Low Resource Tracks of the ACL 2019 BEA Shared Task. We release all of our code and materials for reproducibility.


Introduction
Grammatical error correction (GEC) is the task of correcting various grammatical errors in text, as illustrated by the following example: [Travel → Travelling] by bus is [exspensive → expensive], [bored → boring] and annoying.
In GEC, unlike NMT between major languages, there are not enough publicly available parallel corpora (hundreds of thousands of sentence pairs for GEC versus tens of millions for NMT). This motivates the use of pre-training and transfer learning, which has been shown to be highly effective in many natural language processing (NLP) scenarios without enough annotated data, notably in low-resource machine translation (MT) (Lample et al., 2018b; Ruder, 2019). As a result, recent GEC systems also include pre-training on various auxiliary tasks, such as language modeling (LM) (Junczys-Dowmunt et al., 2018), text revision (Lichtarge et al., 2018), and denoising (Zhao et al., 2019).
In this paper, we introduce a neural GEC system that combines the power of pre-training and transfer learning. Our contributions are summarized as follows:
• We pre-train our model for the denoising task using a novel noising function, which gives us a parallel corpus that includes realistic grammatical errors;
• We leverage the idea of sequential transfer learning (Ruder, 2019), thereby effectively adapting our pre-trained model to the domain as well as the writing and annotation styles suitable for our final task;
• We introduce a context-aware neural spellchecker, which improves upon an off-the-shelf spellchecker by incorporating context into spellchecking via a pre-trained neural language model (LM).

Transformers
Transformers (Vaswani et al., 2017) are powerful deep seq2seq architectures that rely heavily on the attention mechanism (Bahdanau et al., 2015; Luong et al., 2015). Both the encoder and the decoder of a Transformer are stacks of Transformer blocks, each of which consists of a multi-head self-attention layer followed by a position-wise feed-forward layer, along with residual connections (He et al., 2016) and layer normalization (Ba et al., 2016). Each decoder block also attends (Luong et al., 2015) to the encoder outputs, in between its self-attention and feed-forward layers. Each input token embedding in a Transformer is combined with a positional embedding that encodes where the token appeared in the input sequence.
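To make this block structure concrete, below is a minimal sketch of a single Transformer encoder block in PyTorch. It is illustrative only: the dimensions are common defaults rather than the exact configurations used in our experiments, and it assumes a PyTorch release that provides nn.MultiheadAttention.

```python
import torch.nn as nn

class TransformerEncoderBlock(nn.Module):
    """One encoder block: multi-head self-attention and a position-wise
    feed-forward layer, each wrapped in a residual connection + layer norm."""
    def __init__(self, d_model=512, n_heads=8, d_ff=2048, dropout=0.1):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.drop = nn.Dropout(dropout)

    def forward(self, x):  # x: (seq_len, batch, d_model)
        attn_out, _ = self.self_attn(x, x, x)       # self-attention
        x = self.norm1(x + self.drop(attn_out))     # residual + layer norm
        x = self.norm2(x + self.drop(self.ffn(x)))  # position-wise FFN
        return x
```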

Copy-Augmented Transformers
Copy-augmented Transformers (Zhao et al., 2019) are a class of Transformers that also incorporate an attention-based copying mechanism (Gu et al., 2016; See et al., 2017; Jia and Liang, 2016) in the decoder. For each output token $y_t$ at output position $t$, the output probability distribution of a copy-augmented Transformer is a mixture of the decoder's generative distribution $p_{\mathrm{gen}}$ and a copy distribution $p_{\mathrm{copy}}$, which is defined as an encoder-decoder attention layer that assigns a distribution over tokens appearing in the source sentence. By defining a mixture weight parameter $\alpha^{\mathrm{copy}}_t$ for each decoding step, the output distribution can be compactly represented as follows:
$$p(y_t) = (1 - \alpha^{\mathrm{copy}}_t) \cdot p_{\mathrm{gen}}(y_t) + \alpha^{\mathrm{copy}}_t \cdot p_{\mathrm{copy}}(y_t) \quad (1)$$
The mixture weight balances how likely it is for the model to simply copy a source token rather than generate a possibly different token.

Denoising Autoencoders
Denoising autoencoders (DAEs) (Vincent et al., 2008) are a class of neural networks that learn to reconstruct the original input given a noisy version of it. Given an input $x$ and a (stochastic) noising function $x \mapsto \tilde{x}$, the encoder-decoder model of a DAE minimizes the reconstruction loss $\mathcal{L}(x, \mathrm{dec}(\mathrm{enc}(\tilde{x})))$, where $\mathcal{L}$ is some loss function.
Within the NLP domain, DAEs have been used for pre-training in seq2seq tasks that can be cast as a denoising task. For example, in GEC, pre-trained DAEs have been used for correcting erroneous sentences (Xie et al., 2018; Zhao et al., 2019). Another example is low-resource machine translation (MT) (Lample et al., 2018b), where pre-trained DAEs were used to convert word-by-word translations into natural sentences.
Several prior works, both early (Brockett et al., 2006; Felice and Yuan, 2014) and recent (Ge et al., 2018a; Xie et al., 2018; Zhao et al., 2019), introduced different strategies for generating erroneous text that can in turn be used for model (pre-)training. One major direction is to introduce an additional "back-translation" model (Ge et al., 2018a; Xie et al., 2018), inspired by its success in NMT (Sennrich et al., 2016a), and let this model learn to generate erroneous sentences from correct ones. While these back-translation models can learn naturally occurring grammatical errors from the parallel corpora in reverse, they also require relatively large amounts of parallel data, which are not readily available in low-resource scenarios. The other direction, which avoids these issues, is to use a pre-defined noising function to generate pre-training data for a denoising task (Zhao et al., 2019). Compared to Zhao et al. (2019), our work introduces a noising function that generates more realistic grammatical errors.
When pre-training a seq2seq model on an auxiliary denoising task, the choice of the noising function is important. For instance, in low-resource MT, Lample et al. (2018a,b) made use of a noising function that randomly inserts, replaces, or removes tokens, or mixes up nearby words, with uniform probabilities. They showed that this approach is effective in turning naive word-by-word translations into correct ones, both because the coverage of word-to-word dictionaries can be limited and because word order is frequently swapped between languages (e.g., going from SVO to SOV).
In GEC, Zhao et al. (2019) used a similar noising function to generate a pre-training dataset. However, we find that this noising function is less realistic for GEC than for low-resource MT. For example, randomly mixing up nearby words can be less effective for GEC, because word order errors occur less frequently than other major error categories, such as missing punctuation and noun number errors. Also, replacing a word with an arbitrary word from the vocabulary is a less realistic scenario than replacing it only within its associated common error categories, such as prepositions, noun numbers, and verb tenses.
To generate realistic pre-training data, we introduce a novel noising function that captures in-domain grammatical errors commonly made by human writers.

Constructing Noising Scenarios
We introduce two kinds of noising scenarios, using a token-based approach and a type-based approach.
In the token-based approach, we make use of human edits extracted from annotated GEC corpora using automated error annotation toolkits such as ERRANT (Bryant et al., 2017). We first take a subset of the training set, preferably one that contains in-domain sentences with high-quality annotations, and, using an error annotation toolkit, we collect all edits that occurred in the parallel corpus as well as how often each edit was made. We then keep edits that occur at least k times, where k is a pre-defined threshold (we fix k = 4 in our experiments), in order to prevent overfitting to this (possibly small) subset. These extracted edits include errors commonly made by human writers, including missing punctuation (e.g., a missing comma), preposition errors (e.g., of → at), and verb tense errors (e.g., has → have). As a result, we obtain an automatically constructed dictionary of common edits made by human annotators on the in-domain training set. We can then define a realistic noising scenario by randomly applying these human edits, in reverse, to a grammatically correct sentence.
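The extraction step can be sketched as follows. This assumes the ERRANT v2 Python API (earlier releases expose the same functionality as command-line scripts) and an iterable parallel_pairs of (source, corrected) sentence pairs from the in-domain subset; both are assumptions for illustration.

```python
from collections import Counter
import errant  # ERRANT v2+ Python API

annotator = errant.load("en")
edit_counts = Counter()

# parallel_pairs: (source, corrected) sentence pairs from the in-domain subset.
for src, cor in parallel_pairs:
    orig, corr = annotator.parse(src), annotator.parse(cor)
    for e in annotator.annotate(orig, corr):
        # Record each edit as an (original span, corrected span) pair.
        edit_counts[(e.o_str, e.c_str)] += 1

K = 4  # frequency threshold, to avoid overfitting to the (small) subset
# Reverse the direction: map each *correct* phrase to the errors made for it
# (a no-op entry, where a phrase maps to itself, can be added analogously).
noise_dict = {}
for (o, c), n in edit_counts.items():
    if n >= K:
        noise_dict.setdefault(c, []).append((o, n))
```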
In the type-based approach, we also make use of a priori knowledge and construct a noising scenario based on token types, including prepositions, nouns, and verbs. For each token type, we define a noising scenario based on commonly made errors associated with that token type, but without changing the type of the original token. In particular, we replace prepositions with other prepositions, nouns with their singular/plural version, and verbs with one of their inflected versions. This introduces another set of realistic noising scenarios, thereby increasing the coverage of the resulting noising function.
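Below is a sketch of these type-based scenarios. The word list and inflection table are illustrative stand-ins (a full implementation would use a proper morphological inflector), not our exact resources.

```python
import random

PREPOSITIONS = ["at", "by", "for", "from", "in", "of", "on", "to", "with"]

def noise_preposition(tok):
    # Replace a preposition with a different, randomly chosen preposition.
    return random.choice([p for p in PREPOSITIONS if p != tok])

def noise_noun(tok):
    # Toggle singular/plural; a crude heuristic standing in for a real inflector.
    return tok[:-1] if tok.endswith("s") else tok + "s"

def noise_verb(tok, inflections):
    # inflections maps a verb to its other surface forms, e.g.
    # {"have": ["has", "had", "having"], ...} (assumed precomputed).
    return random.choice(inflections.get(tok, [tok]))
```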

Generating Pre-training Data
Our goal is to construct a noising function that introduces grammatical errors commonly made by human writers in a specific setting (in this case, personal essays written by English students). Given sets of realistic noising scenarios, we can generate large amounts of erroneous sentences from high-quality English corpora, such as the Project Gutenberg corpus (Lahiri, 2014) and Wikipedia (Merity et al., 2016).
We first check if a token exists in the dictionary of token edits. If it does, a token-based error is generated with probability 0.9. Specifically, the token is replaced by one of its associated edits, with probability proportional to the frequency of each edit. For example, the token for may be replaced with during, in, or four, or left as for (coming from a no-op edit).
If a token is not processed through the token-based scenario, we then examine whether it belongs to one of the pre-defined token types: in our case, prepositions, nouns, and verbs. If the token belongs to one such type, we apply the corresponding noising scenario.
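Combining the two approaches, the per-token noising decision can be sketched as follows (building on the type-based helpers above; the POS tags and inflection table are assumed inputs, e.g., from spaCy):

```python
import random

def noise_token(tok, pos, noise_dict, inflections, p_token=0.9):
    """Apply the token-based scenario when possible; otherwise fall back to
    the type-based scenario for prepositions, nouns, and verbs."""
    if tok in noise_dict and random.random() < p_token:
        errors, freqs = zip(*noise_dict[tok])
        # Sample an error with probability proportional to its frequency;
        # a no-op entry in the dictionary leaves the token unchanged.
        return random.choices(errors, weights=freqs, k=1)[0]
    if pos == "ADP":   # preposition
        return noise_preposition(tok)
    if pos == "NOUN":
        return noise_noun(tok)
    if pos == "VERB":
        return noise_verb(tok, inflections)
    return tok
```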

Transferring Pre-trained DAE Weights
As discussed by Zhao et al. (2019), an important benefit of pre-training a DAE is that it provides good initial values for both the encoder and the decoder weights of the seq2seq model. Given a pre-trained DAE, we initialize our seq2seq GEC model with the learned weights of the DAE and train on all available parallel training corpora with smaller learning rates. This model transfer approach (Wang and Zheng, 2015) can be viewed as a (relatively simple) form of sequential transfer learning (Ruder, 2019).
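In plain PyTorch terms, the transfer amounts to checkpoint-based initialization followed by optimization with a reduced learning rate. The sketch below uses a generic nn.Transformer and an assumed checkpoint path; in our system, the architecture of the GEC model exactly matches that of the pre-trained DAE.

```python
import torch
import torch.nn as nn

# Stand-in for the (copy-augmented) Transformer whose architecture matches
# the pre-trained DAE exactly, so the state dict keys line up.
model = nn.Transformer(d_model=512, nhead=8)

ckpt = torch.load("dae_pretrained.pt", map_location="cpu")  # path assumed
model.load_state_dict(ckpt["model"])

# Continue training with a smaller learning rate than in pre-training
# (1e-4 vs. 5e-4; see Appendix F).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```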

Adaptation by Fine-tuning
As noted by Junczys-Dowmunt et al. (2018), the distribution of grammatical errors occurring in text can differ across the domain and content of the text. For example, a Wikipedia article introducing a historical event may involve more rare words than a personal essay would. The distribution can also be affected significantly by the writer's style and proficiency, as well as the annotator's preferred style of writing (e.g., British vs. American styles, synonymous word choices, and Oxford commas).
In this work, given that the primary source of evaluation is personal essays at various levels of English proficiency, in particular the W&I+LOCNESS dataset (Yannakoudakis et al., 2018), we adapt our trained models to these characteristics of the test set by fine-tuning the model only on the training portion of W&I, which largely matches the domain of the development and test sets. Similar to our training step in §5.1, we use (even) smaller learning rates. Overall, this sequential transfer learning framework can also be viewed as an alternative to oversampling in-domain data sources, as proposed by Junczys-Dowmunt et al. (2018).

A Context-Aware Neural Spellchecker
Many recent GEC systems include an off-the-shelf spellchecker, such as the open-source package enchant (Sakaguchi et al., 2017; Junczys-Dowmunt et al., 2018) and Microsoft's Bing spellchecker (Ge et al., 2018a,b). While the idea of incorporating context into spellchecking has been repeatedly discussed in the literature (Flor and Futagi, 2012; Chollampatt and Ng, 2017), popular open-source spellcheckers such as hunspell primarily operate at the word level. This fundamentally limits their capacity, because it is often difficult to determine which word was intended without context. For example, given the input sentence This is an esay about my favorite sport., hunspell invariably suggests easy as its top candidate for esay, which should actually be corrected to essay.
Our spellchecker incorporates context into hunspell using a pre-trained neural language model (LM). Specifically, we re-rank the top candidates suggested by hunspell by feeding each candidate, along with its context, to the neural LM and scoring the resulting sentence.
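A minimal sketch of this re-ranking, assuming the pyhunspell bindings (dictionary paths vary by system) and a generic lm_score helper standing in for the pre-trained neural LM:

```python
import hunspell  # pyhunspell bindings

checker = hunspell.HunSpell("/usr/share/hunspell/en_US.dic",
                            "/usr/share/hunspell/en_US.aff")

def spellcheck(tokens, lm_score, top_k=5):
    """Replace each misspelled token with the hunspell suggestion that
    maximizes the LM score of the full sentence. lm_score(tokens) is an
    assumed helper returning a log-probability under the neural LM."""
    out = list(tokens)
    for i, tok in enumerate(tokens):
        if checker.spell(tok):
            continue  # already a valid word
        candidates = checker.suggest(tok)[:top_k]
        if candidates:
            # Score each candidate in its full sentential context.
            out[i] = max(candidates,
                         key=lambda c: lm_score(tokens[:i] + [c] + tokens[i+1:]))
    return out
```

With this, esay in This is an esay about my favorite sport. is scored in context, letting essay overtake hunspell's context-free top suggestion easy.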

Experiments
Throughout our experiments, we use fairseq (Ott et al., 2019), a publicly available sequence-to-sequence modeling toolkit based on PyTorch (Paszke et al., 2017). Specifically, we take fairseq-0.6.1 and add our own implementations of a copy-augmented Transformer model as well as several GEC-specific auxiliary losses.

Datasets & Setups
In Table 1, we summarize all relevant data sources, their sizes, whether they are public, and the number of annotators.
For pre-training, we use the Gutenberg dataset (Lahiri, 2014), the Tatoeba dataset, and the WikiText-103 dataset (Merity et al., 2016). We learned through initial experiments that the quality of the pre-training data is crucial to the final model's performance, because our DAE model assumes (§4) that these unannotated corpora contain few grammatical errors. Our choice of corpora is based on both the quality and diversity of text: Gutenberg contains clean novel writing with minimal grammatical errors, Tatoeba contains colloquial sentences used as sample sentences in dictionaries, and WikiText-103 contains "Good" and "Featured" articles from Wikipedia. Our final pre-training data is a collection of 45M (perturbed, correct) sentence pairs based on these datasets, with our noising approach (§4) applied multiple times to each dataset to approximately balance data from each source (1x Gutenberg, 12x Tatoeba, and 5x WikiText-103).

[Figure 1: Overview of our system: sequential transfer learning using (copy-augmented) Transformers, followed by post-processing (<unk> edit removal, re-ranking, and error type control).]

Our default setup is the "Restricted Track" scenario (§7.5) for the BEA 2019 Shared Task, where we use four data sources: the FCE dataset (Bryant et al., 2019), the Lang-8 dataset (Mizumoto et al., 2011; Tajiri et al., 2012), the NUCLE (v3.3) dataset (Dahlmeier et al., 2013), and the newly released Write & Improve and LOCNESS (W&I+L) datasets (Yannakoudakis et al., 2018); see Appendix B for an exploratory data analysis. As in previous results, we remove all duplicates from the Lang-8 dataset but keep multiple annotations (if available), leaving only 575K parallel examples. For the "Low Resource Track" (§7.6), we use a 3:1 train-test random split of the W&I+L development set, keeping the proportions of proficiency levels the same. In both tracks, we report our final results on the W&I+L test set, which contains 5 annotations. Further, because the W&I+L dataset is relatively new, we also include results on the CoNLL-2014 dataset (Ng et al., 2014), with and without using the W&I+L dataset during training (§7.7). In Table 2, we summarize which datasets were used in each setup.

Pre-processing
As part of pre-processing, we first fix minor tokenization issues in the dataset using regular expressions. We use spaCy v1.9 (Honnibal and Montani, 2017) to make tokenization consistent with the final evaluation module (ERRANT). This tokenized input is then fed to our context-aware neural spellchecker (§6). For the neural LM, we use a gated convolutional language model (Dauphin et al., 2017) pre-trained on WikiText-103 (Merity et al., 2016).
During spellchecking, we also found it beneficial to fix casing errors within our context-aware spellchecking process. To fix casing errors, we extract a list of words that are used in capitalized form far more often than in their lowercase form (more than 99 times as often) in WikiText-103 (Merity et al., 2016). We then include a capitalized version of a word as a candidate in the LM re-scoring process if its capitalized form appears in this extracted list of commonly capitalized words.
Before feeding the spellchecked text into our seq2seq model, we apply byte-pair encoding (BPE) (Sennrich et al., 2016b) using SentencePiece (Kudo and Richardson, 2018). We first train a SentencePiece model with a 32K vocabulary on the original Gutenberg corpus, and then apply this model to all model inputs. This allows us to avoid <unk> tokens in most training and validation sets, including the W&I+L development set.
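For reference, the training and encoding steps look roughly as follows, assuming a recent sentencepiece release with the keyword-argument API (file paths are placeholders):

```python
import sentencepiece as spm

# Train a 32K-vocabulary BPE model on the (clean) Gutenberg corpus ...
spm.SentencePieceTrainer.train(
    input="gutenberg.txt",   # one sentence per line; path assumed
    model_prefix="gec_bpe",
    vocab_size=32000,
    model_type="bpe",
)

# ... and apply it to all model inputs.
sp = spm.SentencePieceProcessor(model_file="gec_bpe.model")
pieces = sp.encode("This is an essay about my favorite sport.", out_type=str)
```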

Model & Training Details
Throughout our experiments, we use two variants of the Transformer model: the "vanilla" Transformer (Vaswani et al., 2017), in its base and large configurations, and the copy-augmented Transformer (Zhao et al., 2019). For each model configuration, we train two independent models using different seeds.
Our model training is a three-stage process, as illustrated in Figure 1: DAE pre-training, training, and fine-tuning, except in the Low Resource Track, where there is no fine-tuning data (see Table 2). At each step, we train a model until its ERRANT score on the development set converges, and use the learned weights as initial values for the next step. In all training steps, we used the Adam (Kingma and Ba, 2015) optimizer.
Our final model is an ensemble of the different model configurations and seeds. Among the six (four for the Low Resource Track) best models, we greedily search for the best combination, starting with the best-performing single model.
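The greedy search can be sketched as follows, where dev_score is an assumed helper that decodes the development set with a given subset of models ensembled and returns the ERRANT F0.5 score:

```python
def greedy_ensemble(models, dev_score):
    """Forward selection: start from the best single model and keep adding
    whichever remaining model improves the development score the most."""
    remaining = sorted(models, key=lambda m: dev_score([m]), reverse=True)
    ensemble = [remaining.pop(0)]
    best = dev_score(ensemble)
    improved = True
    while improved and remaining:
        improved = False
        score, pick = max((dev_score(ensemble + [m]), i)
                          for i, m in enumerate(remaining))
        if score > best:
            ensemble.append(remaining.pop(pick))
            best, improved = score, True
    return ensemble, best
```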

Post-processing
Our post-processing phase involves three steps. First, we find any <unk> tokens in the original input text and, using ERRANT, remove all edits associated with those tokens. Next, since many of the model's corrections can still be unnatural, if not incorrect, we re-rank candidate corrections within each sentence using a pre-trained neural LM (Dauphin et al., 2017). Specifically, we consider removing any combination of up to 7 edits per sentence, and choose the combination that yields the highest LM score. Finally, we noticed that, as in many previous results, our neural system performs well on some error categories (e.g., M:PUNCT) but poorly on others (e.g., R:OTHER). Because ERRANT provides a fine-grained analysis of model performance based on error types, we found it beneficial to remove edits belonging to categories on which the model performs too poorly. Given our final model, we repeatedly remove all edits from a randomly sampled subset of (at most N) categories, and choose to remove the subset of error categories that gives the highest score on the development set.
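The re-ranking step can be sketched as an exhaustive search over small subsets of removable edits; apply_edits and lm_score are assumed helpers, and sentences typically contain few enough edits for this search to be tractable:

```python
from itertools import combinations

def rerank_edits(source, edits, apply_edits, lm_score, max_remove=7):
    """Find the subset of up to max_remove edits whose removal yields the
    highest LM score for the corrected sentence."""
    best_edits = list(edits)
    best = lm_score(apply_edits(source, best_edits))
    for k in range(1, min(max_remove, len(edits)) + 1):
        for removed in combinations(edits, k):
            kept = [e for e in edits if e not in removed]
            score = lm_score(apply_edits(source, kept))
            if score > best:
                best, best_edits = score, kept
    return best_edits
```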

Restricted Track Results
In Table 3, we summarize our results on the Restricted Track. The results illustrate that each step in our approach substantially improves upon the previous model, on both the W&I+L development and test sets. We highlight that our pre-training step with realistic human errors already achieves a 54.82 F0.5 score on span-based correction in ERRANT for the test set, even though we only indirectly used the W&I training set for error extraction and no other parallel corpora. This suggests that pre-training on a denoising task with realistic and common errors can already lead to a decent GEC system. Our final ensemble model is a combination of five independent models (one base model, two large models, and two copy-augmented models), achieving a 69.06 F0.5 score on the test set.

Low Resource Track Results
In Table 4, we summarize our results on the Low Resource Track. Similar to the Restricted Track, each step in our approach improves significantly upon the previous model, and despite the lack of parallel data (3K sentence pairs for training, 1K for validation), our pre-training step already achieves a 51.71 F0.5 score on the test set. Compared to the Restricted Track, the only difference in pre-training is that the reverse dictionary for the noising function was constructed using much less parallel data (3K), but we see that this amount of parallel data is already enough to get within 3 points of our pre-trained model in the Restricted Track.
Our final model is an ensemble of two independent models (one base model and one copy model), achieving a 61.47 F0.5 score on the test set.

CoNLL-2014 Results
In Table 5, we summarize our results on the CoNLL-2014 test set (see Appendix F for step-by-step training progress). The results show that our approach is competitive with some of the recent state-of-the-art results that achieve around 56 MaxMatch (M2) scores (see http://nlpprogress.com/english/grammatical_error_correction.html), and it further achieves a 60+ M2 score when the W&I+L dataset is used during training. This illustrates that our approach can also achieve "near human-level performance" (Grundkiewicz and Junczys-Dowmunt, 2018). We also note that the 60.33 M2 score was obtained by the final ensemble model from §7.5, which includes a fine-tuning step on the W&I data. This suggests that "overfitting" to the W&I dataset does not necessarily imply reduced performance on an external dataset such as CoNLL-2014.

Error Analysis
Here, we give an analysis of our model's performance on some of the major ERRANT error categories on the W&I test set; detailed results are available in Table 10. We observe that our model performs well on syntax-related error types, i.e., subject-verb agreement (VERB:SVA, 84.09 F0.5), noun numbers (NOUN:NUM, 72.19), and prepositions (PREP, 64.27), all of which are included as part of our type-based error generation in the pre-training data (§4.2). Our model also achieves 77.26 on spelling errors (SPELL) and 75.83 on orthographic errors (ORTH), both of which are improvements made mostly by our context-aware neural spellchecker. Our model further achieves 77.86 on punctuation errors (PUNCT), which happen to be the most common error category in the W&I+L dataset. This may be due to both our use of errors extracted from the W&I dataset during pre-training and our fine-tuning step. Finally, we find it challenging to match human annotators' "naturalness" edits, such as VERB (26.76), NOUN (41.67), and OTHER (36.53). This is possibly due to the variability in annotation styles and the lack of large training data with multiple human annotations.

Effect of Realistic Error Generation
To see how effective our realistic-error pre-training is, we compare it with the method of Zhao et al. (2019), in which random insertion, deletion, and substitution each occur with probability 0.1 at every word, and words are reordered with a certain probability. As seen in Tables 6 and 7, our pre-training method outperforms the random-noising baseline in both the Restricted and Low Resource Tracks, by 22.57 and 19.70 points, respectively. This advantage persists through each subsequent transfer learning step. The performance gap, however, decreases to 5.3 after training and to 3.2 after fine-tuning in the Restricted Track. In the Low Resource Track, on the other hand, the gap slightly increases to 20.54 after training. This leads to the conclusion that our pre-training functions as a proxy for training, because our generated errors resemble the human errors in the training data more closely than random errors do.

Effect of Context-Aware Spellchecking
We further investigate the effect of adding context awareness and casing correction on top of the off-the-shelf hunspell, which we consider as a baseline. We test three spellchecker variants: hunspell, hunspell with a neural LM, and our final spellchecker model. On the original W&I+L test set, our LM-based approach improves the ERRANT F0.5 score by 5.07 points, and fixing casing issues further improves this score by 4.02 points. As a result, we obtain a 32.69 F0.5 score just by applying our context-aware spellchecker.

Conclusion & Future Work
We introduced a neural GEC system that leverages pre-training using realistic errors, sequential transfer learning, and context-aware spellchecking with a neural LM. Our system achieved competitive results on the newly released W&I+L dataset in both standard and low-resource settings.

There are several interesting future directions following our work. One is to extend sentence-level GEC systems to multi-sentence contexts, for example by including the previous sentence, to better cope with complex semantic errors such as collocation. Because the W&I+L dataset is also a collection of (multi-)paragraph essays, adding multi-sentence context can improve these GEC systems. Also, to better understand the role of the many components in modern GEC systems, it is important to examine which components are more necessary than others.

A Copy-Augmented Transformers: Formal Derivation
Copy-augmented Transformers (Zhao et al., 2019) incorporate an attention-based copying mechanism (Gu et al., 2016; See et al., 2017; Jia and Liang, 2016) in the decoder of Transformers. For each output token $y_t$ at output position $t$, given a source token sequence $x = (x_1, \ldots, x_T)$, the output probability distribution over the token vocabulary $V$ is defined as:
$$p_{\mathrm{gen}}(y_t \mid y_{1:t-1}; x) = \mathrm{softmax}(W_{\mathrm{gen}} h^{\mathrm{dec}}_t)$$
where $\mathrm{enc}$ denotes the encoder that maps the source token sequence $x$ to a sequence of hidden vectors $H^{\mathrm{enc}} \in \mathbb{R}^{d \times T}$, $\mathrm{dec}$ denotes the decoder that takes output tokens at previous time steps along with the encoded embeddings and produces a hidden vector $h^{\mathrm{dec}}_t \in \mathbb{R}^d$, and $W_{\mathrm{gen}} \in \mathbb{R}^{|V| \times d}$ is a learnable linear output layer that maps the hidden vector to pre-softmax output scores ("logits"). We call the resulting distribution the (token) generative distribution $p_{\mathrm{gen}}$.
A copy attention layer can be defined as an additional (possibly multi-head) attention layer between the encoder outputs and the final-layer hidden vector at the current decoding step. The attention layer yields two outcomes, the layer output $o_t$ and the corresponding attention scores $s_t$. The copy distribution is then defined as the attention scores themselves:
$$p_{\mathrm{copy}}(y_t \mid y_{1:t-1}; x) = s_t$$
(In practice, this involves adding up the copy scores defined for each source token into a $|V|$-dimensional vector, using commands such as scatter_add() in PyTorch.) The final output of a copy-augmented Transformer is a mixture of both the generative and copy distributions. The mixture weight $\alpha^{\mathrm{copy}}_t$ is defined at each decoding step as:
$$\alpha^{\mathrm{copy}}_t = \sigma(w_{\alpha}^{\top} o_t)$$
where $w_{\alpha} \in \mathbb{R}^d$ is a learnable linear output layer and $\sigma$ is the logistic sigmoid. (When computing the mixture weight, Zhao et al. (2019) apply a linear layer to $H^{\mathrm{enc}} \tilde{s}_t$, where $\tilde{s}_t$ are the attention scores before the softmax; our formulation gives essentially the same copying mechanism while being more compatible with standard Transformer implementations. For simplicity, we omit the dependencies of all probabilities on both $y_{1:t-1}$ and $x$.) The mixture weight balances how likely it is for the model to simply copy a source token rather than generate a possibly different token.
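In code, the mixture can be sketched as follows, mirroring the scatter_add() remark above (shapes and names are illustrative):

```python
import torch

def copy_augmented_output(logits, copy_scores, src_tokens, alpha):
    """Mix the generative and copy distributions, as in Eq. (1).
    logits:      (batch, |V|) pre-softmax decoder outputs
    copy_scores: (batch, src_len) copy attention scores (already softmaxed)
    src_tokens:  (batch, src_len) source token ids (LongTensor)
    alpha:       (batch, 1) per-step mixture weights in [0, 1]"""
    p_gen = torch.softmax(logits, dim=-1)
    # Sum per-position copy scores into a |V|-dimensional distribution;
    # scores of repeated source tokens accumulate.
    p_copy = torch.zeros_like(p_gen).scatter_add_(1, src_tokens, copy_scores)
    return (1 - alpha) * p_gen + alpha * p_copy
```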

B Exploratory Data Analysis
B.1 Data Sizes

Figure 2 illustrates the number of available parallel sentence pairs (counting multiple annotations) across data sources. Note that the vertical axis is capped at 100K for a better visual comparison among the other sources. For the Lang-8 dataset, we count all available (ranging from 1 to 8) annotations for each of the 1.04M original sentences. Also note that we only use the subset of Lang-8 whose source and target sentences are different, leaving only 575K sentences instead of 1.11M.

[Figure 2: Data size per source for all Restricted Track training data. Numbers include multiple annotations for Lang-8. The vertical axis is capped at 100K for a better visual comparison among the smaller sources. The three FCE splits (train, dev, test) are collectively used for training, and the three W&I+L splits correspond to three English proficiency levels ("A", "B", "C"). After duplicate removal, only 575K of the Lang-8 parallel corpus are actually used for training.]

Figure 3 illustrates the distribution of sentence lengths and the number of edits per sentence across different data sources. Table 9 includes our permutation test results on the number of edits per sentence, normalized by sentence length (i.e., the number of word-level tokens), between training data sources; we used the off-the-shelf mlxtend package to run permutation tests (see http://rasbt.github.io/mlxtend/user_guide/evaluate/permutation_test/). Using an approximate permutation test with 10K simulations and a significance level of α = 0.05, we find that there is a statistically significant difference in the normalized edit count per sentence between the W&I training set and each of FCE, NUCLE, and Lang-8. This serves as a preliminary experiment showing how the distribution of grammatical errors can differ significantly across data sources, even when they belong to roughly similar domains.
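The test itself is a one-liner with mlxtend; wi_edits and fce_edits below are assumed arrays of length-normalized edit counts per sentence for two sources:

```python
from mlxtend.evaluate import permutation_test

p_value = permutation_test(
    wi_edits, fce_edits,
    method="approximate",  # Monte Carlo approximation of the permutation test
    num_rounds=10000,      # 10K simulations, as in our experiments
    seed=0,
)
print("significant at alpha = 0.05:", p_value < 0.05)
```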

C Full Noising Algorithm
Algorithms 1 and 2 detail our noising scenarios. In Table 11 (Appendix F), we include the training progress for our CoNLL-2014 result.

D Results on Error Categories
A noticeable difference between this result and our results for the Restricted and Low Resource Tracks is that adaptation via fine-tuning is not necessarily effective here. We hypothesize that this is mostly because the training subset to which we fine-tune our model (NUCLE) comes from a different source than the actual test set (CoNLL-2014): despite covering similar domains (personal essays from English students), the two datasets can still differ in many other characteristics, including the writers' English proficiency and annotation styles.

F Training Details
Our model training is a three-stage process: DAE pre-training, training, and fine-tuning, except in the Low Resource Track, where there is no fine-tuning data. At each step, we train a model until its ERRANT score on the development set reaches convergence, and use the learned weights as initial values for the next step. For pre-training, we used a learning rate of $5 \cdot 10^{-4}$ for the base and copy-augmented Transformers and $10^{-3}$ for the large Transformer. For training, we reset the optimizer and set the learning rate to $10^{-4}$. For fine-tuning (if available), we again reset the optimizer and set the learning rate to $5 \cdot 10^{-5}$. In all training steps, we used the Adam (Kingma and Ba, 2015) optimizer with the inverse square-root schedule and a warmup learning rate of $10^{-7}$, along with a dropout rate of 0.3.

[Table 11: Training progress on CoNLL-2014. No W&I+LOCNESS datasets were used in these results. 'b' and 'c' refer to the base and copy configurations of the Transformer, respectively. Evaluation is done using the MaxMatch (M2) scorer. Pre-processing and post-processing are included before the first step and after the last step, respectively.]
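For reference, the inverse square-root schedule with warmup can be written as follows; warmup_steps is an assumed value (not reported above), while the peak and warmup-initial learning rates follow the numbers given for pre-training:

```python
def inverse_sqrt_lr(step, peak_lr=5e-4, warmup_steps=4000, warmup_init_lr=1e-7):
    """Inverse square-root learning-rate schedule with linear warmup,
    in the style of fairseq's inverse_sqrt scheduler."""
    if step < warmup_steps:
        # Linearly increase from warmup_init_lr to peak_lr.
        return warmup_init_lr + (peak_lr - warmup_init_lr) * step / warmup_steps
    # Afterwards, decay proportionally to 1/sqrt(step).
    return peak_lr * (warmup_steps ** 0.5) / (step ** 0.5)
```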

G.1 Effect of Copying Mechanisms & Ensembles
One of our contributions is to highlight the benefit of ensembling multiple models with diverse characteristics. As shown in Table 3, the final ensemble step involving different types of models was crucial for our model's performance, improving the test score by over 6 F0.5 points. We first noticed that the copy-augmented Transformer learns to be more conservative in its edits than the vanilla Transformer (i.e., higher precision but lower recall given similar overall scores), presumably because the model includes an inductive bias that favors copying (i.e., not editing) the input token via its copy attention scores. Table 12 shows this phenomenon for the Restricted Track. Given multiple models with diverse characteristics, the choice of models for the ensemble translates to controlling how conservative we want the final model to be. For example, combining one vanilla model with multiple independent copy-augmented models will result in a more conservative ensemble. This could serve as an alternative to other methods that control the precision-recall trade-off, such as the edit-weighted loss (Junczys-Dowmunt et al., 2018).