SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization

Transfer learning has fundamentally changed the landscape of natural language processing (NLP). Many state-of-the-art models are first pre-trained on a large text corpus and then fine-tuned on downstream tasks. However, due to limited data resources from downstream tasks and the extremely high complexity of pre-trained models, aggressive fine-tuning often causes the fine-tuned model to overfit the training data of downstream tasks and fail to generalize to unseen data. To address such an issue in a principled manner, we propose a new learning framework for robust and efficient fine-tuning of pre-trained models to attain better generalization performance. The proposed framework contains two important ingredients: 1. Smoothness-inducing regularization, which effectively manages the complexity of the model; 2. Bregman proximal point optimization, which is an instance of trust-region methods and can prevent aggressive updating. Our experiments show that the proposed framework achieves new state-of-the-art performance on a number of NLP tasks including GLUE, SNLI, SciTail and ANLI. Moreover, it outperforms the state-of-the-art T5 model, the largest pre-trained model with 11 billion parameters, on GLUE.


Introduction
The success of natural language processing (NLP) techniques relies on huge amounts of labeled data in many applications. However, large amounts of labeled data are usually prohibitively expensive to obtain. To address this issue, researchers have resorted to transfer learning.
Transfer learning considers the scenario where we have limited labeled data from the target domain for a certain task, but relevant tasks with large amounts of data from different domains (also known as out-of-domain data). The goal is to transfer the knowledge from the high-resource domains to the low-resource target domain. Here we are particularly interested in the popular two-stage transfer learning framework (Pan and Yang, 2009). The first stage is pre-training, where a high-capacity model is trained on the out-of-domain high-resource relevant tasks. The second stage is fine-tuning, where the high-capacity model is adapted to the low-resource task in the target domain. For many applications in NLP, most popular transfer learning methods choose to pre-train a large language model, e.g., ELMo (Peters et al., 2018), GPT (Radford et al., 2019) and BERT (Devlin et al., 2019). Such a language model can capture general semantic and syntactic information that can be further used in downstream NLP tasks. The language model is particularly attractive because it can be trained in a completely unsupervised manner with huge amounts of unlabeled data, which are extremely cheap to obtain from the internet nowadays. The resulting extremely large multi-domain text corpus allows us to train huge language models. To the best of our knowledge, by far the largest language model, T5, has an enormous size of about 11 billion parameters (Raffel et al., 2019).

Footnote: Work was done during Haoming Jiang's internship at Microsoft Dynamics 365 AI. Haoming Jiang and Tuo Zhao are affiliated with Georgia Institute of Technology. Pengcheng He and Weizhu Chen are affiliated with Microsoft Dynamics 365 AI. Xiaodong Liu and Jianfeng Gao are affiliated with Microsoft Research. Emails: jianghm@gatech.edu, {penhe,wzchen}@microsoft.com, {xiaodl,jfgao}@microsoft.com, tourzhao@gatech.edu. Code: https://github.com/namisan/mt-dnn
For the second fine-tuning stage, researchers adapt the pre-trained language model to the target task/domain. They usually replace the top layer of the language model with a task/domain-specific sub-network, and then continue to train the new model with the limited data of the target task/domain. Such a fine-tuning approach accounts for the low-resource issue in the target task/domain, and has achieved state-of-the-art performance in many popular NLP benchmarks (Devlin et al., 2019; Liu et al., 2019c; Yang et al., 2019; Lan et al., 2019; Dong et al., 2019; Raffel et al., 2019).
Due to the limited data from the target task/domain and the extremely high complexity of the pre-trained model, aggressive fine-tuning often makes the adapted model overfit the training data of the target task/domain and therefore fail to generalize to unseen data. To mitigate this issue, fine-tuning methods often rely on hyper-parameter tuning heuristics. For example, Howard and Ruder (2018) use a heuristic learning rate schedule and gradually unfreeze the layers of the language model to improve fine-tuning performance; Peters et al. (2019) instead suggest adapting only certain layers while freezing the others; Houlsby et al. (2019) and Stickland and Murray (2019) propose to add additional layers to the pre-trained model and fine-tune both the original and additional layers, or only the additional layers. However, these methods require significant tuning effort.
To fully harness the power of fine-tuning in a more principled manner, we propose a new learning framework for robust and efficient fine-tuning of pre-trained language models through regularized optimization techniques. Specifically, our framework consists of two important ingredients for preventing overfitting: (I) To effectively control the extremely high complexity of the model, we propose a Smoothness-inducing Adversarial Regularization technique. Our proposed regularization is motivated by local shift sensitivity in the robust statistics literature. Such regularization encourages the output of the model not to change much when a small perturbation is injected into the input. Therefore, it enforces the smoothness of the model and effectively controls its capacity (Mohri et al., 2018). (II) To prevent aggressive updating, we propose a class of Bregman Proximal Point Optimization methods. Our proposed optimization methods introduce a trust-region-type regularization (Conn et al., 2000) at each iteration, and then update the model only within a small neighborhood of the previous iterate. Therefore, they can effectively prevent aggressive updating and stabilize the fine-tuning process.
We compare our proposed method with several state-of-the-art competitors proposed in Zhu et al. (2020); Liu et al. (2019b,c); Lan et al. (2019); Raffel et al. (2019) and show that our proposed method significantly improves training stability and generalization, and achieves comparable or better performance on multiple NLP tasks. We highlight that our single model with 356M parameters (without any ensemble) achieves three state-of-the-art results on GLUE, even compared with all existing ensemble models and the T5 model (Raffel et al., 2019), which contains 11 billion parameters. Furthermore, we also demonstrate that the proposed framework complements SOTA fine-tuning methods (Liu et al., 2019b) and outperforms the T5 model.
We summarize our contributions as follows: 1. We introduce smoothness-inducing adversarial regularization and proximal point optimization into large-scale language model fine-tuning; 2. We achieve state-of-the-art results on several popular NLP benchmarks (e.g., GLUE, SNLI, SciTail, and ANLI). Notation: We use f(x; θ) to denote a mapping f, parameterized by θ, from input sentences x to an output space, where the output is a multi-dimensional probability simplex for classification tasks and a scalar for regression tasks. Π_A denotes the projection operator onto the set A. D_KL(P||Q) = Σ_k p_k log(p_k/q_k) denotes the KL-divergence between two discrete distributions P and Q with probability masses p_k and q_k, respectively.
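As a plain-Python illustration of this notation (a toy sketch, not the paper's implementation), the KL-divergence between two discrete distributions can be computed as:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """D_KL(P||Q) = sum_k p_k * log(p_k / q_k) for discrete distributions.

    eps guards the logarithm against zero probabilities.
    """
    return sum(pk * math.log((pk + eps) / (qk + eps)) for pk, qk in zip(p, q))
```

Note that D_KL is zero when the two distributions coincide and is asymmetric in general, which is why the paper later symmetrizes it.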

Background
The transformer models were originally proposed in Vaswani et al. (2017) for neural machine translation. Their superior performance motivated Devlin et al. (2019) to propose a bidirectional transformer-based language model named BERT. Specifically, Devlin et al. (2019) pre-trained the BERT model on a large corpus without any human annotation through unsupervised learning tasks. BERT motivated many follow-up works to further improve the pre-training by introducing new unsupervised learning tasks (Dong et al., 2019; Joshi et al., 2020), enlarging model size (Lan et al., 2019; Raffel et al., 2019), enlarging training corpora (Liu et al., 2019c; Yang et al., 2019; Raffel et al., 2019) and multi-tasking (Liu et al., 2019a,b).
The pre-trained language model is then adapted to downstream tasks and further fine-tuned. Specifically, the top layer of the language model can be replaced by a task-specific layer, which then continues to be trained on downstream tasks. To prevent overfitting, existing heuristics include choosing a small learning rate or a triangular learning rate schedule, using a small number of iterations, and other fine-tuning tricks mentioned in Howard and Ruder (2018).

Our proposed regularization technique is related to several existing works (Miyato et al., 2018; Zhang et al., 2019; Shu et al., 2018). These works consider similar regularization techniques, but target other applications with different motivations, e.g., semi-supervised learning, unsupervised domain adaptation and harnessing adversarial examples in image classification.
There is a related fine-tuning method, FreeLB (Zhu et al., 2020), which adapts a robust adversarial training method. However, our framework focuses on local smoothness, leading to a significant performance improvement. More discussion and comparison are provided in Section 4.

The Proposed Method
We describe the proposed learning framework, SMART, for robust and efficient fine-tuning of pre-trained language models. Our framework consists of two important ingredients: SMoothness-inducing Adversarial Regularization and BRegman pRoximal poinT opTimization.

Smoothness-Inducing Adversarial Regularization
We propose to impose an explicit regularization to effectively control the model complexity at the fine-tuning stage. Specifically, given the model f(·; θ) and n data points of the target task denoted by {(x_i, y_i)}_{i=1}^n, where the x_i's denote the embeddings of the input sentences obtained from the first embedding layer of the language model and the y_i's are the associated labels, our method essentially solves the following optimization for fine-tuning:

min_θ F(θ) = L(θ) + λ_s R_s(θ),   (1)

where L(θ) is the loss function defined as

L(θ) = (1/n) Σ_{i=1}^n ℓ(f(x_i; θ), y_i),

ℓ(·, ·) is the loss function depending on the target task, λ_s > 0 is a tuning parameter, and R_s(θ) is the smoothness-inducing adversarial regularizer. Here we define R_s(θ) as

R_s(θ) = (1/n) Σ_{i=1}^n max_{||x̃_i − x_i||_p ≤ ε} ℓ_s(f(x̃_i; θ), f(x_i; θ)),

where ε > 0 is a tuning parameter. Note that for classification tasks, f(·; θ) outputs a probability simplex and ℓ_s is chosen as the symmetrized KL-divergence, i.e.,

ℓ_s(P, Q) = D_KL(P||Q) + D_KL(Q||P).

For regression tasks, f(·; θ) outputs a scalar and ℓ_s is chosen as the squared loss, i.e., ℓ_s(p, q) = (p − q)^2. Note that the computation of R_s(θ) involves a maximization problem that can be solved efficiently by projected gradient ascent. We remark that the proposed smoothness-inducing adversarial regularizer was first used in Miyato et al. (2018) for semi-supervised learning with p = 2, then in Shu et al. (2018) for unsupervised domain adaptation with p = 2, and more recently in Zhang et al. (2019) for harnessing adversarial examples in image classification with p = ∞. To the best of our knowledge, we are the first to apply such a regularizer to the fine-tuning of pre-trained language models.
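To make the inner maximization concrete, the following is a minimal numpy sketch of computing R_s for a single example by projected gradient ascent, assuming a hypothetical scalar regression model f(x; θ) = tanh(θ·x) with ℓ_s(p, q) = (p − q)²; the real method applies this to the word embeddings of a large language model, typically with autograd rather than the analytic gradient used here:

```python
import numpy as np

def smoothness_reg(theta, x, eps=1e-3, eta=1e-2, steps=1, sigma=1e-3):
    """Approximate max_{||x_tilde - x||_inf <= eps} (f(x_tilde) - f(x))^2
    by projected gradient ascent, for the toy model f(z) = tanh(theta . z).

    eps/eta/sigma mirror the roles of the paper's ε, η, σ; the model and
    all defaults here are illustrative assumptions.
    """
    rng = np.random.default_rng(0)
    f = lambda z: np.tanh(theta @ z)
    x_tilde = x + sigma * rng.standard_normal(x.shape)   # random init near x
    for _ in range(steps):
        # analytic gradient of (f(x_tilde) - f(x))^2 w.r.t. x_tilde
        diff = f(x_tilde) - f(x)
        grad = 2.0 * diff * (1.0 - np.tanh(theta @ x_tilde) ** 2) * theta
        x_tilde = x_tilde + eta * np.sign(grad)          # ascent step
        x_tilde = x + np.clip(x_tilde - x, -eps, eps)    # project onto l_inf ball
    return (f(x_tilde) - f(x)) ** 2
```

The final projection guarantees ||x̃ − x||_∞ ≤ ε, so the returned value measures how much the model output can change within the allowed neighborhood of x.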
The smoothness-inducing adversarial regularizer essentially measures the local Lipschitz continuity of f under the metric ℓ_s. More precisely, the output of f does not change much if we inject a small perturbation (ℓ_p norm bounded by ε) into x_i. Therefore, by minimizing the objective in (1), we can encourage f to be smooth within the neighborhoods of all x_i's. Such a smoothness-inducing property is particularly helpful for preventing overfitting and improving generalization on a low-resource target domain for a certain task. An illustration is provided in Figure 1.
Note that the idea of measuring local Lipschitz continuity is similar to the local shift sensitivity criterion in the robust statistics literature, which dates back to the 1960s (Hampel, 1974; Huber, 2011). This criterion has been used to characterize the dependence of an estimator on the value of one of the sample points.

Bregman Proximal Point Optimization
We propose to develop a class of Bregman proximal point optimization methods to solve (1). Such optimization methods impose a strong penalty at each iteration to prevent the model from aggressive updates. Specifically, we use a pre-trained model as the initialization, denoted by f(·; θ_0). At the (t+1)-th iteration, the vanilla Bregman proximal point (VBPP) method takes

θ_{t+1} = argmin_θ F(θ) + μ D_Breg(θ, θ_t),   (2)

where μ > 0 is a tuning parameter, and D_Breg(·, ·) is the Bregman divergence defined as

D_Breg(θ, θ_t) = (1/n) Σ_{i=1}^n ℓ_s(f(x_i; θ), f(x_i; θ_t)),

where ℓ_s is defined in Section 3.1. As can be seen, when μ is large, the Bregman divergence at each iteration of the VBPP method essentially serves as a strong regularizer and prevents θ_{t+1} from deviating too much from the previous iterate θ_t. This is also known as a trust-region-type iteration in the optimization literature (Conn et al., 2000). Consequently, the Bregman proximal point method can effectively retain the knowledge of the out-of-domain data in the pre-trained model f(·; θ_0). Since each subproblem (2) of VBPP does not admit a closed-form solution, we need to solve it using SGD-type algorithms such as ADAM. Note that we do not need to solve each subproblem until convergence; a small number of iterations is sufficient to output a reliable initial solution for solving the next subproblem. Moreover, the Bregman proximal point method is capable of adapting to the information geometry (see more details in Raskutti and Mukherjee (2015)) of machine learning models and achieving better computational performance than the standard proximal point method (i.e., D_Breg(θ, θ_t) = ||θ − θ_t||_2^2) in many applications.

Acceleration by Momentum. Similar to other optimization methods in the literature, we can accelerate the Bregman proximal point method by introducing an additional momentum into the update.
Specifically, at the (t+1)-th iteration, the momentum Bregman proximal point (MBPP) method takes

θ_{t+1} = argmin_θ F(θ) + μ D_Breg(θ, θ̃_t),   (3)

where θ̃_t = (1 − β)θ_t + β θ̃_{t−1} is the exponential moving average and β ∈ (0, 1) is the momentum parameter. The MBPP method is also called the "Mean Teacher" method in the literature (Tarvainen and Valpola, 2017) and has been shown to achieve state-of-the-art performance on popular semi-supervised learning benchmarks. For convenience, we summarize the MBPP method in Algorithm 1.
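The two quantities driving VBPP and MBPP, the Bregman divergence D_Breg and the exponential moving average θ̃_t, can be sketched in numpy as follows; a toy softmax-linear classifier stands in for the language model, and all names here are illustrative:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sym_kl(p, q, eps=1e-12):
    """Symmetrized KL-divergence, the choice of l_s for classification."""
    p, q = p + eps, q + eps
    return np.sum(p * np.log(p / q) + q * np.log(q / p), axis=-1)

def bregman_divergence(theta, theta_ref, X):
    """D_Breg(theta, theta_ref): average l_s between predictions under the
    candidate parameters and under a reference iterate, for the toy model
    f(x; theta) = softmax(theta @ x)."""
    p = softmax(X @ theta.T)       # predictions under candidate parameters
    q = softmax(X @ theta_ref.T)   # predictions under the reference iterate
    return float(np.mean(sym_kl(p, q)))

def ema_update(theta_tilde, theta, beta):
    """Mean-teacher exponential moving average used by MBPP:
    theta_tilde_t = (1 - beta) * theta_t + beta * theta_tilde_{t-1}."""
    return (1.0 - beta) * theta + beta * theta_tilde
```

The divergence vanishes when θ equals the reference iterate and grows as the predictions drift apart, which is exactly the trust-region behavior described above; the EMA "teacher" trails the student parameters smoothly.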

Experiment -Main Results
We demonstrate the effectiveness of SMART for fine-tuning large language models on GLUE (Wang et al., 2018) by comparing with existing state-of-the-art methods. Dataset details can be found in Appendix 7.

Implementation Details
Our implementation of SMART is based on BERT (Wolf et al., 2019), RoBERTa (Liu et al., 2019c), MT-DNN (Liu et al., 2020b) and HNN. We used ADAM (Kingma and Ba, 2014) and RADAM (Liu et al., 2020a) as our optimizers, with a learning rate in {1×10^−5, 2×10^−5, 3×10^−5, 5×10^−5} and a batch size in {16, 32, 64}. The maximum number of epochs was set to 6. A linear learning rate decay schedule with warm-up of 0.1 was used, unless stated otherwise. We set the dropout rate of all task-specific layers to 0.1, except 0.3 for MNLI and 0.05 for CoLA. To avoid exploding gradients, we clipped the gradient norm to 1. All texts were tokenized using wordpieces and chopped to spans no longer than 512 tokens. For SMART, we set the perturbation size ε = 10^−5 and σ = 10^−5. We set μ = 1 and λ_s ∈ {1, 3, 5}. The learning rate η in Algorithm 1 was set to 10^−3. We set β = 0.99 for the first 10% of the updates (t ≤ 0.1T) and β = 0.999 for the rest of the updates (t > 0.1T), following Tarvainen and Valpola (2017). Lastly, we simply set S = 1 and T_x = 1 in Algorithm 1.

Algorithm 1 SMART: We use the smoothness-inducing adversarial regularizer with p = ∞ and the momentum Bregman proximal point method.
Notation: For simplicity, we denote g_i(x̃_i, θ̄_s) = (1/|B|) Σ_{x_i ∈ B} ∇_x ℓ_s(f(x_i; θ̄_s), f(x̃_i; θ̄_s)); AdamUpdate_B denotes the ADAM update rule for optimizing (3) using the mini-batch B; Π_A denotes the projection onto A.
Input: T: the total number of iterations; X: the dataset; θ_0: the parameters of the pre-trained model; S: the number of iterations for solving (2); σ^2: the variance of the random initialization of the x̃_i's; T_x: the number of iterations for updating the x̃_i's; η: the learning rate for updating the x̃_i's; β: the momentum parameter.
1: θ̃_1 ← θ_0
2: for t = 1, ..., T do
3:   θ̄_1 ← θ_{t−1}
4:   for s = 1, ..., S do
5:     Sample a mini-batch B from X
6:     For all x_i ∈ B, initialize x̃_i ← x_i + ν_i with ν_i ~ N(0, σ^2 I)
7:     for m = 1, ..., T_x do
8:       g̃_i ← g_i(x̃_i, θ̄_s) / ||g_i(x̃_i, θ̄_s)||_∞
9:       x̃_i ← Π_{||x̃ − x_i||_∞ ≤ ε}(x̃_i + η g̃_i)
10:    end for
11:    θ̄_{s+1} ← AdamUpdate_B(θ̄_s)
12:  end for
13:  θ_t ← θ̄_{S+1}
14:  θ̃_{t+1} ← (1 − β) θ̄_{S+1} + β θ̃_t
15: end for
Output: θ_T
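Putting the pieces together, here is a toy numerical sketch of one iteration in the spirit of Algorithm 1: a hypothetical scalar model f(x; θ) = tanh(θ·x), squared losses for both ℓ and ℓ_s, a single projected-ascent step for the adversarial inputs (T_x = 1), and a plain finite-difference gradient step in place of AdamUpdate. None of this is the authors' code; lam and mu stand in for λ_s and μ.

```python
import numpy as np

def smart_step(theta, theta_t, X, y, lam=1.0, mu=1.0, eps=1e-3,
               eta_x=1e-2, lr=0.1, sigma=1e-3):
    """One toy SMART iteration: build adversarial inputs, then take one
    descent step on task loss + lam * R_s + mu * D_Breg (all toy choices)."""
    rng = np.random.default_rng(0)
    f = lambda th, Z: np.tanh(Z @ th)

    # Inner maximization: random init, one sign-gradient ascent step,
    # then projection onto the l_inf ball of radius eps around X.
    X_adv = X + sigma * rng.standard_normal(X.shape)
    diff = f(theta, X_adv) - f(theta, X)
    grad_x = (2 * diff * (1 - np.tanh(X_adv @ theta) ** 2))[:, None] * theta
    X_adv = X + np.clip(X_adv + eta_x * np.sign(grad_x) - X, -eps, eps)

    def objective(th):
        task = np.mean((f(th, X) - y) ** 2)                    # L(theta)
        r_s = np.mean((f(th, X_adv) - f(th, X)) ** 2)          # R_s(theta)
        d_breg = np.mean((f(th, X) - f(theta_t, X)) ** 2)      # D_Breg
        return task + lam * r_s + mu * d_breg

    # Outer minimization: central-difference gradient descent on theta.
    g = np.zeros_like(theta)
    h = 1e-6
    for j in range(theta.size):
        e = np.zeros_like(theta)
        e[j] = h
        g[j] = (objective(theta + e) - objective(theta - e)) / (2 * h)
    return theta - lr * g, objective(theta)
```

Despite the drastic simplifications, the step exposes the three terms of the objective explicitly: the task loss, the smoothness regularizer on the perturbed inputs, and the Bregman penalty against the reference iterate θ_t.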

GLUE Main Results
We compare SMART with a range of strong baselines, including large pre-trained models, approaches with adversarial training, and a list of state-of-the-art models that have been submitted to the GLUE leaderboard. Since SMART is a generic framework, we evaluate it on two publicly available pre-trained models, the BERT BASE model (Devlin et al., 2019) and the RoBERTa LARGE model (Liu et al., 2019c). Most of our analyses are done with BERT BASE to make our results comparable to other work, since BERT BASE has been widely used as a baseline. To make our results comparable to other state-of-the-art models, we also evaluate the framework on the RoBERTa LARGE model.
• BERT (Devlin et al., 2019): This is the BERT BASE model released by the authors. In Devlin et al. (2019), the authors only reported development results on a few tasks; we therefore reproduced the baseline results, which are denoted by BERT ReImp.
• RoBERTa (Liu et al., 2019c): This is the RoBERTa LARGE model released by the authors; we present its reported results on the GLUE development set.
• PGD, FreeAT, FreeLB (Zhu et al., 2020): These are three adversarial training approaches built on top of the RoBERTa LARGE model.
• SMART: our proposed method as described in Section 3. We use both the BERT BASE model (SMART BERT) and the RoBERTa LARGE model (SMART RoBERTa) as the pre-trained models to evaluate the effectiveness of SMART.
The main results are reported in Table 1. This table can be clustered into two groups based on different pretrained models: the BERT BASE model (the first group) and the RoBERTa LARGE model (the second group). The detailed discussions are as follows.
For a fair comparison, we reproduced the BERT baseline (BERT ReImp), since several results on the GLUE development set were missing. Our reimplemented BERT baseline is even stronger than the one originally reported in Devlin et al. (2019). We also compare SMART with a range of models that used adversarial training, such as FreeLB. From the bottom rows in Table 1, SMART outperforms PGD and FreeAT across all 8 GLUE tasks. Compared with the current state-of-the-art adversarial training model, FreeLB, SMART outperforms it on 6 out of 8 GLUE tasks (MNLI, RTE, QNLI, MRPC, SST-2 and STS-B), showing the effectiveness of our model. Table 2 summarizes the current state-of-the-art models on the GLUE leaderboard. SMART obtains a competitive result compared with T5 (Raffel et al., 2019), which is the leading model on the GLUE leaderboard. T5 has 11 billion parameters, while SMART only has 356 million. Among this super large model (T5) and other ensemble models (e.g., ALBERT, ALICE), SMART, which is a single model, still sets new state-of-the-art results on SST-2, MRPC and STS-B. By combining with the multi-task learning framework (MT-DNN), MT-DNN-SMART obtains a new state of the art on GLUE, pushing the GLUE benchmark to 89.9%. More discussion will be provided in Section 5.3.

Experiment -Analysis and Extension
In this section, we first analyze the effectiveness of each component of the proposed method. We also study whether the proposed method is complementary to multi-task learning. We further extend SMART to domain adaptation and use both SNLI (Bowman et al., 2015) and SciTail (Khot et al., 2018) to evaluate its effectiveness. Finally, we verify the robustness of the proposed method on ANLI (Nie et al., 2019).

Ablation Study
Note that due to the limitation of time and computational resources, all the experiments reported below are based on the BERT BASE model. In this section, we study the importance of each component of SMART: smoothness-inducing adversarial regularization and Bregman proximal point optimization. All models in this study used BERT BASE as the encoder for fast training, and we include the vanilla BERT BASE model as an additional baseline for a fair comparison. SMART denotes the proposed model; the variant with λ_s = 0 is denoted -R_s, and the variant with μ = 0 is denoted -D_Breg.

Table 3: Ablation study of SMART on 5 GLUE tasks. Note that all models used the BERT BASE model as their encoder.
The results are reported in Table 3. As expected, removing either component (the smoothness regularizer or the proximal point method) of SMART results in a performance drop. For example, on MNLI, removing the smoothness regularizer leads to a 0.8% (85.6% vs. 84.8%) performance drop, while removing the Bregman proximal point optimization results in a performance drop of 0.2% (85.6% vs. 85.4%). This demonstrates that these two components complement each other. Interestingly, all three proposed models outperform the BERT baseline, demonstrating the effectiveness of each module. Moreover, we observe that the generalization performance benefits more from SMART on small datasets (i.e., RTE and MRPC) by preventing overfitting.

Error Analysis
To understand why SMART improves performance, we analyze it on the ambiguous samples of the MNLI dev set, which has 3 classes and 5 annotations per sample. Based on the degree of agreement between these annotations, we divide the samples into 4 categories: 1) 5/0/0: all five annotations are the same; 2) 4/1/0: four annotations are the same; 3) 3/2/0: three annotations are the same and the other two agree with each other; 4) 3/1/1: three annotations are the same and the other two differ. Figure 2 summarizes the results in terms of both accuracy and KL-divergence: for a given sample x_i, the KL-divergence measures the similarity between the model prediction {f_j(x_i)}_{j=1}^3 and the annotation distribution {p_j(x_i)}_{j=1}^3. We observe that SMART RoBERTa outperforms RoBERTa across all settings. Furthermore, on samples with a high degree of ambiguity (low degree of agreement), SMART RoBERTa obtains an even larger improvement, showing its robustness to ambiguity.
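The agreement categories above map directly to empirical annotation distributions, against which a model's predicted distribution is compared via KL-divergence. A small self-contained sketch (the class indices and the example prediction below are made up for illustration):

```python
import math

def annotation_distribution(labels, num_classes=3):
    """Empirical class distribution from the 5 annotator labels,
    e.g., a 3/2/0 sample such as [0, 0, 0, 1, 1] -> [0.6, 0.4, 0.0]."""
    counts = [labels.count(c) for c in range(num_classes)]
    return [c / len(labels) for c in counts]

def kl(p, q, eps=1e-12):
    """Smoothed D_KL(P||Q) for discrete distributions."""
    return sum(pk * math.log((pk + eps) / (qk + eps)) for pk, qk in zip(p, q))
```

A confident prediction that matches a unanimous (5/0/0) sample yields a small KL value, while the same prediction on an ambiguous (3/2/0) sample is penalized, which is what Figure 2 quantifies.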

SMART with Multi-task Learning
It has been shown that multi-task learning (MTL; Caruana, 1997; Liu et al., 2015, 2019b) has a regularization effect by alleviating overfitting to a specific task. One question is whether MTL helps SMART as well. In this section, we answer this question. Following Liu et al. (2019b), we first "pre-trained" shared embeddings using MTL with SMART, denoted as MT-DNN-SMART, and then adapted the training data of each task on top of the shared embeddings. We also include a baseline that fine-tuned each task on the publicly released MT-DNN checkpoint, which is denoted as MT-DNN-SMART v0.

Model | MNLI (Acc) | RTE (Acc) | QNLI (Acc) | SST (Acc) | MRPC (F1)

We observe that both MT-DNN and SMART consistently outperform the BERT model on all five GLUE tasks. Furthermore, SMART outperforms MT-DNN on MNLI, QNLI, and MRPC, while it obtains worse results on RTE and SST, showing that MT-DNN is a strong counterpart to SMART. By combining these two models, MT-DNN-SMART v0 enjoys the advantages of both and thus improves the final results. For example, it achieves 85.7% (+0.1%) on MNLI and 80.2% (+1.1%) on RTE compared with the best results of MT-DNN and SMART, demonstrating that these two techniques are orthogonal. Lastly, we also trained SMART jointly and then fine-tuned on each task as in Liu et al. (2019b). We observe that MT-DNN-SMART outperforms MT-DNN-SMART v0 and MT-DNN across all 5 tasks (except MT-DNN on SST), showing that SMART improves the generalization of MTL.

Domain Adaptation
In this section, we evaluate our model in the domain adaptation setting. Following Liu et al. (2019b), we start with the default training/dev/test sets of SNLI and SciTail. Then, we randomly sample 0.1%, 1%, 10% and 100% of the training data to train a model.
The results are reported in Table 5. We observe that both MT-DNN and MT-DNN-SMART significantly outperform the BERT baseline. Compared with MT-DNN, MT-DNN-SMART also achieves some improvement, indicating the robustness of SMART. Furthermore, MT-DNN-SMART outperforms the current state of the art on the SNLI/SciTail test sets.

Results on SNLI and SciTail
In Table 7, we compare our methods, using all in-domain training data, against several state-of-the-art models. We observe that SMART obtains the same improvement on SNLI in the BERT setting. Combining SMART with MT-DNN achieves a significant improvement, e.g., our BASE model even

Footnotes: Due to the limitation of computational resources, we only trained jointly using MTL on MNLI, RTE, QNLI, SST and MRPC, while MT-DNN was trained on all GLUE tasks except CoLA. The MT-DNN checkpoint is from https://github.com/namisan/mt-dnn; note that we did not use the complicated answer module, e.g., SAN (Liu et al., 2018).

Robustness
One important property of a machine learning model is its robustness to adversarial attacks. We test our model on the Adversarial Natural Language Inference (ANLI) dataset (Nie et al., 2019). We evaluate the performance of SMART on each subset (i.e., R1, R2, R3) of the ANLI dev and test sets. The results are presented in Table 6.

Conclusion
We propose a robust and efficient computational framework, SMART, for fine-tuning large-scale pre-trained natural language models in a principled manner. The framework effectively alleviates the overfitting and aggressive updating issues in the fine-tuning stage. SMART includes two important ingredients: 1) smoothness-inducing adversarial regularization; 2) Bregman proximal point optimization. Our empirical results suggest that SMART improves performance on many NLP benchmarks (e.g., GLUE, SNLI, SciTail, ANLI) with state-of-the-art pre-trained models (e.g., BERT, MT-DNN, RoBERTa). We also demonstrate that the proposed framework is applicable to domain adaptation and results in a significant performance improvement. Our proposed fine-tuning framework can be generalized to solve other transfer learning problems. We will explore this direction as future work.
• SciTail. The hypotheses in SciTail are created from science questions, while the corresponding answer candidates and premises come from relevant web sentences retrieved from a large corpus. As a result, these sentences are linguistically challenging and the lexical similarity of premise and hypothesis is often high, thus making SciTail particularly difficult. The dataset is used only for domain adaptation in this study.
• ANLI. The Adversarial Natural Language Inference (ANLI, Nie et al. (2019)) benchmark is a new large-scale NLI dataset, collected via an iterative, adversarial human-and-model-in-the-loop procedure. In particular, the data is selected to be difficult for state-of-the-art models, including BERT and RoBERTa.

Hyperparameters
As for the sensitivity of hyper-parameters, the performance of our method is in general not very sensitive to their choice, as detailed below.
• The algorithm is not sensitive to σ; any σ ≤ ε works well.
• p = ∞ makes the size of the perturbation constraint the same regardless of the number of dimensions. For p = 2, the adversarial perturbation is sensitive to the number of dimensions (a higher dimension usually requires a larger perturbation), especially for sentences of different lengths. As a result, p = ∞ requires less tuning effort. For other values of p, the associated projections are computationally inefficient.
• We observed a minor improvement by using a larger S or a larger T_x, which comes at an increased computational cost. When S = T_x = 1, SMART requires 3 more forward passes and 3 more backward passes per iteration compared with direct fine-tuning. In practice, it takes about 3 times the original training time, and it approximately doubles the GPU memory usage.
• We set β = 0.99 for the first 10% of the updates (t ≤ 0.1T) and β = 0.999 for the rest of the updates (t > 0.1T), following Tarvainen and Valpola (2017); this works well in practice.
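To make the p = ∞ versus p = 2 contrast above concrete, the two projection operators can be sketched as follows (generic numpy, not tied to the released code). The ℓ_2 projection rescales the whole perturbation vector, so the admissible size per coordinate shrinks as the dimension grows, whereas the ℓ_∞ projection clips each coordinate independently:

```python
import numpy as np

def project_linf(x_tilde, x, eps):
    """Project x_tilde onto the l_inf ball of radius eps around x:
    each coordinate of the perturbation is clipped independently."""
    return x + np.clip(x_tilde - x, -eps, eps)

def project_l2(x_tilde, x, eps):
    """Project x_tilde onto the l_2 ball of radius eps around x:
    the whole perturbation vector is rescaled if it is too long."""
    delta = x_tilde - x
    norm = np.linalg.norm(delta)
    return x + delta * min(1.0, eps / (norm + 1e-12))
```

For a d-dimensional perturbation at the ℓ_2 boundary, each coordinate is on average of size ε/√d, which is why ε for p = 2 must scale with sentence length while ε for p = ∞ does not.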