On the Transformer Growth for Progressive BERT Training

As the excessive cost of pre-training raises the need to improve efficiency, considerable effort has been devoted to training BERT progressively: starting from an inferior but low-cost model and gradually increasing its computational complexity. Our objective is to advance the understanding of such Transformer growth and to discover principles that guide progressive training. First, we find that, similar to network architecture selection, Transformer growth also favors compound scaling. Specifically, while existing methods conduct network growth in only a single dimension, we observe that it is beneficial to use compound growth operators that balance multiple dimensions (e.g., the depth, width, and input length of the model). Moreover, we explore alternative growth operators in each dimension via controlled comparisons to give practical guidance for operator selection. In light of our analyses, the proposed CompoundGrow method speeds up BERT pre-training by 73.6% and 82.2% for the base and large models respectively, while achieving comparable performance.


Introduction
Thanks to the rapid increase of computing power, large-scale pre-training has been breaking the glass ceiling for natural language processing tasks (Liu et al., 2018; Peters et al., 2018; Devlin et al., 2019; Brown et al., 2020). However, with great power come great challenges: the excessive computational consumption required significantly impedes efficient iteration in both research exploration and industrial application. To lower the training cost, many attempts have been made to conduct progressive training, which starts by training an inferior but low-cost model and gradually increases its resource consumption (Gong et al., 2019; Devlin et al., 2019). As elaborated in Section 5, two components are typically needed to design such progressive training algorithms: the growth scheduler and the growth operator (Dong et al., 2020). The former controls when to conduct network growth, and the latter controls how to perform it. Here, our objective is to better understand growth operators, with a focus on Transformer models (Vaswani et al., 2017; Liu et al., 2020b), and specifically to help design better progressive algorithms for BERT pre-training (Devlin et al., 2019). In particular, we recognize the importance of using compound growth operators, which balance different model dimensions (e.g., the number of layers, the hidden size, and the input sequence length).
Previous efforts on Transformer growth mainly focus on a single model dimension: either the length (Devlin et al., 2019) or the depth (Gong et al., 2019). In this work, however, we find that the compound effect plays a vital role in growing a model to different capacities, just as it does in deciding network architectures under specific budgets (Tan and Le, 2019). Here, we show that growing a Transformer along both dimensions leads to better performance at less training cost, which verifies our intuition and shows the potential of compound growth operators in progressive BERT training.
Further, we explore potential choices of growth operators in each dimension. We conduct controlled experiments and comprehensive analyses to compare the available options; these analyses further guide the design of effective compound growth operators. Specifically, we observe that, in the length dimension, embedding pooling is more effective than directly truncating sentences, and in the width dimension, parameter sharing outperforms low-rank approximation.
Guided by our analyses, we propose CompoundGrow, which combines the most effective growth operators in each dimension.

Progressive Compound Growth
Progressive Training. Algorithm 1 presents a generic setup for progressive training. In each training stage $t$, the corresponding growth operator $g_t$ grows the model $f$; then, $f$ is updated by the optimizer $\mathrm{opt}$ before entering the next training stage. Correspondingly, our goal is to maximize the final model performance after all training stages, which can be formulated as minimizing the empirical loss $\mathcal{L}$ over dataset $\mathcal{D}$:

$$\min_{g_1, \dots, g_T} \mathcal{L}_{\mathcal{D}}(f_T), \quad \text{where } f_t = \mathrm{opt}\big(g_t(f_{t-1}), \mathcal{D}\big). \tag{1}$$

Compound Effect. Our study is inspired by compound scaling (Tan and Le, 2019), which aims to find the optimal network architecture by maximizing the model accuracy for a given resource budget:

$$\max_{d, w, r} \; \mathrm{Accuracy}\big(\mathcal{N}(d, w, r)\big) \quad \text{s.t.} \;\; \mathrm{FLOPS}\big(\mathcal{N}(d, w, r)\big) \le \text{budget},$$

where $\mathcal{N}(d, w, r)$ is a CNN and $d$, $w$, $r$ are coefficients that scale its depth, width, and resolution. In this work, we find that such a compound effect also plays a vital role in progressive BERT training. Intuitively, growing the network along more than one dimension creates greater potential to obtain better performance at less resource cost: restricting the growth operator from handling all dimensions leads to inferior performance, since the optimal value of the objective function (Equation 1) is bounded by the feasible set of the growth operator.
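Since Algorithm 1 itself is not reproduced here, the following minimal Python sketch captures the same generic loop. All names (`grow_ops`, `steps_per_stage`, `sample_batch`, `opt_step`) are illustrative placeholders, not identifiers from the paper's released code.

```python
# A minimal sketch of the generic progressive-training loop (Algorithm 1).
# The callables below are placeholders standing in for the growth operators
# g_t, the dataset D, and one optimizer update of `opt`.

def progressive_train(model, grow_ops, steps_per_stage, sample_batch, opt_step):
    """Train `model` in T stages; stage t first applies growth operator g_t.

    grow_ops:        list of T growth operators, each mapping model -> model
    steps_per_stage: list of T step counts (e.g., [700_000, 300_000])
    sample_batch:    callable returning a training batch drawn from D
    opt_step:        callable (model, batch) -> model, one optimizer update
    """
    for g_t, num_steps in zip(grow_ops, steps_per_stage):
        model = g_t(model)              # grow, e.g., widen FFN or deepen stack
        for _ in range(num_steps):      # then update with the optimizer `opt`
            model = opt_step(model, sample_batch())
    return model
```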
Empirical Verification. For empirical verification, we compare existing single-dimension growth operators in model depth and length with a corresponding compound operator that balances both dimensions. All three growth operators are configured so that they produce the same model after growth, and their low-cost models have empirically comparable training costs. For training, we first train the low-cost model for 100/300/500/700K steps, and then grow it into a standard BERT-base model for another 300K steps of training. For models trained with different step counts and growth operators, we compare performance after fine-tuning on MNLI, SQuAD v1.1, and SQuAD v2.0 respectively. As Figure 1 shows, across different settings (columns) and metrics (rows), the compound operator consistently outperforms, or at least matches, the single-dimension operators. This observation meets our intuition: to achieve the same speedup, the compound method can distribute the reduction in training cost across different dimensions and thereby achieve better performance.

Explore Possible Growth Operators
After verifying the importance of compound growth, we conduct further analyses to provide guidance for growth operator design.

Length Dimension
Data Truncation first limits the maximum input length by truncating training sentences to a shorter length, and then trains the model on full-length data. Note that shorter input sequences usually come with fewer masked tokens to predict per sentence. For instance, Devlin et al. (2019) first use sentences of at most 128 tokens (with 20 masked tokens) before training on data of 512 tokens (with 76 masked tokens). The major issue with this data truncation operator is the incomplete update of position embeddings: the model must learn embeddings for the extra positions from scratch in the last stage.

Embedding Pooling. Inspired by the idea of multigrid training in the vision domain (Wu et al., 2020), we instead train the model on "low-resolution text" by applying embedding pooling over unmasked tokens. Compared with data truncation, this method leaves the training data intact and updates all position embeddings. Specifically, since the output length of a self-attention module is decided by the length of its query vectors, we conduct pooling only on the query vectors of the first self-attention layer and keep the key/value vectors intact, as sketched below.
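As an illustration, here is a single-head NumPy sketch of mean pooling applied only to the query vectors of the first attention layer. The paper's implementation is in TensorFlow; the shapes, stride-`k` mean pooling, and the absence of masking here are simplifying assumptions.

```python
# Sketch of length growth via query pooling (single head, no masking).
# Only the queries are mean-pooled with stride k; keys/values keep the
# full length, so all position embeddings still receive gradients in a
# real implementation. The attention output inherits the shorter query
# length, shrinking the cost of every subsequent layer.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pooled_attention(x, Wq, Wk, Wv, k=2):
    """x: [L, D] token embeddings; returns [L/k, D] (L divisible by k)."""
    L, D = x.shape
    assert L % k == 0
    q = x @ Wq                                  # [L, D]
    q = q.reshape(L // k, k, D).mean(axis=1)    # mean-pool queries: [L/k, D]
    key, val = x @ Wk, x @ Wv                   # keys/values stay full length
    attn = softmax(q @ key.T / np.sqrt(D))      # [L/k, L]
    return attn @ val                           # [L/k, D]

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))
Wq, Wk, Wv = (rng.normal(size=(4, 4)) for _ in range(3))
print(pooled_attention(x, Wq, Wk, Wv, k=2).shape)   # (4, 4)
```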
As shown in the first group of Table 1, data truncation (sequence length = 256) and mean pooling (k = 2) have similar performance on MNLI and SQuAD v1.1, while mean pooling outperforms data truncation on SQuAD v2.0.

Width Dimension
On the width dimension, we focus our study on the feed-forward network (FFN) module. Similar to gradually increasing the network depth, one can also gradually increase the network width during Transformer growth. Specifically, the FFN module can be written as $f(xW_1)W_2$, where $f(\cdot)$ is the activation function, $W_1 \in \mathbb{R}^{D \times H}$ and $W_2 \in \mathbb{R}^{H \times D}$ are parameters, and $D$ and $H$ are the embedding size and the hidden size respectively.

Matrix Factorization. A straightforward method is to approximate each original weight matrix $W_i \in \mathbb{R}^{m \times n}$ by the product of two small matrices $W_{i1} \in \mathbb{R}^{m \times h}$ and $W_{i2} \in \mathbb{R}^{h \times n}$ in the early training stage. In the late stage of training, we recover $W_i$ as $W_{i1} W_{i2}$ and unleash the full capacity.

Parameter Sharing. Instead of decomposing the original weight matrices with a low-rank approximation, we employ parameter sharing by splitting each matrix into multiple blocks and sharing parameters across the blocks. Formally, for input $x$,

$$f\big(x\,[\widetilde{W}_1, \dots, \widetilde{W}_1]\big)\,\big[\widetilde{W}_2^\top, \dots, \widetilde{W}_2^\top\big]^\top = k\, f\big(x \widetilde{W}_1\big)\, \widetilde{W}_2. \tag{2}$$

Specifically, in the early training stage, we replace $W_1$ and $W_2$ with smaller matrices $\widetilde{W}_1 \in \mathbb{R}^{D \times H/k}$ and $\widetilde{W}_2 \in \mathbb{R}^{H/k \times D}$. Then, at the growth step, we duplicate (share) $\widetilde{W}_1$ $k$ times along the dimension of size $H/k$ to form the new $W_1$; $W_2$ is generated similarly. Like matrix factorization, this setting preserves the output after growth. Random noise is added to $W_1$ and $W_2$ by the dropout layers in the FFN, so that the shared small matrices produce different outputs and gradients in later training steps (Chen et al., 2015).
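The NumPy sketch below illustrates both width operators and checks numerically that growth preserves the network's function. One detail is an assumption on our part: scaling the duplicated $\widetilde{W}_2$ blocks by $1/k$ is one consistent way to realize the output preservation noted above, since the paper does not spell out the scaling.

```python
# Sketch of width growth for the FFN f(x @ W1) @ W2. The 1/k scaling of
# the duplicated W2 blocks is an assumed choice that makes growth exactly
# function-preserving; it is not stated explicitly in the paper.
import numpy as np

relu = lambda z: np.maximum(z, 0.0)
D, H, k, h = 4, 8, 4, 2
rng = np.random.default_rng(0)
x = rng.normal(size=(3, D))

# --- Parameter sharing: small blocks W1s [D, H/k], W2s [H/k, D] ---
W1s = rng.normal(size=(D, H // k))
W2s = rng.normal(size=(H // k, D))
small_out = relu(x @ W1s) @ W2s

W1 = np.tile(W1s, (1, k))            # duplicate columns -> [D, H]
W2 = np.tile(W2s / k, (k, 1))        # duplicate rows, scale 1/k -> [H, D]
grown_out = relu(x @ W1) @ W2
print(np.allclose(small_out, grown_out))    # True: function preserved

# --- Matrix factorization: W_i ~ A_i @ B_i with rank h < min(D, H) ---
A1, B1 = rng.normal(size=(D, h)), rng.normal(size=(h, H))
A2, B2 = rng.normal(size=(H, h)), rng.normal(size=(h, D))
low_rank_out = relu(x @ A1 @ B1) @ A2 @ B2
full_out = relu(x @ (A1 @ B1)) @ (A2 @ B2)  # growth: merge each product
print(np.allclose(low_rank_out, full_out))  # True: merging is exact
```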
As the second group of Table 1 shows, parameter sharing shows a significant advantage over matrix factorization under comparable budgets (k = 4 for parameter sharing and h = 0.2D for matrix factorization).

Depth Dimension
Transformer growth in the depth dimension has been thoroughly discussed in the literature (Gong et al., 2019). Our observations in this dimension are consistent with their conclusions. In our experiments, we also compare compound growth with the standard progressive stacking method.
Discussion. From an implementation perspective, compound growth introduces little additional engineering effort compared with progressive stacking. Specifically, the growth step of progressive stacking essentially copies the parameters of the small model into the corresponding layers of the full model. Growth in the width dimension is a similar parameter-copying process for the fully connected layers, while growth in the length dimension simply removes the embedding pooling layer without changing any model parameters.
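For instance, the depth-growth step of stacking reduces to pure parameter copying, as in this toy sketch (real BERT layers hold weight tensors rather than a name field, and `stack_layers` is a hypothetical helper, not code from Gong et al. (2019)):

```python
# Toy sketch of progressive stacking's depth growth: an L-layer model
# initializes a 2L-layer model, where layer i of the new model receives
# the weights of layer (i mod L) of the old one.
def stack_layers(layers, factor=2):
    """Duplicate the per-layer parameter dicts `factor` times."""
    return [dict(layer) for _ in range(factor) for layer in layers]

shallow = [{"name": f"layer_{i}"} for i in range(6)]   # toy 6-layer model
deep = stack_layers(shallow)                            # 12 layers
print(len(deep), deep[6]["name"])                       # 12 layer_0
```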

Experiment
Experiment Setups. We train the original BERT models following the same settings as Devlin et al. (2019), with a batch size of 256 and 512-token data. All compared models eventually grow to the original model, and the total number of training steps is kept at 1M. We evaluate the final models on the GLUE benchmark (Wang et al., 2018), which includes 9 subtasks, and on the two versions of SQuAD (Rajpurkar et al., 2018).

Table 3 shows the test performance on the GLUE benchmark (metrics as described in the original paper (Wang et al., 2018), higher is better; "Compound" stands for the proposed method, with speedup shown in Table 2). Both compared methods achieve at least the same performance as the original BERT model. While CompoundGrow saves more training time, it achieves the same performance as stacking on the large model. On the base model, stacking is better in terms of average GLUE score, mainly due to its advantage on the CoLA dataset. Such an unusual gap on CoLA might be caused by its relatively small volume and the corresponding random variance (Dodge et al., 2020). On the larger and more robust MNLI dataset, the compared methods achieve almost the same score.

Related Work
Progressive training was originally proposed to improve training stability: it starts from an efficient, small model and gradually increases the model capacity (Simonyan and Zisserman, 2014). Recent studies leverage this paradigm to accelerate model training. For example, the multi-level residual network (Chang et al., 2018) explores the possibility of augmenting network depth from a dynamical-systems point of view, transforming each layer into two subsequent layers. AutoGrow (Wen et al., 2020) attempts to automate the discovery of the proper depth to achieve near-optimal performance on different datasets. LipGrow (Dong et al., 2020) proposes a learning algorithm with an automatic growing scheduler for convolutional nets. At the same time, many studies have examined model growth operators. Network Morphism (Wei et al., 2016, 2017) manages to grow a layer into multiple layers while keeping the represented function intact. Net2Net (Chen et al., 2015) successfully transfers knowledge to a wider network with function-preserving initialization. Similar ideas appear in many network architectures, including the progressive growing of GANs (Karras et al., 2017) and Adaptive Computation Time (Graves, 2016; Jernite et al., 2016). As large-scale pre-training keeps advancing the state of the art (Devlin et al., 2019; Radford, 2018), its overwhelming computational consumption becomes the major burden to further developing more powerful models (Brown et al., 2020). Preliminary applications of progressive training have been made to Transformer pre-training: Devlin et al. (2019) design a two-stage training regime with a reduced sequence length for the first 90% of updates, and Gong et al. (2019) stack the trained weights of a shallow model to initialize a deeper model, growing the BERT-base model in the depth dimension and achieving 25% shorter training time.

Conclusion
In this work, we empirically verify the importance of balancing different dimensions in Transformer growth and propose compound growth operators, which integrate operators for more than one dimension. Moreover, we conduct controlled experiments on various design choices of growth operators, providing practical guidance for algorithm design. Our final method speeds up the training of the BERT-base and BERT-large models by 73.6% and 82.2% in wall time respectively, while achieving comparable performance.

A Experiment Details
All our models are implemented based on the TensorFlow implementation of BERT and trained on TPU v3 with 64 chips. We keep the original WordPieceTokenizer and the original position embeddings (instead of the relative position encoding used by Dai et al. (2020)). Following Devlin et al. (2019), we use the English Wikipedia corpus and the BookCorpus for pre-training. For each fine-tuning task, we search hyperparameters over the following candidates: batch size = 16/32/64, learning rate = 3e-4/1e-4/5e-5/3e-5.
Optimization. The original BERT models use the AdamW optimizer (Loshchilov and Hutter, 2019) with the learning rate decaying from 1e-4 to 0 and 10K steps of warmup (Liu et al., 2020a). At the start of each progressive training stage, the learning rate is reset to 1e-4 and then keeps decaying under the original schedule.

Baseline Implementation. We apply the compared stacking method (Gong et al., 2019) to the official BERT model with the same training settings, learning rate schedule, and hardware as our method, and achieve better performance than the numbers reported in the original paper. To further unleash the potential of the compared method, we adjust its original training schedule to 300K steps with 1/4 of the layers, 400K steps with 1/2 of the layers, and 300K steps with the full model. The new training schedule is much faster than the reported one (speedup improved from the reported +25% to +64.9%) and still gives better final performance than the original paper. This is the fastest stacking model we could obtain without a performance drop.
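For reference, a sketch of this stage-wise schedule is given below; the linear-decay form and the stage boundaries in `stage_starts` are illustrative assumptions, as the paper only specifies the peak rate, the warmup length, and the reset-then-decay behavior.

```python
# Sketch of the stage-wise learning-rate schedule: 10K steps of warmup at
# the start of training, linear decay toward 0 at `total_steps`, and a
# reset to the peak rate at the start of each progressive training stage.
def learning_rate(step, total_steps=1_000_000, peak=1e-4,
                  warmup=10_000, stage_starts=(0, 700_000)):
    if step < warmup:                       # initial warmup only
        return peak * step / warmup
    # find the start of the current stage and restart the decay from `peak`
    start = max(s for s in stage_starts if s <= step)
    return peak * (1.0 - (step - start) / (total_steps - start))
```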

B Further Comparison Between CompoundGrow and Stacking

To gain a deeper understanding of the compared methods, we study their speed-performance trade-off by adjusting the training schedule. Specifically, each time we reduce the low-cost training steps of both models by 200K and compare their validation F1 scores on SQuAD v2.0; the three data points in each curve of Figure 2 are thus generated with 300K/500K/700K low-cost training steps, respectively. As Figure 2 shows, CompoundGrow has a clear performance advantage when given comparable training budgets, which further verifies our hypothesis.