Hyperparameter-free Continuous Learning for Domain Classification in Natural Language Understanding

Domain classification is a fundamental task in natural language understanding (NLU) that often requires fast accommodation of newly emerging domains. This constraint makes it impractical to retrain on all previous domains, even when they remain accessible to the new model. Most existing continual learning approaches suffer from low accuracy and performance fluctuation, especially when the distributions of old and new data differ significantly. In fact, the key real-world problem is not the absence of old data, but the inefficiency of retraining the model with the whole old dataset. Is it possible to utilize some old data to yield high accuracy and maintain stable performance, without introducing extra hyperparameters? In this paper, we propose a hyperparameter-free continual learning model for text data that stably produces high performance under various environments. Specifically, we utilize Fisher information to select exemplars that can "record" key information of the original model. We also propose a novel scheme called dynamical weight consolidation that enables hyperparameter-free learning during retraining. Extensive experiments demonstrate that the baselines exhibit fluctuating performance that renders them unusable in practice. In contrast, our proposed model significantly and consistently outperforms the best state-of-the-art method by up to 20% in average accuracy, and each of its components contributes effectively to the overall performance.


Introduction
Catastrophic forgetting is the well-known Achilles' heel of deep neural networks: the knowledge learned from previous tasks is forgotten when the networks are retrained to adapt to new tasks. Although this phenomenon was noticed as early as the birth of neural networks (French, 1999; McCloskey and Cohen, 1989), it did not attract much attention until deep neural networks achieved impressive performance gains in various applications (LeCun et al., 2015; Krizhevsky et al., 2012). Domain classification is the task of mapping spoken utterances to natural language understanding domains. It is widely used in intelligent personal digital assistant (IPDA) devices, such as Amazon Alexa, Google Assistant, and Microsoft Cortana. As many IPDAs now allow third-party developers to build and integrate new domains (Kumar et al., 2017), these devices are in pressing need of continual learning technologies that can achieve high performance stably (Kim et al., 2018a,b). However, most traditional IPDAs only work with well-separated domains built by specialists (Tur and De Mori, 2011) or are custom-designed for specific datasets (Li et al., 2019).
There is still a lack of continual learning methods capable of general domain classification. Most previous approaches to continual learning focus on the scenario in which the new model must be retrained without any access to old data (Li and Hoiem, 2016; Kirkpatrick et al., 2017; Lopez-Paz and Ranzato, 2017). However, these methods often involve additional parameters that require extensive expert tuning. And when the data distributions of new tasks differ markedly from the original task (e.g., class-incremental learning), these approaches can hardly maintain good accuracy on both tasks and may suffer from performance fluctuations. On the other hand, old data are often available in practical cases; the main concerns arise from the tremendous cost in memory and computation if the model is updated with the entire previous dataset. Moreover, most continual learning approaches target image data, and little attention has been paid to text data. Is it possible to develop a model capable of continual learning that satisfies the following qualities? 1) High accuracy with limited old data. Compared to the extreme cases of no access or full access to old data, it is more practical to place models under a setting with a limited amount of old data available (e.g., no more than 10% of the original data). In this case, models can achieve good performance without much additional cost in physical resources and can be conveniently renewed with periodic system updates. 2) High stability with zero extra parameters. Many continual learning models perform well only under restricted experimental settings, such as specific datasets or carefully chosen parameters. However, practical datasets are usually noisy and imbalanced, and inexperienced users cannot set suitable parameters in real-world applications.
Therefore, it is desirable to develop a hyperparameter-free model that can work stably under various experimental environments.
To achieve these goals, we propose a Continual learning model based on weight Consolidation and Fisher Information sampling (CCFI), with application to domain classification. The main challenge is how to "remember" information from original tasks: not only the representative features of the data, but also the learned parameters of the model itself. This is a non-trivial contribution, since "uncontrollable changes" happen to neural network parameters whenever the features change. To avoid such changes, the previous work iCaRL even discards the deep neural network as its final classifier and turns to the k-nearest neighbors algorithm for actual prediction (Rebuffi et al., 2017). Our work demonstrates that these changes are "controllable" with exemplars selected by Fisher information and parameters learned by Dynamical Weight Consolidation. Our contributions can be summarized as follows.
• Fisher information sampling. Good exemplars are required to "remember" key information of old tasks. Unlike previous work that uses simple mean vectors to remember the information of old data, exemplars selected by Fisher information record both the features of the data and the information of the original neural network.
• Dynamical weight consolidation. The need for a hyperparameter is an inherent problem of regularization-based continual learning. Previous works search for this hyperparameter by evaluating the whole task sequence, which is supposed to be unknown. This work provides a simple auto-adjusted weighting mechanism, making the regularization strategy viable for practical application. Also, traditional weight consolidation methods such as EWC (Kirkpatrick et al., 2017) are designed for sequential tasks with similar distributions. We extend it to the incremental learning scenario and add more regularity to achieve better stability.
• Extensive experimental validation. Most existing continual learning methods are designed for image data, while the few previous attempts on text data are often limited to specific usage scenarios and rely on fine-tuned parameters. Our proposed CCFI model is a general framework that can be efficiently applied to various environments with minimal effort in parameter tuning. Our extensive experimental results demonstrate that CCFI outperforms all state-of-the-art methods, and provide insights into the working mechanisms of the methods.

Related Work
Although most existing approaches are not directly applicable to our problem, several branches of research relate to this work: exemplar selection, regularization of weights, and feature extraction or fine-tuning based on pre-trained models. Our problem is closest to the setting of exemplar selection methods (Rebuffi et al., 2017; Li et al., 2019). These approaches store examples from original tasks and combine them with the new domain data for retraining. iCaRL (Rebuffi et al., 2017) discards the neural network classifier to prevent catastrophic forgetting, and turns to the traditional k-nearest neighbor classifier.
To avoid large changes to important parameters, regularization models (Kirkpatrick et al., 2017; Li and Hoiem, 2016; Zenke et al., 2017) add constraints to the loss function. They usually introduce extra parameters requiring careful initialization, and it has been shown that their performance drops significantly if the new tasks are drawn from different distributions (Rebuffi et al., 2017). In contrast, our proposed CCFI is a hyperparameter-free model that produces stable performance under various experimental environments.
Feature extraction methods utilize pre-trained neural networks to compute features of the input data (Sharif Razavian et al., 2014). They make few modifications to the original network but often result in limited capacity for learning new tasks. Fine-tuning models (Girshick et al., 2014) can modify all the parameters to achieve better performance on new tasks. Although starting with a small learning rate can indirectly preserve the knowledge learned from the original task, fine-tuning will eventually favor the new tasks. Adapter tuning (Houlsby et al., 2019) can be viewed as a hybrid of fine-tuning and feature extraction. Unlike our model, which makes no changes to the backbone, adapter tuning increases the original model size and introduces more parameters by inserting adapter modules into each layer.

Our CCFI Model
Given training data D with ground-truth labels Y, the classification task in deep neural networks is equivalent to finding the best parameter set Θ that maximizes the probability of the data p(D|Θ). Namely, the classifier makes predictions Ŷ that best reproduce the ground-truth labels Y. Under the continual learning setting, new data D_n of additional classes are added to the original data stream D_o in an incremental form. Our goal is to update the old parameters Θ_o (trained on the original data stream D_o) to new parameters Θ_n that work well on both the new data D_n and the old data D_o.
In this paper, the initial model is trained on the original dataset D_o and outputs the trained model Θ_o. During this training process, Fisher Information Sampling is conducted to select the most representative examples, which help to "remember" the parameters of the initially trained model. In the retraining process, the renewed model is learned via Dynamical Weight Consolidation and evaluated on the training set consisting of the new classes and the old exemplars.

Fisher Information Sampling
The critical question in exemplar set selection is: what are good examples that can "maintain" the performance of old tasks? The state-of-the-art method iCaRL (Rebuffi et al., 2017) selects examples close to the mean feature representation, and CoNDA (Li et al., 2019) follows the same scheme for domain adaptation on text data. To exploit the advantage of the mean feature and avoid catastrophic forgetting, iCaRL chooses the k-nearest neighbors algorithm as the classifier rather than a deep neural network, although the latter is proven to be a much better performer. Is there an exemplar selection method that can use powerful deep learning models as the classifier and, at the same time, "remember" the key information of old tasks?
Fisher information measures how much information an observable random variable carries about a parameter. For a parameter θ in the network Θ trained on data D, its Fisher information is defined as the variance of the gradient of the log-likelihood:

I(θ) = E[ (∂ log p(D|θ) / ∂θ)² ]   (1)

Fisher information can be calculated directly if the exact form of log p(D|θ) is known. However, the likelihood is often intractable. Instead, the empirical Fisher information Î(θ) is computed through data d_i ∈ D drawn from p(D|θ):

Î(θ) = (1/|D|) Σ_{d_i ∈ D} (∂ log p(d_i|θ) / ∂θ)²   (2)

From another point of view, when log p(D|θ) is twice differentiable with respect to θ, the Fisher information can be written in terms of the second derivative of the log-likelihood:

I(θ) = −E[ ∂² log p(D|θ) / ∂θ² ]   (3)

According to Equation 3, three equivalent interpretations can be given to a high value of the Fisher information I(θ):
• a sharp peak of the likelihood with respect to θ,
• θ can be easily inferred from data D,
• data D can provide sufficient information about the correct value of parameter θ.

Jointly considering the computational form of the empirical Fisher information (Equation 2) and the physical meaning of Fisher information revealed by the second-derivative form (Equation 3), we can measure how much information each data point d_i carries about the estimation of parameter θ, which we call the empirical Fisher information difference:

ΔÎ(θ; d_i) = (∂ log p(d_i|θ) / ∂θ)²   (4)

Instead of the simple mean feature vectors used in previous work (Rebuffi et al., 2017; Li et al., 2019), we use the empirical Fisher information difference to select the exemplar set. Specifically, the CCFI model uses BERT (Devlin et al., 2019) for text classification. The base BERT model is treated as a feature extractor φ: X → H, which takes input token sequences X and outputs the hidden representations H. To predict the true label Y, a softmax classifier is added on top of BERT:

Ŷ = softmax(H W)   (5)

where W is the task-specific parameter matrix for classification. The trained parameters Θ can therefore be split into the fixed feature extraction part and the variable weight parameter W.
In the continual learning setting, we denote by W_k ∈ R^{h×k} the most up-to-date weight matrix, where k is the number of classes observed so far, and h is the size of the final hidden state H.
Recall that, for the classification task, the best parameters that maximize the probability of the data p(D|Θ) are also the parameters that make the predictions Ŷ closest to the ground-truth labels Y. Therefore, we can substitute Equation 5 into Equation 4 to obtain the practical computation of the empirical Fisher information difference for data d_i on parameter θ. Since the parameters of the feature extractor are fixed, only the empirical Fisher information differences of the parameters in the weight matrix, w_j ∈ W, are calculated:

ΔÎ(w_j; d_i) = (∂ log p(d_i|w_j) / ∂w_j)²   (6)

where the likelihood p(d_i|θ) is calculated via the log-probability of the correct label y_i of input x_i. The total empirical Fisher information difference carried by data d_i is the sum over all w_j ∈ W:

ΔÎ(d_i) = Σ_{w_j ∈ W} ΔÎ(w_j; d_i)   (7)

Algorithm 1 describes the exemplar selection process. Within each class k, the samples top-ranked by empirical Fisher information difference are selected as exemplars until the target sampling rate (e.g., 1%) is met.
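As a concrete illustration, the empirical Fisher information difference and the per-class selection of Algorithm 1 can be sketched in NumPy for a frozen feature space. This is a hedged sketch rather than the released implementation: the function names and the analytic softmax log-likelihood gradient are our own, and the features H are assumed to come from the fixed extractor.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fisher_difference(h, y, W):
    """Total empirical Fisher information difference of one example.

    h : (d,) hidden representation from the frozen feature extractor
    y : int, true class label
    W : (d, k) classifier weight matrix
    Returns the sum over all weights w_j of the squared gradient of
    log p(y|h) with respect to w_j.
    """
    p = softmax(h @ W)                    # predicted class probabilities
    onehot = np.zeros_like(p)
    onehot[y] = 1.0
    grad = np.outer(h, onehot - p)        # analytic gradient of log p(y|h) w.r.t. W
    return float((grad ** 2).sum())       # sum over all classifier weights

def select_exemplars(H, Y, W, rate):
    """Per class, keep the examples top-ranked by Fisher difference
    until the target sampling rate is met (Algorithm 1 sketch)."""
    keep = []
    for c in np.unique(Y):
        idx = np.where(Y == c)[0]
        scores = [fisher_difference(H[i], int(Y[i]), W) for i in idx]
        n = max(1, int(round(rate * len(idx))))
        keep.extend(idx[np.argsort(scores)[::-1][:n]])
    return sorted(keep)
```

Note that a confidently classified example has a near-zero gradient and hence a low score, so the selection favors examples that still constrain the classifier weights.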

Dynamical Weight Consolidation
The main goal of the retraining process is to achieve good performance on both new and old tasks. EWC (Kirkpatrick et al., 2017) has proven to be a good performer that balances the performance of old and new tasks. However, rather than the incremental learning problem studied in this paper, EWC is designed for tasks with the same number of classes but different data distributions. Furthermore, EWC requires careful hyperparameter setting, which is unrealistic for inexperienced users. In this section, we introduce a scheme named Dynamical Weight Consolidation, which avoids the need for such hyperparameter setting. This scheme is also demonstrated to perform more stably than traditional EWC in the experimental section. Specifically, our loss function during retraining can be viewed as the sum of two parts: the loss ℓ_n calculated from the correct class indicators of the new data, and the loss ℓ_e that reproduces the performance of the old model:

L = ℓ_n + ℓ_e   (8a)

The loss ℓ_e can be further divided into two parts: the cross entropy of the exemplar set, and the consolidation loss caused by modifying parameters with high Fisher information:

ℓ_e = ℓ_CE(E) + λ Σ_j Î(w_j)(w_j − w_j^o)²   (8b)

In the traditional EWC model, the weight λ that balances the cross entropy and the consolidation loss is a fixed value. In our CCFI model, λ is updated dynamically based on the current values of the cross entropy and the consolidation loss, so that the two terms remain on a comparable scale.
Note that Î(w_j) is an element of the updated parameter information table T_n. The details can be found in Section 3.3.2.
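The pieces of this consolidated loss can be sketched as follows. Note the exact expression for the dynamic weight did not survive in our source; the ratio form below (cross entropy divided by consolidation loss) is our assumption, chosen only because it keeps the two terms on a comparable scale.

```python
import numpy as np

def consolidation_loss(W_new, W_old, fisher_table):
    """EWC-style penalty: sum_j I(w_j) * (w_j - w_j_old)^2."""
    return float((fisher_table * (W_new - W_old) ** 2).sum())

def dynamic_lambda(ce_loss, cons_loss, eps=1e-8):
    """Hypothetical auto-adjusted weight (assumed form): the ratio of
    the current cross-entropy loss to the current consolidation loss."""
    return ce_loss / (cons_loss + eps)

def retrain_loss(ce_new, ce_exemplar, W_new, W_old, fisher_table):
    """Overall retraining loss: new-data cross entropy plus exemplar
    cross entropy plus the dynamically weighted consolidation penalty."""
    cons = consolidation_loss(W_new, W_old, fisher_table)
    lam = dynamic_lambda(ce_exemplar, cons)
    return ce_new + ce_exemplar + lam * cons
```

With this choice, λ shrinks automatically as the consolidation penalty grows, so no hand-tuned weight is needed; a fixed λ, by contrast, must be searched per dataset.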

Overall Process
This part describes the overall process of the CCFI model. First we list the outputs of the initial training, then we introduce the detailed implementation of retraining.

Initial training
After training the model with the old data (k_o classes), the outputs of the initial task include: 1) the trained model Θ_o; 2) the exemplars E of the old data; and 3) the parameter information table T_o. Each element of T_o is the empirical Fisher information Î(w_j^o) of the old task, which is computed through Equation 2 during the initial training process.
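Under Equation 2, the parameter information table accumulates the squared log-likelihood gradients of each classifier weight over the initial training data. A minimal sketch, with our own function names and the analytic softmax gradient:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def parameter_info_table(H, Y, W):
    """Empirical Fisher information of each classifier weight,
    averaged over the data: T[j] = mean_i (d log p(y_i|h_i) / d w_j)^2.

    H : (n, d) hidden representations, Y : (n,) labels, W : (d, k) weights.
    """
    T = np.zeros_like(W)
    for h, y in zip(H, Y):
        p = softmax(h @ W)
        onehot = np.zeros_like(p)
        onehot[y] = 1.0
        T += np.outer(h, onehot - p) ** 2   # squared analytic gradient
    return T / len(Y)
```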

Retraining
The retraining process proceeds as follows: the old classifier is expanded with entries for the new classes, the model is retrained on the union of the exemplars E and the new data D_n under the consolidated loss of Equation 8, and the parameter information table is updated to T_n for future rounds.
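The algorithm listing for this step did not survive in our source. As a hedged reconstruction from the surrounding description, the bookkeeping might look as follows; the zero initialization of the new classifier columns and the ratio form of the dynamic weight are our assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def expand_classifier(W_old, k_new):
    """Append zero-initialized columns for the k_new unseen classes."""
    return np.hstack([W_old, np.zeros((W_old.shape[0], k_new))])

def build_retrain_set(H_ex, Y_ex, H_new, Y_new):
    """Union of the stored exemplars E and the new-domain data D_n."""
    return np.vstack([H_ex, H_new]), np.concatenate([Y_ex, Y_new])

def ccfi_loss(W, W_old, fisher, H, Y, eps=1e-8):
    """Cross entropy on the combined set plus a dynamically weighted
    consolidation penalty on the old columns of W (assumed form)."""
    k_o = W_old.shape[1]
    P = softmax(H @ W)
    ce = -np.log(P[np.arange(len(Y)), Y] + 1e-12).mean()
    cons = float((fisher * (W[:, :k_o] - W_old) ** 2).sum())
    lam = ce / (cons + eps)
    return ce + lam * cons
```

Any gradient-based optimizer can then minimize `ccfi_loss` over W, after which the parameter information table is recomputed for the next round.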

Experiment
In this section, the CCFI model is first compared with state-of-the-art methods under a continual learning setting. Further evaluations are then conducted to examine the effectiveness of the individual components of CCFI.

Experiment Settings
Datasets. We evaluate our proposed CCFI and the comparison methods on the publicly available 150-class dataset (Larson et al., 2019) and a real-world product dataset with 73 domains. The details of the datasets can be found in Appendix A.1.

Baselines. iCaRL (Rebuffi et al., 2017) and CoNDA (Li et al., 2019) are the continual learning approaches closest to CCFI; both are designed for the scenario with access to old data. We also add the fine-tuning and fixed-feature methods as baselines.
To make fair comparisons, CCFI and all the baselines use the same BERT backbone (Devlin et al., 2019), and observe the same amount of old data in all learning tasks. The implementation details can be found in Appendix A.2.
In the main body of experiments, we report results with a framework consisting of the BERT backbone and a one-layer linear classifier. We also conducted experiments with a multi-layer classifier, which can be found in Appendix A.3.

Quantitative Evaluation
Two key factors drive the performance of continual learning: 1) the number of new classes for retraining, and 2) the amount of observable old data. In this section, we first validate our model through a class-incremental learning task, keeping the amount of observable old data fixed and changing the number of new classes. We then study the effects of different exemplars by keeping the number of new classes unchanged while varying the observable old data.

Class-incremental Learning. In this part, we conduct the class-incremental learning evaluation on both the 150-class and 73-domain datasets. Class-incremental learning can be viewed as the benchmark protocol for continual learning with access to old data (Rebuffi et al., 2017; Li et al., 2019). Specifically, after the initial training, new domains are added in random order. After adding each batch of new data, the results are evaluated on the current dataset, considering all classes trained so far. Figure 1 and Figure 2 show the performance of class-incremental learning on the 73-domain and 150-class datasets. CCFI outperforms the other methods in all tasks on both datasets. Several specific observations can be made as follows.
• Overall performance. With the same number of new classes, CCFI always achieves the best overall accuracy among all methods, and this performance gap widens as more new classes are added for retraining.
• Performance fluctuations. The fine-tuning method is unstable. It is the second-best performer on the 73-domain dataset; however, it quickly drops to almost zero and fluctuates on the 150-class dataset, even though the experiment on the 150-class dataset uses a higher observable ratio of old data.
• Accuracy stages. Both the fixed-feature method and CoNDA display a "performance stage" pattern as more new classes are added, with CoNDA enjoying a "larger" stage than the fixed-feature method. For example, as shown in Figure 1a, CoNDA maintains stable performance as the number of newly added classes varies from 5 to 12, and then suddenly drops.
• Predictable performance. Both CCFI and iCaRL display linear patterns in overall performance, meaning class-incremental learning performance can be predicted and estimated, a preferable feature in many applications. But iCaRL starts at a lower accuracy and drops much faster than CCFI, probably because it discards the neural network and turns to the simple k-nearest neighbors algorithm as the final classifier. This phenomenon also confirms that CCFI enjoys the excellent performance of neural network classifiers while overcoming their drawback of catastrophic forgetting.

Different Exemplar Size
To provide insight into the working mechanisms of models capable of continual learning, we conduct experiments varying the exemplar set size with the number of new classes fixed. Figure 3 shows model performance under different exemplar sizes, obtained by changing the observable ratio of old data.
• Overall pattern. CCFI continues to beat the baselines with a clear performance advantage. In particular, CCFI can achieve high accuracy with a minimal amount (e.g., 1%) of old data, although all methods improve as the ratio of observable old data increases. A dramatic performance gain can also be observed for all models when the observable ratio of old data increases from zero to non-zero values. This phenomenon further confirms that our experimental setting with limited access to old data is practically useful.
• Consistent improvement. CCFI, CoNDA, and iCaRL obtain consistent improvements as the ratio of observable old data increases. However, the fixed-feature method does not benefit noticeably from more old data. This indicates that more observations of old data do not guarantee better performance, and further confirms the necessity of developing continual learning methods that can effectively utilize the information learned from exemplars.

Ablation Study
Our proposed CCFI outperforms all state-of-the-art methods. To provide further insight into its working mechanism, additional experiments are conducted to examine individual aspects of CCFI.

Dataset and Experimental Setting
To avoid confounds introduced by data and model complexity, the components are examined on a synthetic dataset with simple neural networks and fixed weight initialization. Specifically, we generate a synthetic dataset of 10 completely separable classes, each with 1,000 examples. As the continual learning setting, we use six classes for initial training and four classes as additional new classes for retraining. The neural network used in this section is a simple network with two fully-connected layers. The first layer serves as a feature extractor, which is fixed after the initial training; the second layer is used as a classifier that can be tuned during retraining. To ensure other parts do not affect the component being validated, the neural networks are initialized with the same weight matrix generated by a fixed random seed. With these settings, the results best reflect the true contribution of each component.
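A setup of this kind can be sketched as follows. The cluster construction (widely spaced centers with small isotropic noise) and the layer sizes are our assumptions; the paper specifies only that the classes are completely separable and the initialization seed is fixed.

```python
import numpy as np

def make_synthetic(n_classes=10, n_per_class=1000, dim=16, seed=0):
    """Completely separable synthetic classes: widely spaced cluster
    centers plus small isotropic noise (assumed construction)."""
    rng = np.random.default_rng(seed)
    centers = 10.0 * rng.normal(size=(n_classes, dim))
    X = np.repeat(centers, n_per_class, axis=0)
    X += 0.1 * rng.normal(size=X.shape)
    y = np.repeat(np.arange(n_classes), n_per_class)
    return X, y

def init_two_layer(dim, hidden, n_classes, seed=42):
    """Fixed-seed initialization: layer 1 is the (frozen) feature
    extractor, layer 2 the tunable classifier."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.1, size=(dim, hidden))
    W2 = rng.normal(scale=0.1, size=(hidden, n_classes))
    return W1, W2
```

Fixing both seeds makes repeated ablation runs start from identical weights, so any performance difference can be attributed to the component under test.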

Dynamical Weight Consolidation
First, we analyze the effectiveness of the dynamical weight consolidation component. Figure 4 plots the consolidation loss (the second part of Equation 8b) for the model using a traditional fixed weight and for our proposed dynamical weight consolidation. Several observations can be made as follows.
• Fixed weight with a large value. When the weight (λ in Equation 8b) is set to a large value (e.g., 10^25 in Figure 4a), the consolidation loss is hard to converge and suffers from fluctuations.
• Fixed weight with a small value. Conversely, if the weight is initialized with a relatively small value (e.g., 10^2 in Figure 4b), the consolidation loss is too small to be effective. In fact, as can be observed in Figure 4b, under the small-weight setting the consolidation loss even increases first before it slowly decreases. The increase indicates that the neural network tends to sacrifice consolidation loss to lower the overall loss. This happens when the new model modifies the important parameters learned by the original model, which are supposed to be kept with minimal changes for the purpose of continual learning.
• Dynamical weight consolidation. In contrast to the unstable behavior of the traditional method, as shown in Figure 4c, the consolidation loss converges quickly and stably under our proposed dynamical weight consolidation.

Fisher Information Sampling
The second set of experiments validates whether Fisher information sampling is indeed beneficial to overall performance, compared with using randomly selected examples.

Figure 5: Accuracy of the old task during retraining (panels include sampling rates 2% and 5%). Blue lines denote the accuracy using randomly selected exemplars, while red lines reflect the performance of exemplars generated by Fisher sampling.
To examine how much improvement can be obtained by Fisher sampling alone, we remove the weight consolidation component in this section. Thus the results reported here are outputs of the simple two-layer model using exemplars during retraining. From another point of view, these results show the amount of information the exemplars carry from the original model. Figure 5 plots the accuracy of the old task during the retraining process. Although the network is retrained with only a small set of old data, the accuracy is computed over all old data to fully examine the quality of the exemplars. Since the classes in the synthetic data are fully separable, the accuracy eventually reaches 100%. Therefore, the quality of the exemplars is demonstrated by the convergence speed. Faster convergence provided by an exemplar set is significant in three respects:
• Better computational efficiency. With the same amount of old data for retraining, the most obvious benefit of faster convergence is better computational efficiency, since the model requires less retraining time.
• Less "damage" to the original model. Faster convergence indicates less "damage" to the original model. All weight consolidation schemes act like "buffers" for the old parameters: with these schemes, old parameters change more slowly when new tasks arrive. To best cooperate with the consolidation schemes, good exemplars should achieve comparably good performance within fewer retraining epochs, since more retraining epochs mean that the new model has modified more parameters of the original network.
• More information about the original dataset. As mentioned above, since the synthetic data are fully separable, the accuracy eventually reaches 100%.
In this case, faster convergence can be "converted" into more information, as experiments with more data always require fewer epochs to converge. For example, as shown in Figure 5, many more epochs are needed at a sampling rate of 0.5% than at 1%. Figure 5 shows that exemplars generated by Fisher sampling converge much faster than randomly selected exemplars, which proves that Fisher sampling alone contributes effectively to the overall performance.

Conclusion
This paper proposes a hyperparameter-free model called CCFI for continual domain classification. CCFI records information of old models via exemplars selected by Fisher information sampling, and conducts efficient retraining through dynamical weight consolidation. The comparison against existing models reveals that CCFI is the best performer under various experimental environments, without additional effort in hyperparameter searching.

A.2 Implementation Details

Specific settings. In our implementation of CoNDA, we pick the hyperparameters pos = 0.5 and neg = 0.3. The fixed-feature method freezes the 12 layers of BERT after the initial training; only the parameters in the new classifier layer are allowed to be tuned, which in a way provides the function of continual learning. The fine-tuning method can modify parameters in all 12 layers of BERT, and can be viewed as the network with little prevention of catastrophic forgetting.

General settings. The Adam optimizer is used in all learning processes, and the learning rate is always set to 0.00001. All runs are trained on 4 V100 GPUs with a batch size of 32. The 150-class dataset is available at https://github.com/clinc/oos-eval. Our example code can be found at: https://github.com/tinghua-code/CCFI

A.3 Multi-layer Classifier Results
To examine the effect of the number of classifier layers (i.e., the amount of retrainable parameters), we run experiments under two frameworks. The first framework is the same as that used in the main experiments, consisting of the 12-layer BERT feature extractor and a one-layer linear classifier. The second framework keeps the BERT feature extractor unchanged and adds one more layer to the classifier. The results are listed in Tables 1 and 2, and several observations can be made as follows.
• CCFI remains the best performer. Our proposed CCFI model produces good performance regardless of the number of layers in the classifier, further confirming its effectiveness and stability.
• CoNDA is the second-best performer in both frameworks. Notably, the retraining performance of CoNDA increases when the number of layers increases.
• BERT fine-tuning and the feature extraction method become worse with more layers. These two baselines are sensitive to the structure of the classifier, which indicates that superficial variations of pre-trained models are not enough for continual learning.
• A one-layer classifier works well enough with BERT. As can be seen from Tables 1 and 2, the initial training results of all methods degrade when the number of classifier layers increases. Therefore, we report results based on a one-layer linear classifier in the main body of the paper.