A Concise Model for Multi-Criteria Chinese Word Segmentation with Transformer Encoder

Multi-criteria Chinese word segmentation (MCCWS) aims to exploit the relations among multiple heterogeneous segmentation criteria and further improve the performance of each single criterion. Previous work usually regards MCCWS as different tasks, which are learned together under the multi-task learning framework. In this paper, we propose a concise but effective unified model for MCCWS, which is fully shared by all the criteria. By leveraging the powerful ability of the Transformer encoder, the proposed unified model can segment Chinese text according to a unique criterion-token indicating the output criterion. Besides, the proposed unified model can segment both simplified and traditional Chinese and has an excellent transfer capability. Experiments on eight datasets with different criteria show that our model outperforms our single-criterion baseline model and other multi-criteria models. The source code of this paper is available on GitHub.


Introduction
Chinese word segmentation (CWS) is a preliminary step in processing Chinese text. Mainstream CWS methods regard CWS as a character-based sequence labeling problem, in which each character is assigned a label to indicate its boundary information. Recently, various neural models have been explored to reduce the effort of feature engineering (Chen et al., 2015a,b; Qun et al., 2020; Wang and Xu, 2017; Kurita et al., 2017; Ma et al., 2018).
Recently, multi-criteria Chinese word segmentation (MCCWS) has been proposed to effectively utilize heterogeneous resources with different segmentation criteria. Specifically, previous work regards each segmentation criterion as a single task under the framework of multi-task learning, where a shared layer is used to extract the criteria-invariant features and a private layer is used to extract the criteria-specific features. However, it is unnecessary to use a specific private layer for each criterion, since the different criteria often have partial overlaps. For the example in Table 1, the segmentation of "林丹(Lin Dan)" is the same under the CTB and MSRA criteria, and the segmentation of "总|冠军(the championship)" is the same under the PKU and MSRA criteria. All three criteria have the same segmentation for the word "赢得(won)". Although these criteria are inconsistent, they share some partial segmentations. Therefore, it is appealing to use a unified model for all the criteria. At the inference phase, a criterion-token is taken as input to indicate the target segmentation criterion. Following this idea, Gong et al. (2018) used multiple LSTMs and a criterion switcher at every position to automatically switch the routing among these LSTMs. He et al. (2019) used a shared BiLSTM to deal with all the criteria by adding two artificial tokens at the beginning and end of an input sentence to specify the target criterion. However, due to the long-range dependency problem, it is hard for a BiLSTM to carry the criterion information to each character in a long sentence.
In this work, we propose a concise unified model for the MCCWS task by integrating shared knowledge from multiple segmentation criteria. Inspired by the success of the Transformer (Vaswani et al., 2017), we design a fully shared architecture for MCCWS, where a shared Transformer encoder is used to extract the criterion-aware contextual features and a shared decoder is used to predict the criterion-specific labels. An artificial token is added at the beginning of the input sentence to determine the output criterion. A similar idea has also been used in machine translation, where Johnson et al. (2017) used a single model to translate between multiple languages. Figure 1 illustrates our model. There are two reasons to use the Transformer encoder for MCCWS. The primary reason is its neatness and ingenious simplicity in modeling the criterion-aware context representation for each character. Since the Transformer encoder uses the self-attention mechanism to capture the interaction between every two tokens in a sentence, each character can immediately perceive the information of the criterion-token as well as the context information. The secondary reason is that the Transformer encoder has potential advantages in capturing long-range context information and has better parallel efficiency than the popular LSTM-based encoders. Finally, we experiment with eight segmentation criteria on five simplified Chinese and three traditional Chinese corpora. Experiments show that the proposed model is effective in improving the performance of MCCWS. The contributions of this paper can be summarized as follows.
• We propose a concise unified model for MCCWS based on the Transformer encoder, which adopts a single fully-shared model to segment sentences according to a given target criterion. It is attractive in practice to use a single model to produce multiple outputs with different criteria.
• By a thorough investigation, we show the feasibility of using a unified CWS model to segment both simplified and traditional Chinese (see Sec. 4.3). We think it is a promising direction for CWS to exploit the collective knowledge of these two kinds of Chinese.
• The learned criterion embeddings reflect the relations between different criteria, which gives our model a better transfer capability to a new criterion (see Sec. 4.4) just by finding a new criterion embedding in the latent semantic space.
• It is the first attempt to train the Transformer encoder from scratch for the CWS task. Although we mainly address its conciseness and suitability for MCCWS in this paper and do not intend to optimize a specific Transformer encoder for single-criterion CWS (SCCWS), we show that the Transformer encoder is also valid for SCCWS. The potential advantages of the Transformer encoder are that it can effectively extract the long-range interactions among characters and has a better parallel ability than LSTM-based encoders.

Background
In this section, we briefly describe the background knowledge of our work.

Neural Architecture for CWS
Usually, the CWS task can be viewed as a character-based sequence labeling problem. Specifically, each character in a sentence $X = \{x_1, \ldots, x_T\}$ is labeled as one of $y \in \mathcal{L} = \{B, M, E, S\}$, indicating the beginning, middle, or end of a word, or a single-character word. The aim of the CWS task is to find the ground-truth label sequence $Y^* = \{y_1^*, \ldots, y_T^*\}$:

$$Y^* = \arg\max_{Y \in \mathcal{L}^T} p(Y|X). \quad (1)$$

Recently, various neural models have been widely used in CWS and can effectively reduce the effort of feature engineering. The modern architecture of neural CWS usually consists of three components:

Embedding Layer: In neural models, the first step is to map discrete language symbols into a distributed embedding space. Formally, each character $x_t$ is mapped to $\mathbf{e}_{x_t} \in \mathbb{R}^{d_e}$, where $d_e$ is a hyper-parameter indicating the size of the character embedding.

Encoding Layer: The encoding layer extracts the contextual features for each character.
For example, a prevalent choice for the encoding layer is the bi-directional LSTM (BiLSTM) (Hochreiter and Schmidhuber, 1997), which can incorporate information from both sides of the sequence:
$$\overrightarrow{\mathbf{h}}_t = \mathrm{LSTM}(\mathbf{e}_{x_t}, \overrightarrow{\mathbf{h}}_{t-1}; \theta_e), \quad \overleftarrow{\mathbf{h}}_t = \mathrm{LSTM}(\mathbf{e}_{x_t}, \overleftarrow{\mathbf{h}}_{t+1}; \theta_e), \quad \mathbf{h}_t = \overrightarrow{\mathbf{h}}_t \oplus \overleftarrow{\mathbf{h}}_t,$$

where $\overrightarrow{\mathbf{h}}_t$ and $\overleftarrow{\mathbf{h}}_t$ are the hidden states at step $t$ of the forward and backward LSTMs respectively, and $\theta_e$ denotes all the parameters of the BiLSTM layer.
Besides BiLSTM, CNNs are also used as an alternative to extract features.
Decoding Layer: The extracted features are then fed into a conditional random field (CRF) (Lafferty et al., 2001) layer or a multi-layer perceptron (MLP) for tag inference.
When using CRF as the decoding layer, $p(Y|X)$ in Eq. (1) can be formalized as

$$p(Y|X) = \frac{\Psi(Y|X)}{\sum_{Y'} \Psi(Y'|X)},$$

where $\Psi(Y|X)$ is the potential function. In a first-order linear-chain CRF, we have

$$\Psi(Y|X) = \prod_{t=2}^{T} \psi(X, t, y_{t-1}, y_t), \quad \psi(X, t, y', y) = \exp\big(\delta(X, t)_y + b_{y'y}\big),$$

where $b_{y'y} \in \mathbb{R}$ is a trainable parameter for the label pair $(y', y)$, and the score function $\delta(X, t) \in \mathbb{R}^{|\mathcal{L}|}$ calculates the score of each label for tagging the $t$-th character:

$$\delta(X, t) = \mathbf{W}_\delta^\top \mathbf{h}_t + \mathbf{b}_\delta,$$

where $\mathbf{h}_t$ is the hidden state of the encoder at step $t$, and $\mathbf{W}_\delta \in \mathbb{R}^{d_h \times |\mathcal{L}|}$ and $\mathbf{b}_\delta \in \mathbb{R}^{|\mathcal{L}|}$ are trainable parameters.
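As an illustration, at inference time the best label sequence under a first-order linear-chain CRF can be found with the Viterbi algorithm. The following is a minimal NumPy sketch (the function name and array layout are our own assumptions, not the paper's implementation):

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """Best label sequence under a first-order linear-chain CRF.

    emissions:   (T, L) array of per-position label scores delta(X, t).
    transitions: (L, L) array where transitions[i, j] scores label i -> j.
    """
    T, L = emissions.shape
    score = emissions[0].copy()          # best score of paths ending in each label
    back = np.zeros((T, L), dtype=int)   # backpointers
    for t in range(1, T):
        total = score[:, None] + transitions + emissions[t][None, :]
        back[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    best = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        best.append(int(back[t, best[-1]]))
    return best[::-1]
```

For CWS, $|\mathcal{L}| = 4$ (the BMES labels), and the transition matrix can additionally penalize invalid label pairs such as B followed by S.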
When using MLP as the decoding layer, $p(Y|X)$ in Eq. (1) is directly predicted by an MLP with a softmax output layer:

$$p(y_t|X) = \mathrm{softmax}\big(\mathrm{MLP}(\mathbf{h}_t; \theta_d)\big),$$

where $\theta_d$ denotes all the parameters of the MLP layer.
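As a concrete illustration of the BMES labeling scheme introduced above, a gold segmentation can be converted into training labels as follows (a minimal sketch; the function name is ours):

```python
def to_bmes(words):
    """Convert a segmented sentence (a list of words) to BMES labels.

    Single-character words get "S"; longer words get "B", then "M"s, then "E".
    """
    labels = []
    for w in words:
        if len(w) == 1:
            labels.append("S")
        else:
            labels.extend(["B"] + ["M"] * (len(w) - 2) + ["E"])
    return labels
```

For example, the PKU-style segmentation "林丹 | 赢得 | 总 | 冠军" yields the labels B E B E S B E.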

MCCWS with Multi-Task Learning
To improve the performance of CWS by exploiting multiple corpora with heterogeneous criteria, previous work utilizes the multi-task learning framework to model the shared information among these different criteria.
Formally, assuming that there are $M$ corpora with heterogeneous segmentation criteria, we refer to $\mathcal{D}_m$ as corpus $m$ with $N_m$ samples:

$$\mathcal{D}_m = \{(X_n^{(m)}, Y_n^{(m)})\}_{n=1}^{N_m},$$

where $X_n^{(m)}$ and $Y_n^{(m)}$ denote the $n$-th sentence and the corresponding label sequence in corpus $m$ respectively.
The encoding layer introduces a shared encoder to mine the common knowledge across multiple corpora, together with the original private encoder. The architecture of MTL-based MCCWS is shown in Figure 2b.
Concretely, for corpus m, a shared encoder and a private encoder are first used to extract the criterionagnostic and criterion-specific features.
$$\mathbf{H}^{(s)} = \mathrm{enc}_s(\mathbf{e}_X; \theta_e^{(s)}), \quad \mathbf{H}^{(m)} = \mathrm{enc}_m(\mathbf{e}_X; \theta_e^{(m)}),$$

where $\mathbf{e}_X = \{\mathbf{e}_{x_1}, \cdots, \mathbf{e}_{x_T}\}$ denotes the embeddings of the input characters $x_1, \cdots, x_T$; $\mathrm{enc}_s(\cdot)$ represents the shared encoder and $\mathrm{enc}_m(\cdot)$ represents the private encoder for corpus $m$; $\theta_e^{(s)}$ and $\theta_e^{(m)}$ are the shared and private parameters respectively. The shared and private encoders are usually implemented with RNN or CNN networks.
Then a private decoder is used to predict the criterion-specific labels. For the $m$-th corpus, the probability of the output labels is

$$p(Y|X^{(m)}) = \mathrm{dec}_m(\mathbf{H}^{(s)}, \mathbf{H}^{(m)}; \theta_d^{(m)}), \quad (11)$$

where $\mathrm{dec}_m(\cdot)$ is a private CRF or MLP decoder for corpus $m$ ($m \in [1, M]$), taking the shared and private features as inputs, and $\theta_d^{(m)}$ denotes the parameters of the $m$-th private decoder.
Objective The objective is to maximize the log-likelihood of the true labels on all the corpora:

$$\mathcal{L}(\Theta) = \sum_{m=1}^{M} \sum_{n=1}^{N_m} \log p(Y_n^{(m)} | X_n^{(m)}; \Theta),$$

where $\Theta$ denotes all the private and shared parameters, including the embedding matrix $\mathbf{E}$.

Proposed Unified Model
In this work, we propose a more concise architecture for MCCWS, which adopts the Transformer encoder (Vaswani et al., 2017) to extract the contextual features for each input character. In our proposed architecture, both the encoder and the decoder are shared by all the criteria. The only difference for each criterion is that a unique token is taken as input to specify the target criterion, which makes the shared encoder capture the criterion-aware representation. Figure 2 illustrates the difference between our proposed model and the previous models. A more detailed architecture for MCCWS is shown in Figure 3.
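The criterion-token mechanism itself is simple: the input sequence is prepended with one artificial token naming the target criterion. A minimal sketch (the token format and function name are our assumptions):

```python
def add_criterion_token(chars, criterion):
    """Prepend an artificial criterion-token so the shared encoder
    knows which segmentation criterion to produce."""
    return ["[" + criterion + "]"] + list(chars)
```

The same sentence can then be segmented under different criteria just by changing the token, e.g. `add_criterion_token(sentence, "pku")` versus `add_criterion_token(sentence, "msra")`.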

Embedding Layer
Given a sentence $X = \{x_1, \ldots, x_T\}$, we first map it into a vector sequence in which each token is a $d_{model}$-dimensional vector. Besides the standard character embedding, we introduce three extra embeddings: criterion embedding, bigram embedding, and position embedding.
1) Criterion Embedding: Firstly, we add a unique criterion-token at the beginning of $X$ to indicate the output criterion. For the $m$-th criterion, the criterion-token is $[m]$, and we use $\mathbf{e}_{[m]}$ to denote its embedding. Thus, the model can learn the relations between different criteria in the latent embedding space.
2) Bigram Embedding: As shown in previous work (Chen et al., 2015b; Shao et al., 2017), character-level bigram features can significantly benefit the CWS task. Following their settings, we also introduce bigram embeddings to augment the character-level unigram embedding. The representation of character $x_t$ is

$$\mathbf{e}_{x_t} = \mathrm{FC}\big(\mathbf{e}^{uni}_{x_t} \oplus \mathbf{e}^{bi}_{x_{t-1}x_t} \oplus \mathbf{e}^{bi}_{x_t x_{t+1}}\big),$$

where $\mathbf{e}^{uni}$ and $\mathbf{e}^{bi}$ denote the $d$-dimensional embedding vectors of the unigram and bigrams, $\oplus$ is the concatenation operator, and $\mathrm{FC}$ is a fully connected layer that maps the concatenated character embedding of dimension $3d$ into the embedding $\mathbf{e}_{x_t} \in \mathbb{R}^{d_{model}}$.

3) Position Embedding: To capture the order information of a sequence, a position embedding $PE$ is used for each position. The position embedding can be learnable parameters or predefined. In this work, we use the predefined position embedding following (Vaswani et al., 2017). For the $t$-th character in a sentence, its position embedding is defined by

$$PE_{(t, 2i)} = \sin\big(t / 10000^{2i/d_{model}}\big), \quad PE_{(t, 2i+1)} = \cos\big(t / 10000^{2i/d_{model}}\big),$$

where $i$ denotes the dimension index of the position embedding. Finally, the embedding matrix of the sequence $X = \{x_1, \cdots, x_T\}$ with criterion $m$ is formulated as

$$\mathbf{H} = [\mathbf{e}_{[m]};\ \mathbf{e}_{x_1} + PE_1;\ \cdots;\ \mathbf{e}_{x_T} + PE_T],$$

where $\mathbf{H} \in \mathbb{R}^{(T+1) \times d_{model}}$; $(T+1)$ and $d_{model}$ represent the length and the dimension of the input vector sequence.
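The predefined sinusoidal position embedding described above can be computed as follows (a minimal NumPy sketch, assuming an even d_model):

```python
import numpy as np

def position_embedding(T, d_model):
    """Sinusoidal position embeddings (Vaswani et al., 2017): sine on even
    dimensions, cosine on odd dimensions. Assumes d_model is even."""
    pe = np.zeros((T, d_model))
    pos = np.arange(T)[:, None]            # positions t = 0 .. T-1
    i = np.arange(0, d_model, 2)[None, :]  # even dimension indices 2i
    angle = pos / np.power(10000.0, i / d_model)
    pe[:, 0::2] = np.sin(angle)
    pe[:, 1::2] = np.cos(angle)
    return pe
```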

Encoding Layer
In sequence modeling, RNNs and CNNs often suffer from the long-term dependency problem and cannot effectively extract the non-local interactions in a sentence. Recently, fully-connected self-attention architectures, such as the Transformer (Vaswani et al., 2017), have achieved great success in many NLP tasks.
In this work, we adopt the Transformer encoder as our encoding layer, in which several multi-head self-attention layers are used to extract the contextual features for each character.
Given a sequence of vectors $\mathbf{H} \in \mathbb{R}^{(T+1) \times d_{model}}$, a single-head self-attention first projects $\mathbf{H}$ into three different matrices: the query matrix $\mathbf{Q} \in \mathbb{R}^{(T+1) \times d_k}$, the key matrix $\mathbf{K} \in \mathbb{R}^{(T+1) \times d_k}$, and the value matrix $\mathbf{V} \in \mathbb{R}^{(T+1) \times d_v}$, and then uses scaled dot-product attention to get the output representation:

$$\mathrm{Attn}(\mathbf{Q}, \mathbf{K}, \mathbf{V}) = \mathrm{softmax}\Big(\frac{\mathbf{Q}\mathbf{K}^\top}{\sqrt{d_k}}\Big)\mathbf{V}, \qquad \mathbf{Q} = \mathbf{H}\mathbf{W}_Q,\ \mathbf{K} = \mathbf{H}\mathbf{W}_K,\ \mathbf{V} = \mathbf{H}\mathbf{W}_V,$$

where the matrices $\mathbf{W}_Q \in \mathbb{R}^{d_{model} \times d_k}$, $\mathbf{W}_K \in \mathbb{R}^{d_{model} \times d_k}$, and $\mathbf{W}_V \in \mathbb{R}^{d_{model} \times d_v}$ are learnable parameters, and $\mathrm{softmax}(\cdot)$ is performed row-wise.

The Transformer encoder consists of several stacked multi-head self-attention layers and fully-connected layers. Assuming the input of a multi-head self-attention layer is $\mathbf{H}$, its output $\tilde{\mathbf{H}}$ is calculated by

$$\mathbf{H}' = \text{layer-norm}\big(\mathbf{H} + \mathrm{MultiHead}(\mathbf{H})\big), \qquad \tilde{\mathbf{H}} = \text{layer-norm}\big(\mathbf{H}' + \mathrm{FFN}(\mathbf{H}')\big),$$

where $\text{layer-norm}(\cdot)$ represents layer normalization (Ba et al., 2016). All the tasks with different criteria use the same encoder. Nevertheless, with different criterion-tokens $[m]$, the encoder can effectively extract the criterion-aware representation for each character.
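The single-head scaled dot-product self-attention described above can be sketched in NumPy (illustrative only; multi-head attention, layer normalization, and the feed-forward sublayer are omitted):

```python
import numpy as np

def softmax(x):
    """Row-wise softmax, numerically stabilized."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(H, W_Q, W_K, W_V):
    """Single-head scaled dot-product self-attention over the sequence H.

    H: (T+1, d_model) input (criterion-token plus T characters).
    Returns a (T+1, d_v) matrix; every position, including each character,
    directly attends to the criterion-token at position 0.
    """
    Q, K, V = H @ W_Q, H @ W_K, H @ W_V
    d_k = Q.shape[-1]
    A = softmax(Q @ K.T / np.sqrt(d_k))  # (T+1, T+1) attention weights
    return A @ V
```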

Decoding Layer
In the standard multi-task learning framework, each task has its own private decoder to predict the task-specific labels. Different from previous work, we use a shared decoder for all the tasks, since we have already extracted the criterion-aware representation for each character. In this work, we use CRF as the decoder since it is slightly better than MLP (see Sec. 4.2).
With the fully-shared encoder and decoder, our model is more concise than the shared-private architectures (Huang et al., 2019).

Experiments
Datasets We use eight CWS datasets from SIGHAN2005 (Emerson, 2005) and SIGHAN2008 (Jin and Chen, 2008). Among them, the AS, CITYU, and CKIP datasets are in traditional Chinese, while the MSRA, PKU, CTB, NCC, and SXU datasets are in simplified Chinese. Except where otherwise stated, we follow the setting of Gong et al. (2018) and translate the AS, CITYU, and CKIP datasets into simplified Chinese. We do not balance the datasets, and we randomly pick 10% of the examples from the training set as the development set for all datasets. Similar to previous work, we preprocess all the datasets by replacing continuous Latin characters and digits with a unique token, and by converting all digits, punctuation, and Latin letters to half-width to deal with the full/half-width mismatch between the training and test sets.
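The full-width/half-width normalization mentioned above can be sketched as follows (our own minimal version; the paper's exact preprocessing script may differ):

```python
def to_halfwidth(text):
    """Map full-width digits, Latin letters, and punctuation to half-width.

    The full-width ASCII block U+FF01..U+FF5E is offset from ASCII by 0xFEE0;
    the ideographic space U+3000 maps to an ordinary space.
    """
    out = []
    for ch in text:
        code = ord(ch)
        if code == 0x3000:
            code = 0x20
        elif 0xFF01 <= code <= 0xFF5E:
            code -= 0xFEE0
        out.append(chr(code))
    return "".join(out)
```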
We have checked the annotation schemes of the different datasets: they are only partially shared, and no two datasets have the same scheme. According to our statistics, the averaged overlap is about 20.5% for 3-grams and 4.4% for 5-grams. Table 2 gives the details of the eight datasets after preprocessing. For the training and development sets, lines are split into shorter sentences or clauses by punctuation, in order to enable faster batching.
Pre-trained Embedding As shown in previous work (Chen et al., 2015b; Shao et al., 2017), n-gram features are of great benefit to Chinese word segmentation and POS tagging tasks. Thus we use unigram and bigram embeddings for our models. We first pre-train unigram and bigram embeddings on the Chinese Wikipedia corpus with the method proposed in (Ling et al., 2015), which improves standard word2vec by incorporating token order information.
Hyper-parameters We use the Adam optimizer (Kingma and Ba, 2014) with the same warmup strategy as (Vaswani et al., 2017). The development set is used for parameter tuning. All the models are trained for 100 epochs. Pre-trained embeddings are fixed for the first 80 epochs and then updated during the following epochs. Table 3 shows the detailed hyper-parameters.

Table 2: Details of the eight datasets after preprocessing. "Word Types" is the number of unique words; "Char Types" is the number of unique characters; "OOV Rate" is the Out-Of-Vocabulary rate.

Table 4 shows the experimental results of the proposed model on the test sets of the eight CWS datasets. We first compare our Transformer encoder with the previous models in the single-criterion scenario. The comparison is presented in the upper block of Table 4. Since Switch-LSTMs (Gong et al., 2018) is designed for MCCWS, it is just slightly better than BiLSTM in the single-criterion scenario. Compared to the LSTM-based encoders, the Transformer encoder brings a noticeable improvement over Switch-LSTMs (Gong et al., 2018), and gives a comparable performance to (Ma et al., 2018). In this work, we do not intend to prove the superiority of the Transformer encoder over LSTM-based encoders in the single-criterion scenario. Our purpose is to build a concise unified model based on the Transformer encoder for MCCWS.

Overall Results
In the multi-criteria scenario, we compare our unified model with the unified BiLSTM (He et al., 2019) and Switch-LSTMs (Gong et al., 2018). The lower block of Table 4 displays the comparison. Firstly, although the different criteria are trained together, our unified model achieves better performance on all datasets except CTB. Compared to the single-criterion scenario, the multi-criteria scenario obtains a gain of 0.42 in average F1 score. Moreover, our unified model brings a significant improvement of 5.05 in OOV recall. Secondly, compared to previous MCCWS models, our unified model also achieves a better average F1 score. In particular, our unified model significantly outperforms the unified BiLSTM (He et al., 2019), which indicates that the Transformer encoder is more effective in carrying the criterion information than BiLSTM. The reason is that the Transformer encoder can model the interaction between the criterion-token and each character directly, while BiLSTM needs to carry the criterion information step-by-step from the two ends to the middle of the input sentence, so the criterion information can be lost for long sentences.
There are about 200 sentences shared by more than one dataset with different segmentation schemes, but it is not much harder to segment them correctly: their F1 score is 96.84. The learned criterion embeddings also show that each criterion is different from the others. Among them, MSRA is obviously different from the others. A possible reason is that a named entity is regarded as a whole word in the MSRA criterion, which significantly distinguishes it from the other criteria. Table 5 shows the effectiveness of each component in our model. The first ablation study verifies the effectiveness of the CRF decoder, which is popular in most CWS models. The comparison between the first two lines indicates that using CRF does not make much difference. Since a model with CRF takes longer to train and run inference, we suggest not using CRF in Transformer encoder models in practice.

Ablation Study
The other two ablation studies are to evaluate the effect of the bigram feature and pre-trained embeddings. We can see that their effects vary in different datasets. Some datasets are more sensitive to the bigram feature, while others are more sensitive to pre-trained embeddings. In terms of average performance, the bigram feature and pre-trained embeddings are important and boost the performance considerably, but these two components do not have a clear winner.

Joint Training on Both Simplified and Traditional Corpora
In the above experiments, the traditional Chinese corpora (AS, CITYU, and CKIP) are translated into simplified Chinese. However, it might be more attractive to jointly train a unified model directly on the mixed corpora of simplified and traditional Chinese without translation. As a reference, the single model has been used to translate between multiple languages in the field of machine translation (Johnson et al., 2017).
To thoroughly investigate the feasibility of this idea, we study four different settings to train our model on simplified and traditional Chinese corpora.
1. The first setting ("8Simp") is to translate all the corpora into simplified Chinese. For the pre-trained embeddings, we use the simplified Chinese Wikipedia dump to pre-train the unigram and bigram embeddings. This setting is the same as in the previous experiments.
2. The second setting ("8Trad") is to translate all the corpora into traditional Chinese. For the pre-trained embeddings, we first convert the Wikipedia dump into traditional Chinese characters, and then use this converted corpus to pre-train the unigram and bigram embeddings.
3. The third setting ("5Simp, 3Trad") is to keep the original characters of the five simplified Chinese corpora and the three traditional Chinese corpora without translation. The unified model can take simplified or traditional Chinese sentences as input. In this setting, we pre-train the unigram and bigram embeddings in a joint embedding space: we merge the Wikipedia corpora used in "8Trad" and "8Simp" to form a mixed corpus, which contains both simplified and traditional Chinese characters, and pre-train the embeddings on this mixed corpus.
4. The last setting ("8Simp, 8Trad") is to simultaneously train our model on both the eight simplified Chinese corpora of "8Simp" and the eight traditional Chinese corpora of "8Trad". The pre-trained embeddings are the same as in "5Simp, 3Trad".

Table 6 shows that there is not much difference between these settings. This investigation indicates that it is feasible to train a unified model directly on the two kinds of Chinese characters.
To better understand the quality of the learned joint embedding space of the simplified and traditional Chinese, we conduct a qualitative analysis to illustrate the most similar bigrams for a target bigram. Similar bigrams are retrieved based on the cosine similarity calculated using the learned embeddings. As shown in Table 7, the traditional Chinese bigrams are similar to their simplified Chinese counterparts, and vice versa. The results show that the simplified and traditional Chinese bigrams are aligned well in the joint embedding space.
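The retrieval behind this qualitative analysis can be sketched as follows (the function name and data layout are our assumptions):

```python
import numpy as np

def most_similar(query, vocab, vectors, k=3):
    """Return the k items whose embeddings have the highest cosine
    similarity to `query`, excluding the query itself.

    vocab:   list of n-gram strings.
    vectors: (len(vocab), d) array of their embeddings.
    """
    q = vectors[vocab.index(query)]
    sims = (vectors @ q) / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q))
    order = np.argsort(sims)[::-1]
    return [vocab[i] for i in order if vocab[i] != query][:k]
```

In a well-aligned joint embedding space, querying a simplified bigram such as 梦想(dream) should then rank its traditional counterpart 夢想 near the top.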

Transfer Capability
Since, except for the criterion embedding, all other parts of the unified model are the same for different criteria, we want to explore whether a trained unified model can be transferred to a new criterion by learning only a new criterion embedding from a few examples.
We use the leave-one-out strategy to evaluate the transfer capability of our unified model. We first train a model on seven datasets, and then learn only the new criterion embedding with a few training instances from the left-out dataset. This scenario is also discussed in (Gong et al., 2018), and Figure 5 presents their results and ours (averaged F1 score). There are two observations. Firstly, for different numbers of samples, the transferred model always largely outperforms the models learned from scratch. We believe this indicates that learning a new criterion embedding is an effective way to transfer a trained unified model to a new criterion. Secondly, our model also has superior transferability compared to Switch-LSTMs ("Ours (trans)" versus "Switch-LSTMs (trans)").
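Abstractly, this transfer procedure optimizes only the new criterion embedding while every other parameter stays frozen inside the loss. The following toy sketch uses finite-difference gradient descent so it is self-contained (the paper would instead learn the embedding by ordinary backpropagation; the function name and interface are our assumptions):

```python
import numpy as np

def adapt_criterion_embedding(loss_fn, d_model, lr=0.5, steps=50, eps=1e-4):
    """Fit only a new criterion embedding e_[new] by gradient descent.

    loss_fn(e) is the training loss on the few new-criterion examples,
    with the trained encoder/decoder frozen inside it. Gradients are
    estimated by central finite differences, so no autograd is needed.
    """
    rng = np.random.default_rng(0)
    e = rng.normal(scale=0.1, size=d_model)
    for _ in range(steps):
        grad = np.zeros(d_model)
        for i in range(d_model):
            e_plus, e_minus = e.copy(), e.copy()
            e_plus[i] += eps
            e_minus[i] -= eps
            grad[i] = (loss_fn(e_plus) - loss_fn(e_minus)) / (2 * eps)
        e -= lr * grad
    return e
```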

Related Work
The previous work on MCCWS can be categorized into two lines. One line is multi-task-based MCCWS, in which a multi-criteria learning framework for CWS uses a shared layer to extract the common underlying features and a private layer for each criterion to extract the criterion-specific features. Huang et al. (2019) proposed a domain-adaptive segmenter to capture diverse criteria based on Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018).
Another line is unified MCCWS. Gong et al. (2018) presented Switch-LSTMs, which consist of several LSTM layers and use a criterion switcher at every position to change the routing among these LSTMs automatically. However, the complexity of the model makes Switch-LSTMs hard to apply in practice. He et al. (2019) used a shared BiLSTM, adding two artificial tokens at the beginning and end of an input sentence to specify the output criterion. However, due to the long-range dependency problem, it is hard for a BiLSTM to carry the criterion information to each character in a long sentence.
Compared to the above two unified models, we use the Transformer encoder in our unified model, which can elegantly model the criterion-aware context representation for each character. With the Transformer, we just need a special criterion-token to specify the output criterion: each character can directly attend to the criterion-token to be aware of the target criterion. Thus, we can use a single model to produce different segmentation results for different criteria. Different from (Huang et al., 2019), which uses the pre-trained Transformer BERT and several extra projection layers for different criteria, our model is fully shared and more concise.

Conclusion and Future Work
We propose a concise unified model for MCCWS, which uses the Transformer encoder to extract the criterion-aware representation according to a unique criterion-token. Experiments on eight corpora show that our proposed model outperforms the previous models and has a stronger transfer capability. The conciseness of our model makes it easy to apply in practice.
In this work, we only adopt the vanilla Transformer encoder, since we just want to utilize its self-attention mechanism to neatly model the criterion-aware context representation for each character. Therefore, it is promising for future work to look for more effective adapted Transformer encoders for the CWS task, or to utilize pre-trained models, such as BERT-based MCCWS (Ke et al., 2020). Besides, we also plan to incorporate other sequence labeling tasks, such as POS tagging and named entity recognition, into the unified model.