Semi-supervised Clustering for Short Text via Deep Representation Learning

In this work, we propose a semi-supervised method for short text clustering, where we represent texts as distributed vectors with neural networks, and use a small amount of labeled data to specify our intention for clustering. We design a novel objective to combine the representation learning process and the k-means clustering process together, and optimize the objective with both labeled data and unlabeled data iteratively until convergence through three steps: (1) assign each short text to its nearest centroid based on its representation from the current neural networks; (2) re-estimate the cluster centroids based on cluster assignments from step (1); (3) update neural networks according to the objective by keeping centroids and cluster assignments fixed. Experimental results on four datasets show that our method works significantly better than several other text clustering methods.


Introduction
Text clustering is a fundamental problem in text mining and information retrieval. Its task is to group similar texts together such that texts within a cluster are more similar to each other than to texts in other clusters. Usually, a text is represented as a bag-of-words or term frequency-inverse document frequency (TF-IDF) vector, and then the k-means algorithm (MacQueen, 1967) is performed to partition a set of texts into homogeneous groups.
However, when dealing with short texts, the characteristics of short texts and of the clustering task raise several issues for conventional unsupervised clustering algorithms. First, the number of unique words in each short text is small; as a result, the lexical sparsity issue usually leads to poor clustering quality (Dhillon and Guan, 2003). Second, for a specific short text clustering task, we have prior knowledge or particular intentions before clustering, while fully unsupervised approaches may produce partitions inconsistent with those intentions. Take the sentences in Table 1 for example: they can be clustered into different partitions based on different intentions: apple {a, b, c} and orange {d, e, f} with a fruit type intention, or what-question {a, d}, when-question {b, e}, and yes/no-question {c, f} with a question type intention.
To address the lexical sparsity issue, one direction is to enrich text representations by extracting features and relations from Wikipedia (Banerjee et al., 2007) or an ontology (Fodeh et al., 2011). But this approach requires annotated knowledge, which is also language dependent. So the other direction, which directly encodes texts into distributed vectors with neural networks (Hinton and Salakhutdinov, 2006; Xu et al., 2015), becomes more interesting. To tackle the second problem, semi-supervised approaches (e.g., Bilenko et al., 2004; Davidson and Basu, 2007; Bair, 2013) have gained significant popularity in the past decades. Our question is: can we have a unified model that integrates neural networks into the semi-supervised framework?
In this paper, we propose a unified framework for the short text clustering task. We employ a deep neural network model to represent short sentences, and integrate it into a semi-supervised algorithm. Concretely, we extend the objective of the classical unsupervised k-means algorithm by adding a penalty term computed from labeled data. Thus, the new objective covers three key groups of parameters: the centroids of the clusters, the cluster assignment for each text, and the parameters within the deep neural networks. In the training procedure, we start from random initialization of the centroids and the neural networks, and then optimize the objective iteratively through three steps until convergence: (1) assign each short text to its nearest centroid based on its representation from the current neural networks; (2) re-estimate the cluster centroids based on the cluster assignments from step (1); (3) update the neural networks according to the objective while keeping the centroids and cluster assignments fixed.
Experimental results on four different datasets show that our method achieves significant improvements over several other text clustering methods.
In the following sections, we first describe our neural network models for text representation (Section 2). Then we introduce our semi-supervised clustering method and the learning algorithm (Section 3). Finally, we evaluate our method on four different datasets (Section 4).

Representation Learning for Short Texts
We represent each word with a dense vector w, so that a short text s is first represented as a matrix S = [w_1, ..., w_{|s|}], a concatenation of the vectors of all words in s, where |s| is the length of s. We then design two different types of neural networks to ingest the word vector sequence S: convolutional neural networks (CNN) and long short-term memory (LSTM) networks. More formally, we define the representation function as x = f(s), where x is the representation vector of the text s. We test both encoding functions (CNN and LSTM) in our experiments.
Inspired by Kim (2014), our CNN model views the sequence of word vectors as a matrix, and applies two sequential operations: convolution and max-pooling. Then, a fully connected layer is employed to convert the final representation vector into a fixed size. Figure 1 gives the diagram of the CNN model. In the convolution operation, we define a list of filters {w_o}, where the shape of each filter is d × h, d is the dimension of the word vectors, and h is the window size. Each filter is applied to a patch (a window of h vectors) of S, and generates a feature. We apply each filter to all possible patches in S, producing a series of features. The number of features depends on the shape of the filter w_o and the length of the input short text. To deal with the variable feature size, we perform a max-pooling operation over all the features and select the maximum value. Therefore, after the two operations, each filter generates only one feature. We define several filters by varying the window size and the initial values. Thus, a vector of features is obtained after the max-pooling operation, and the feature dimension is equal to the number of filters.
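To make this concrete, below is a minimal PyTorch sketch of such a CNN encoder. The window sizes, filter count, and output dimension shown are illustrative placeholders, not values fixed by the model description itself.

```python
import torch
import torch.nn as nn

class CNNEncoder(nn.Module):
    """Minimal sketch of the CNN text encoder: convolution, max-pooling
    over time, then a fully connected layer to a fixed output size."""
    def __init__(self, embed_dim=300, num_filters=500,
                 window_sizes=(1, 2, 3), out_dim=100):
        super().__init__()
        # One 1-D convolution per window size h; each filter spans the
        # full embedding dimension d, as described above.
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, num_filters, kernel_size=h)
             for h in window_sizes])
        # Fully connected layer converts the pooled features to a fixed size.
        self.fc = nn.Linear(num_filters * len(window_sizes), out_dim)

    def forward(self, S):
        # S: (batch, seq_len, embed_dim) -> (batch, embed_dim, seq_len)
        S = S.transpose(1, 2)
        # Max-pool over all positions, so each filter yields one feature.
        feats = [conv(S).relu().max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(feats, dim=1))
```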
Figure 2 gives the diagram of our LSTM model. We implement the standard LSTM block described in Graves (2012). Each word vector is fed into the LSTM model sequentially, and the mean of the hidden states over the entire sentence is taken as the final representation vector.
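A corresponding PyTorch sketch of the LSTM encoder, with an illustrative hidden size; the representation is the mean of the hidden states over the sentence:

```python
import torch.nn as nn

class LSTMEncoder(nn.Module):
    """Minimal sketch of the LSTM text encoder: mean of hidden states."""
    def __init__(self, embed_dim=300, hidden_dim=100):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def forward(self, S):
        # S: (batch, seq_len, embed_dim)
        hidden_states, _ = self.lstm(S)    # (batch, seq_len, hidden_dim)
        return hidden_states.mean(dim=1)   # average over the sentence
```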
Semi-supervised Clustering for Short Texts

Revisiting K-means Clustering
Given a set of texts {s_1, s_2, ..., s_N}, we represent them as a set of data points {x_1, x_2, ..., x_N}, where x_i can be a bag-of-words or TF-IDF vector in traditional approaches, or a dense vector from Section 2. The task of text clustering is to partition the data set into some number K of clusters, such that the sum of the squared distances of each data point to its closest cluster centroid is minimized. For each data point x_n, we define a set of binary variables r_nk ∈ {0, 1}, where k ∈ {1, ..., K}, describing which of the K clusters x_n is assigned to: if x_n is assigned to cluster k, then r_nk = 1 and r_nj = 0 for j ≠ k. Let µ_k denote the centroid of the k-th cluster. We can then formulate the objective function as

$$J_{\text{unsup}} = \sum_{n=1}^{N} \sum_{k=1}^{K} r_{nk}\, \lVert x_n - \mu_k \rVert^2 \qquad (1)$$

Our goal is to find the values of {r_nk} and {µ_k} that minimize J_unsup. The k-means algorithm optimizes J_unsup through an iterative two-step procedure (Bishop, 2006). Each iteration involves an E-step and an M-step. In the E-step, the algorithm minimizes J_unsup with respect to {r_nk} by keeping {µ_k} fixed. J_unsup is a linear function of {r_nk}, so we can optimize each data point separately by simply assigning the n-th data point to its closest cluster centroid:

$$r_{nk} = \begin{cases} 1 & \text{if } k = \arg\min_j \lVert x_n - \mu_j \rVert^2 \\ 0 & \text{otherwise} \end{cases} \qquad (2)$$

In the M-step, the algorithm minimizes J_unsup with respect to {µ_k} by keeping {r_nk} fixed. J_unsup is a quadratic function of {µ_k}, and it can be minimized by setting its derivative with respect to µ_k to zero.
Then, we can easily solve for {µ_k} as

$$\mu_k = \frac{\sum_{n} r_{nk}\, x_n}{\sum_{n} r_{nk}} \qquad (3)$$

In other words, µ_k is equal to the mean of all the data points assigned to cluster k.
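For reference, the two-step procedure can be written as a short NumPy sketch (with plain random initialization, for simplicity):

```python
import numpy as np

def kmeans(X, K, n_iters=100, seed=0):
    """Plain k-means: alternate the E-step (assign each point to its
    nearest centroid) and the M-step (recompute each centroid as the
    mean of its assigned points). X: (N, d) array."""
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), K, replace=False)]   # random initialization
    for _ in range(n_iters):
        # E-step: r[n] = index of the nearest centroid for x_n.
        dists = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
        r = dists.argmin(axis=1)
        # M-step: mu_k = mean of the points assigned to cluster k.
        new_mu = np.array([X[r == k].mean(axis=0) if (r == k).any()
                           else mu[k] for k in range(K)])
        if np.allclose(new_mu, mu):                # converged
            break
        mu = new_mu
    return r, mu
```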

Semi-supervised K-means with Neural Networks
The classical k-means algorithm only uses unlabeled data, and solves the clustering problem under the unsupervised learning framework. As already mentioned, the clustering results may not be consistent with our intention. In order to acquire useful clustering results, some supervised information should be introduced into the learning procedure. To this end, we employ a small amount of labeled data to guide the clustering process.
Following Section 2, we represent each text s as a dense vector x via the neural network f(s). Instead of training the text representation model separately, we integrate the training process into the k-means algorithm, so that both the labeled data and the unlabeled data can be used for representation learning and text clustering. Let us denote the labeled data set as {(s_1, y_1), (s_2, y_2), ..., (s_L, y_L)}, and the unlabeled data set as {s_{L+1}, s_{L+2}, ..., s_N}, where y_i is the given label for s_i. We then define the objective function as:

$$J_{\text{semi}} = \alpha \sum_{n=1}^{N} \sum_{k=1}^{K} r_{nk}\, \lVert f(s_n) - \mu_k \rVert^2 + \sum_{n=1}^{L} \Big( \lVert f(s_n) - \mu_{g_n} \rVert^2 + \sum_{k \neq g_n} \big[\, l + \lVert f(s_n) - \mu_{g_n} \rVert^2 - \lVert f(s_n) - \mu_k \rVert^2 \,\big]_+ \Big) \qquad (4)$$

The objective function contains two terms. The first term is adapted from the unsupervised k-means objective in Eq. (1), and the second term is defined to encourage the labeled data to be clustered in accordance with the given labels. α ∈ [0, 1] is used to tune the importance of the unlabeled data. The second term contains two parts. The first part penalizes a large distance between each labeled instance and its correct cluster centroid, where g_n = G(y_n) is the cluster ID mapped from the given label y_n, and the mapping function G(·) is implemented with the Hungarian algorithm (Munkres, 1957). The second part is a hinge loss with a margin l, where [x]_+ = max(x, 0): it incurs a loss whenever the distance to the correct centroid is not shorter (by the margin l) than the distance to any incorrect cluster centroid.
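As a sanity check on the objective, the following PyTorch sketch computes J_semi as reconstructed in Eq. (4) above; the `alpha` and `margin` defaults are illustrative, and hard assignments r are assumed given:

```python
import torch

def semi_loss(x, r, mu, labeled_idx, g, alpha=0.01, margin=1.0):
    # x: (N, d) representations f(s_n); r: (N,) cluster assignments;
    # mu: (K, d) centroids; labeled_idx: indices of the L labeled texts;
    # g: (L,) mapped cluster IDs g_n = G(y_n).
    d2 = ((x[:, None, :] - mu[None, :, :]) ** 2).sum(-1)    # (N, K)
    # First term: squared distance of every text to its assigned centroid.
    unsup = d2[torch.arange(len(x)), r].sum()
    # Second term: labeled texts only.
    d2_l = d2[labeled_idx]                                  # (L, K)
    d_correct = d2_l[torch.arange(len(g)), g]               # ||f(s_n)-mu_{g_n}||^2
    hinge = (margin + d_correct[:, None] - d2_l).clamp(min=0)
    mask = torch.ones_like(hinge)
    mask[torch.arange(len(g)), g] = 0                       # exclude k = g_n
    sup = d_correct.sum() + (hinge * mask).sum()
    return alpha * unsup + sup
```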
There are three groups of parameters in J_semi: the cluster assignment of each text {r_nk}, the cluster centroids {µ_k}, and the parameters within the neural network model f(·). Our goal is to find the values of {r_nk}, {µ_k}, and the parameters in f(·) that minimize J_semi. Inspired by the k-means algorithm, we design an algorithm that successively minimizes J_semi with respect to {r_nk}, {µ_k}, and the parameters in f(·). Table 2 gives the corresponding pseudocode. First, we initialize the cluster centroids {µ_k} with the k-means++ strategy (Arthur and Vassilvitskii, 2007), and randomly initialize all the parameters in the neural network model. Then, the algorithm iterates through three steps (assign cluster, estimate centroid, and update parameter) until J_semi converges.
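The loop can be sketched as follows; `kmeans_pp_init` is a hypothetical stand-in for the k-means++ initialization, while `semi_loss` (above) and `hungarian_map` / `update_centroids` (sketched below) are the routines described in this section:

```python
import torch

def train(encoder, S, labeled_idx, y, K, alpha=0.01, n_rounds=50):
    # S: padded batch of word-vector sequences for all N texts;
    # labeled_idx: indices of the L labeled texts; y: their integer labels.
    with torch.no_grad():
        mu = kmeans_pp_init(encoder(S), K)   # k-means++ init (hypothetical helper)
    opt = torch.optim.Adam(encoder.parameters())
    for _ in range(n_rounds):
        x = encoder(S)                       # current representations f(s_n)
        # (1) assign cluster: nearest centroid for every text.
        r = ((x[:, None] - mu[None]) ** 2).sum(-1).argmin(1)
        G = hungarian_map(r[labeled_idx].numpy(), y, K)
        g = torch.as_tensor(G[y])            # g_n = G(y_n) for labeled texts
        # (2) estimate centroid: closed-form update of Eq. (5), encoder fixed.
        mu = torch.as_tensor(
            update_centroids(x.detach().numpy(), r.numpy(), mu.numpy(),
                             labeled_idx, g.numpy(), alpha),
            dtype=x.dtype)
        # (3) update parameter: one Adam step on J_semi, centroids fixed.
        opt.zero_grad()
        semi_loss(x, r, mu, labeled_idx, g, alpha).backward()
        opt.step()
    return r, mu
```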
The assign cluster step minimizes J_semi with respect to {r_nk} by keeping f(·) and {µ_k} fixed. Its goal is to assign a cluster ID to each data point. Note that the second term in Eq. (4) does not depend on {r_nk}. Thus, we only need to minimize the first term, by assigning each text to its nearest cluster centroid, which is identical to the E-step in the k-means algorithm. In this step, we also compute the mapping between the given labels {y_i} and the cluster IDs (with the Hungarian algorithm) based on the cluster assignments of all labeled data.
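One way to compute the mapping G(·) is with SciPy's Hungarian solver, assuming labels and cluster IDs are integers in {0, ..., K-1}:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def hungarian_map(assignments, labels, K):
    """Map each label to a cluster ID so that the number of labeled
    texts landing in their mapped cluster is maximized."""
    # count[y, k] = number of labeled texts with label y assigned to cluster k
    count = np.zeros((K, K))
    for k, y in zip(assignments, labels):
        count[y, k] += 1
    # The Hungarian algorithm minimizes cost, so negate the counts.
    rows, cols = linear_sum_assignment(-count)
    return cols        # G: G[y] is the cluster ID mapped to label y
```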
The estimate centroid step minimizes J_semi with respect to {µ_k} by keeping {r_nk} and f(·) fixed, which corresponds to the M-step in the k-means algorithm. It aims to estimate the cluster centroids {µ_k} based on the cluster assignments {r_nk} from the assign cluster step. The second term in Eq. (4) makes each labeled instance involved in estimating the cluster centroids. By solving ∂J_semi/∂µ_k = 0, we get

$$\mu_k = \frac{\alpha \sum_{n=1}^{N} r_{nk}\, f(s_n) + \sum_{n=1}^{L} w_{nk}\, f(s_n)}{\alpha \sum_{n=1}^{N} r_{nk} + \sum_{n=1}^{L} w_{nk}} \qquad (5)$$

where

$$w_{nk} = \begin{cases} 1 + \sum_{k' \neq g_n} h_{nk'} & \text{if } k = g_n \\ -\,h_{nk} & \text{otherwise} \end{cases}, \quad h_{nk} = \mathbb{1}\big[\, l + \lVert f(s_n) - \mu_{g_n} \rVert^2 - \lVert f(s_n) - \mu_k \rVert^2 > 0 \,\big] \qquad (6)$$

The first term in the numerator of Eq. (5) is the contribution from all data points, where αr_nk is the weight of s_n for µ_k. The second term comes from the labeled data, where w_nk is the weight of the labeled instance s_n for µ_k.
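A NumPy sketch of this update, following the reconstruction of Eqs. (5)-(6) above (its exact weights inherit that reconstruction's assumptions): each labeled instance pulls its correct centroid, with extra weight for every active hinge term, and pushes away each violating incorrect centroid.

```python
import numpy as np

def update_centroids(x, r, mu, labeled_idx, g, alpha=0.01, margin=1.0):
    # x: (N, d) representations; r: (N,) assignments; mu: (K, d) current
    # centroids; g: (L,) correct cluster IDs for the labeled instances.
    K = len(mu)
    W = alpha * np.eye(K)[r]                    # alpha * r_nk for all points
    for n, gn in zip(labeled_idx, g):
        d2 = ((x[n] - mu) ** 2).sum(-1)         # distances to all centroids
        active = (margin + d2[gn] - d2) > 0     # hinge terms that fire
        active[gn] = False
        w = -active.astype(float)               # w_nk = -h_nk for k != g_n
        w[gn] = 1.0 + active.sum()              # w_{n,g_n} = 1 + #active
        W[n] += w
    return (W.T @ x) / W.sum(axis=0)[:, None]   # Eq. (5)
```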
The update parameter step minimizes J_semi with respect to f(·) by keeping {r_nk} and {µ_k} fixed, which has no counterpart in the k-means algorithm. The main goal is to update the parameters of the text representation model. We take J_semi as the loss function, and train the neural networks with the Adam algorithm (Kingma and Ba, 2014).


Experiments

We evaluate our method on four short text datasets. (1) question type is the TREC question dataset (Li and Roth, 2002), where all the questions are classified into 6 categories: abbreviation, description, entity, human, location, and numeric. (2) ag news contains short texts extracted from the AG's news corpus, where all the texts are classified into 4 categories: World, Sports, Business, and Sci/Tech (Zhang and LeCun, 2015). (3) dbpedia is the DBpedia ontology dataset, constructed by picking 14 non-overlapping classes from DBpedia 2014 (Lehmann et al., 2014). (4) yahoo answer is the 10-topic classification dataset extracted from the Yahoo! Answers Comprehensive Questions and Answers version 1.0 dataset by Zhang and LeCun (2015). We use all 5,952 questions in the question type dataset. The other three datasets contain too many instances (e.g., 1,400,000 instances in yahoo answer), and running clustering experiments on such large datasets is quite inefficient. Following the solution in Xu et al. (2015), we randomly choose 1,000 samples per class for each of the other three datasets. Within each dataset, we randomly sample 10% of the instances as labeled data, and evaluate the performance on the remaining 90%. Table 3 summarizes the statistics of these datasets.
In all experiments, we set the word vector dimension to d = 300, and pre-train the word vectors with the word2vec toolkit (Mikolov et al., 2013) on the English Gigaword (LDC2011T07). We tuned different dimensions for the word vectors: when the size is small (50 or 100), performance drops significantly; when it is larger (300, 500, or 1000), the curve flattens out, so we fixed it at 300 to keep our model efficient. The number of clusters is set to the number of labels in the dataset. The clustering performance is evaluated with two metrics: Adjusted Mutual Information (AMI) (Vinh et al., 2009) and accuracy (ACC) (Amigó et al., 2009). To show statistical significance, the performance of each experiment is the average of 10 trials.
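Both metrics are easy to compute with standard tooling; a sketch, using scikit-learn for AMI and Hungarian matching for ACC:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import adjusted_mutual_info_score

def evaluate(pred, gold, K):
    """AMI via scikit-learn; ACC as accuracy under the best one-to-one
    matching between predicted clusters and gold labels."""
    ami = adjusted_mutual_info_score(gold, pred)
    count = np.zeros((K, K))
    for p, y in zip(pred, gold):
        count[p, y] += 1
    rows, cols = linear_sum_assignment(-count)   # maximize total agreement
    acc = count[rows, cols].sum() / len(pred)
    return ami, acc
```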

Model Properties
There are several hyper-parameters in our model, e.g., the output dimension of the text representation models and the α in Eq. (4). The choice of these hyper-parameters may affect the final performance.
In this subsection, we present some experiments to demonstrate the properties of our model, and to find a good configuration for evaluating our final model. All the experiments in this subsection were performed on the question type dataset. First, we evaluated the effect of the output dimension of the text representation models. We switched the dimension size among {50, 100, 300, 500, 1000}, and fixed the other options as follows: α = 0.5, with the filter types in the CNN model covering {unigram, bigram, trigram} and 500 filters for each type. Figure 3 presents the AMIs from both the CNN and LSTM models. We found that 100 is the best output dimension for both models. Therefore, we set the output dimension to 100 in the following experiments.
Second, we studied the effect of α in Eq. (4), which tunes the importance of the unlabeled data. We varied α among {0.00001, 0.0001, 0.001, 0.01, 0.1}, and kept the other options as in the previous experiment. Figure 4 shows the AMIs from both the CNN and LSTM models. We found that the clustering performance is poor when using a very small α. By increasing the value of α, we acquired progressive improvements, reaching the peak at α = 0.01. After that, the performance dropped. Therefore, we choose α = 0.01 in the following experiments. This result also indicates that the unlabeled data are useful for the text representation learning process. Third, we tested the influence of the size of the labeled data. We varied the ratio of labeled instances in the whole dataset between 1% and 10%, and kept the other configurations as in the previous experiment. The AMIs are shown in Figure 5. We can see that the more labeled data we use, the better the performance. Therefore, the labeled data are quite useful for the clustering process.
Fourth, we checked the effect of the pre-training strategy for our models. We added a softmax layer on top of our CNN and LSTM models, where the size of the output layer is equal to the number of labels in the dataset. We then trained the model on the classification task using all the labeled data. After this process, we removed the top layer, and used the remaining parameters to initialize our CNN and LSTM models. The performance of our models with and without the pre-training strategy is given in Figure 6. We can see that the pre-training strategy is quite effective for our models. Therefore, we use the pre-training strategy in the following experiments.
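A minimal sketch of this pre-training step for a PyTorch encoder like those in Section 2; the loop details (no batching, fixed epoch count) are illustrative simplifications:

```python
import torch
import torch.nn as nn

def pretrain(encoder, out_dim, num_labels, S_labeled, y, epochs=20):
    """Train encoder + temporary softmax layer on the labeled data,
    then discard the softmax layer and keep the encoder parameters."""
    head = nn.Linear(out_dim, num_labels)          # temporary softmax layer
    opt = torch.optim.Adam(list(encoder.parameters()) +
                           list(head.parameters()))
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(head(encoder(S_labeled)), y)
        loss.backward()
        opt.step()
    # The head is dropped; the encoder initializes the clustering model.
    return encoder
```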

Comparing with other Models
In this subsection, we compare our method with some representative systems. We implemented a series of clustering systems, all based on the k-means algorithm, but representing short texts differently: bow represents each text as a bag-of-words vector.
tf-idf represents each text as a TF-IDF vector.
average-vec represents each text with the average of all word vectors within the text.
metric-learn-bow employs the metric learning method proposed by Weinberger et al. (2005), and learns to project a bag-of-words vector into a 300-dimensional vector based on labeled data.
metric-learn-idf uses the same metric learning method, and learns to map a TF-IDF vector into a 300-dimensional vector based on labeled data.
metric-learn-ave-vec also uses the metric learning method, and learns to project an averaged word vector into a 100-dimensional vector based on labeled data.
We designed two classifiers (cnn-classifier and lstm-classifier) by adding a softmax layer on top of our CNN and LSTM models. We trained these two classifiers with the labeled data, and used them to predict labels for the unlabeled data. We also built two text representation models ("cnn-represent." and "lstm-represent.") by setting the parameters of our CNN and LSTM models to the corresponding parameters of cnn-classifier and lstm-classifier. Then, we used them to represent short texts as vectors, and applied the k-means algorithm for clustering.
Table 4 summarizes the results of all systems on each dataset, where "semi-cnn" is our semi-supervised clustering algorithm with the CNN model, and "semi-lstm" is our semi-supervised clustering algorithm with the LSTM model. We grouped all the systems into three categories: unsupervised (Unsup.), supervised (Sup.), and semi-supervised (Semi-sup.). All clustering systems are based on the same number of instances (total# in Table 3); for the semi-supervised and supervised systems, the labels for 1% of the instances are given (labeled# in Table 3), and the evaluation was conducted only on the unlabeled portion. We found that the supervised systems worked much better than the unsupervised counterparts, which implies that the small amount of labeled data is necessary for better performance. We also noticed that, within the supervised systems, those using deep learning (CNN or LSTM) models worked better than those using the metric learning method, which shows the power of deep learning models for short text modeling. Our "semi-cnn" system achieved the best performance on almost all the datasets.
Figure 7 visualizes the clustering results on the question type dataset from four representative systems. In Figure 7(a), the clusters severely overlap with each other. When using the CNN sentence representation model, we can clearly identify all clusters in Figure 7(b), but the boundaries between clusters are still obscure. The clustering results from our semi-supervised clustering algorithm are given in Figure 7(c) and Figure 7(d). We can see that the boundaries between clusters become much clearer. Therefore, our algorithm is very effective for short text clustering.

Related Work
Existing semi-supervised clustering methods fall into two categories: constraint-based and representation-based. In constraint-based methods (Davidson and Basu, 2007), labeled information is used to constrain the clustering process. In representation-based methods (Bair, 2013), a representation model is first trained to satisfy the labeled information, and all data points are then clustered based on the representations from that model. Bilenko et al. (2004) proposed to integrate these two methods into a unified framework, which shares the same idea as our proposed method. However, they only employed a metric learning model for representation learning, which is a linear projection, whereas our method utilizes deep learning models to learn representations in a more flexible, non-linear space. Xu et al. (2015) also employed deep learning models for short text clustering. However, their method separates the representation learning process from the clustering process, so it belongs to the representation-based category, whereas our method combines the representation learning process and the clustering process, and utilizes both labeled and unlabeled data for representation learning and clustering.

Conclusion
In this paper, we proposed a semi-supervised clustering algorithm for short texts. We utilized deep learning models to learn representations for short texts, and employed a small amount of labeled data to specify our intention for clustering. We integrated the representation learning process and the clustering process into a unified framework, so that both processes benefit from the labeled and unlabeled data. Experimental results on four datasets show that our method is more effective than the other competitors.

Figure 1: CNN for text representation learning.

Figure 3: Influence of the short text representation model, where the x-axis is the output dimension of the text representation models.

Figure 6: Influence of the pre-training strategy.

Table 1: Examples for short text clustering.
(a) What's the color of apples?
(b) When will this apple be ripe?
(c) Do you like apples?
(d) What's the color of oranges?
(e) When will this orange be ripe?
(f) Do you like oranges?

Table 3: Statistics for the short text datasets.

Table 4: Performance of all systems on each dataset.