Nonparametric Spherical Topic Modeling with Word Embeddings

Traditional topic models do not account for semantic regularities in language. Recent distributional representations of words exhibit semantic consistency over directional metrics such as cosine similarity. However, neither categorical nor Gaussian observational distributions used in existing topic models are appropriate to leverage such correlations. In this paper, we propose to use the von Mises-Fisher distribution to model the density of words over a unit sphere. Such a representation is well-suited for directional data. We use a Hierarchical Dirichlet Process for our base topic model and propose an efficient inference algorithm based on Stochastic Variational Inference. This model enables us to naturally exploit the semantic structures of word embeddings while flexibly discovering the number of topics. Experiments demonstrate that our method outperforms competitive approaches in terms of topic coherence on two different text corpora while offering efficient inference.


Introduction
Prior work on topic modeling has mostly involved the use of categorical likelihoods (Blei et al., 2003; Blei and Lafferty, 2006; Rosen-Zvi et al., 2004). Applications of topic models in the textual domain treat words as discrete observations, ignoring the semantics of the language. Recent developments in distributional representations of words (Mikolov et al., 2013; Pennington et al., 2014) have succeeded in capturing certain semantic regularities, but have not been explored extensively in the context of topic modeling. One way to employ semantic similarity is to use the Euclidean distance between word vectors, which reduces to a Gaussian observational distribution for topic modeling (Das et al., 2015). The cosine distance between word embeddings is another popular choice and has been shown to be a good measure of semantic relatedness (Mikolov et al., 2013; Pennington et al., 2014). The von Mises-Fisher (vMF) distribution is well-suited to model such directional data (Dhillon and Sra, 2003; Banerjee et al., 2005) but has not been previously applied to topic models.
In this work, we use vMF as the observational distribution. Each word can be viewed as a point on a unit sphere, with topics being canonical directions. More specifically, we use a Hierarchical Dirichlet Process (HDP) (Teh et al., 2006), a Bayesian nonparametric variant of Latent Dirichlet Allocation (LDA), to automatically infer the number of topics. We implement an efficient inference scheme based on Stochastic Variational Inference (SVI) (Hoffman et al., 2013).
We perform experiments on two different English text corpora, 20 NEWSGROUPS and NIPS, and compare against two baselines: HDP and Gaussian LDA. Our model, spherical HDP (sHDP), outperforms both baselines on the measure of topic coherence. For instance, sHDP obtains gains over Gaussian LDA of 97.5% on the NIPS dataset and 65.5% on the 20 NEWSGROUPS dataset. Qualitative inspection reveals consistent topics produced by sHDP. We also empirically demonstrate that employing SVI leads to efficient topic inference.

arXiv:1604.00126v1 [cs.CL] 1 Apr 2016
Related Work

Topic modeling and word embeddings Das et al. (2015) proposed a topic model which uses a Gaussian distribution over word embeddings. By performing inference over the vector representations of the words, their model is encouraged to group words that are semantically similar, leading to more coherent topics. In contrast, we propose to utilize von Mises-Fisher (vMF) distributions, which rely on the cosine similarity between word vectors instead of the Euclidean distance.
vMF in topic models The vMF distribution has been used to model directional data by placing points on a unit sphere (Dhillon and Sra, 2003). Reisinger et al. (2010) propose an admixture model that uses vMF to model documents represented as vectors of normalized word frequencies. This does not account for word-level semantic similarities. Unlike their method, we use vMF over word embeddings. In addition, our model is nonparametric.
Nonparametric topic models HDP and its variants have been successfully applied to topic modeling (Paisley et al., 2015; Blei, 2012; He et al., 2013); however, all of these models assume a categorical likelihood in which words are encoded as one-hot representations.

Model
In this section, we describe the generative process for documents. Rather than a one-hot representation of words, we employ normalized word embeddings (Mikolov et al., 2013) to capture the semantic meanings of the associated words. Word n from document d is represented by a normalized M-dimensional vector x_dn, and the similarity between words is quantified by the cosine of the angle between the corresponding word vectors.
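As a small illustration of this setup, once embeddings are rescaled to unit ℓ2-norm, the cosine similarity between two words reduces to a plain dot product (the vectors below are made-up values, not trained embeddings):

```python
import numpy as np

# Made-up M=4 dimensional "embeddings" for three words (illustrative only).
embeddings = np.array([
    [1.0, 2.0, 2.0, 0.0],
    [2.0, 4.0, 4.0, 0.0],  # same direction as the first word
    [0.0, 0.0, 0.0, 3.0],  # orthogonal to the first word
])

# Normalize each row to unit L2 norm, as the model assumes.
x = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)

# For unit vectors, cosine similarity is just the inner product.
cos_01 = float(x[0] @ x[1])  # parallel directions -> 1.0
cos_02 = float(x[0] @ x[2])  # orthogonal directions -> 0.0
```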
Our model is based on the Hierarchical Dirichlet Process (HDP). The model assumes a collection of "topics" that are shared across documents in the corpus. The topics are represented by topic centers µ_k ∈ R^M. Since the word vectors are normalized, each µ_k can be viewed as a direction on the unit sphere. The von Mises-Fisher (vMF) distribution is commonly used to model such directional data. The likelihood of topic k for word x_dn is:

f(x_dn; µ_k, κ_k) = C_M(κ_k) exp(κ_k µ_k^T x_dn),

where κ_k is the concentration of topic k, C_M(κ_k) = κ_k^(M/2 − 1) / ((2π)^(M/2) I_(M/2 − 1)(κ_k)) is the normalization constant, and I_ν(·) is the modified Bessel function of the first kind at order ν. Interestingly, the log-likelihood of the vMF is proportional to µ_k^T x_dn (up to an additive constant), which is exactly the cosine similarity between the two vectors. This similarity measure is also used in Mikolov et al. (2013) to measure semantic proximity.

Figure 1: Graphical representation of our spherical HDP (sHDP) model. The symbol next to each random variable denotes the parameter of its variational distribution. We assume D documents in the corpus; each document contains N_d words, and there are countably infinite topics, each represented by a center µ_k and a concentration κ_k.
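A minimal numerical sketch of the vMF log-density, assuming NumPy and SciPy are available (the function name and setup are ours, not the paper's; the scaled Bessel function `ive` is used for numerical stability):

```python
import numpy as np
from scipy.special import ive  # exponentially scaled modified Bessel I_nu


def vmf_log_density(x, mu, kappa):
    """Log-density of vMF(mu, kappa) at a unit vector x in R^M.

    log C_M(kappa) = (M/2 - 1) log kappa - (M/2) log(2 pi)
                     - log I_{M/2-1}(kappa)
    """
    M = x.shape[0]
    nu = M / 2.0 - 1.0
    # ive(nu, k) = I_nu(k) * exp(-k), so log I_nu(k) = log ive(nu, k) + k.
    log_c = nu * np.log(kappa) - (M / 2.0) * np.log(2.0 * np.pi) \
        - (np.log(ive(nu, kappa)) + kappa)
    return log_c + kappa * float(mu @ x)
```

Because log C_M(κ) does not depend on x, the log-density varies only through κ µ_k^T x, i.e., through the cosine similarity, which is the property the model exploits.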
When sampling a new document, a subset of topics determines the distribution over words. We let z_dn denote the topic selected for word n of document d. Hence, z_dn is drawn from a categorical distribution: z_dn ∼ Mult(π_d), where π_d is the proportion of topics for document d. We draw π_d from a Dirichlet Process, which enables us to estimate the number of topics from the data. The generative process for a new document is as follows:

β ∼ GEM(γ)
π_d ∼ DP(α, β)
z_dn ∼ Mult(π_d)
x_dn ∼ vMF(µ_{z_dn}, κ_{z_dn}),

where GEM(γ) is the stick-breaking distribution with concentration parameter γ and DP(α, β) is a Dirichlet process with concentration parameter α and stick proportions β (Teh et al., 2012). We use log-normal and vMF hyper-prior distributions for the concentrations (κ_k) and centers (µ_k) of the topics, respectively. Figure 1 provides a graphical illustration of the model.

Stochastic variational inference
In the rest of the paper, we use bold symbols to denote collections of variables of the same kind. We employ stochastic variational mean-field inference (SVI) (Hoffman et al., 2013) to estimate the posterior distributions of the latent variables. SVI enables us to sequentially process batches of documents, which makes it appropriate for large-scale settings.
To approximate the posterior distribution of the latent variables, the mean-field approach finds the optimal parameters of the fully factorizable q (i.e., q(z, β, π, µ, κ) := q(z)q(β)q(π)q(µ)q(κ)) by maximizing the Evidence Lower Bound (ELBO),

L(q) = E_q[log p(X, z, β, π, µ, κ)] − E_q[log q(z, β, π, µ, κ)],

where E_q[·] denotes expectation with respect to q and p(X, z, β, π, µ, κ) is the joint likelihood specified by the HDP model.
The variational distributions for z, π, and µ have the following parametric forms:

q(z_dn) = Mult(z_dn | φ_dn)
q(π_d) = Dir(π_d | θ_d)
q(µ_k) = vMF(µ_k | ψ_k, λ_k),

where Dir denotes the Dirichlet distribution and φ, θ, ψ, and λ are the parameters we optimize for the ELBO. Similar to Bryant and Sudderth (2012), we view β as a parameter; hence, q(β) = δ_{β*}(β). The prior on κ is not conjugate, so its posterior does not have a closed form. Since κ is a one-dimensional variable, we use importance sampling to approximate its posterior. For a batch size of one (i.e., processing one document at a time), each global parameter is updated with the standard SVI step of Hoffman et al. (2013), interpolating between its current value and a noisy estimate computed from the sampled document scaled by D; for instance, the vMF natural parameter is updated as t ← (1 − ρ) t + ρ (t_0 + D s(x_d, φ_dk)). Here D, ω_wj, W, and ρ are the total number of documents, the count of word w in document j, the size of the dictionary, and the step size, respectively; t is the natural parameter of the vMF, t_0 is the natural parameter of its prior, and s(x_d, φ_dk) computes the sufficient statistics of the vMF distribution of topic k.
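Since κ is one-dimensional, its posterior can be approximated by self-normalized importance sampling, as described above. The sketch below demonstrates the generic estimator on a toy unnormalized target (the function and toy densities are our own illustration, not the paper's exact update):

```python
import numpy as np

rng = np.random.default_rng(0)


def is_posterior_mean(log_target, proposal_sample, proposal_logpdf, n=20000):
    """Self-normalized importance-sampling estimate of E[kappa] under an
    unnormalized one-dimensional posterior log_target."""
    k = proposal_sample(n)
    log_w = log_target(k) - proposal_logpdf(k)
    log_w -= log_w.max()          # stabilize before exponentiating
    w = np.exp(log_w)
    w /= w.sum()                  # self-normalization
    return float(np.sum(w * k))


# Toy check: unnormalized Gaussian target with mean 3.0, wider Gaussian
# proposal. Additive constants in the log-pdfs cancel after
# self-normalization, so they are omitted.
est = is_posterior_mean(
    log_target=lambda k: -0.5 * (k - 3.0) ** 2,
    proposal_sample=lambda n: rng.normal(3.0, 2.0, size=n),
    proposal_logpdf=lambda k: -0.5 * ((k - 3.0) / 2.0) ** 2,
)
```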

We use numerical gradient ascent to optimize for β* (see Gopal and Yang (2014) for the exact forms of the relevant expressions).

Experiments
Setup We perform experiments on two different text corpora: 11,266 documents from 20 NEWSGROUPS and 1,566 documents from the NIPS corpus. We utilize 50-dimensional word embeddings trained on text from Wikipedia using word2vec. The vectors are post-processed to have unit ℓ2-norm. We evaluate our model using the measure of topic coherence (Newman et al., 2010), which has been shown to correlate well with human judgment (Lau et al., 2014). For this, we compute the Pointwise Mutual Information (PMI) using a reference corpus of 300k documents from Wikipedia. The PMI is calculated using co-occurrence statistics over pairs of words (u_i, u_j) in 20-word sliding windows:

PMI(u_i, u_j) = log [ p(u_i, u_j) / (p(u_i) p(u_j)) ].

We compare our model with two baselines: HDP and the Gaussian LDA (G-LDA) model. We ran G-LDA with various numbers of topics (k).
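The PMI computation can be sketched directly from sliding-window counts (a minimal helper with toy counts; the actual evaluation aggregates this score over pairs of top words per topic):

```python
import math


def pmi(co_count, count_i, count_j, n_windows):
    """PMI(u_i, u_j) = log( p(u_i, u_j) / (p(u_i) p(u_j)) ), with all
    probabilities estimated from sliding-window occurrence counts."""
    p_ij = co_count / n_windows
    p_i = count_i / n_windows
    p_j = count_j / n_windows
    return math.log(p_ij / (p_i * p_j))


# Toy counts over 1000 windows: a strongly associated pair vs. a pair
# that co-occurs exactly as often as chance predicts.
strong = pmi(co_count=10, count_i=10, count_j=10, n_windows=1000)
independent = pmi(co_count=1, count_i=10, count_j=100, n_windows=1000)
```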
Results Table 2 details the topic coherence averaged over all topics produced by each model. We observe that our sHDP model outperforms G-LDA by 0.08 points on 20 NEWSGROUPS and by 0.17 points on the NIPS dataset. We can also see that the individual topics inferred by sHDP make sense qualitatively and have higher coherence scores than G-LDA (Table 1). This supports our hypothesis that using the vMF likelihood helps in producing more coherent topics. sHDP produces 16 topics for 20 NEWSGROUPS and 92 topics on the NIPS dataset.

Figure 2: Normalized log-likelihood (in percentage) over a training set of 1,566 documents from the NIPS corpus. Since the log-likelihood values are not comparable for Gaussian LDA and sHDP, we normalize them to demonstrate the convergence speed of the two inference schemes for these models.
Figure 2 shows a plot of normalized log-likelihood against the runtime of sHDP and G-LDA. We calculate the normalized value of the log-likelihood by subtracting the minimum value from it and dividing by the difference between the maximum and minimum values. We can see that sHDP converges faster than G-LDA, requiring only around five iterations, while G-LDA takes longer to converge.
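The min-max rescaling described above is a one-liner:

```python
def minmax_normalize(values):
    """Rescale values to [0, 1] by subtracting the minimum and dividing by
    the max-min range, as done for the log-likelihood curves in Figure 2."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]
```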

Conclusion
Classical topic models do not account for semantic regularities in language. Recently, distributional representations of words have emerged that exhibit semantic consistency over directional metrics like cosine similarity. Neither the categorical nor the Gaussian observational distributions used in existing topic models are appropriate for leveraging such correlations. In this work, we demonstrate the use of the von Mises-Fisher distribution to model words as points on a unit sphere. We use HDP as the base topic model and propose an efficient algorithm based on Stochastic Variational Inference. Our model naturally exploits the semantic structures of word embeddings while flexibly inferring the number of topics. We show that our method outperforms competitive approaches in terms of topic coherence on two different datasets.

Table 1: Examples of top words for the most coherent topics (column-wise) inferred on the NIPS dataset by Gaussian LDA (k=40) and Spherical HDP. The last row for each model is the topic coherence (PMI) computed using Wikipedia documents as reference.