Variational Neural Discourse Relation Recognizer


Implicit discourse relation recognition is a crucial component for automatic discourse-level analysis and natural language understanding. Previous studies exploit discriminative models that are built on either powerful manual features or deep discourse representations. In this paper, instead, we explore generative models and propose a variational neural discourse relation recognizer. We refer to this model as VarNDRR. VarNDRR establishes a directed probabilistic model with a latent continuous variable that generates both a discourse and the relation between the two arguments of the discourse. In order to perform efficient inference and learning, we introduce neural discourse relation models to approximate the prior and posterior distributions of the latent variable, and employ these approximated distributions to optimize a reparameterized variational lower bound. This allows VarNDRR to be trained with standard stochastic gradient methods. Experiments on the benchmark data set show that VarNDRR can achieve comparable results against state-of-the-art baselines without using any manual features.

Introduction
Discourse relations characterize the internal structure and logical relations of a coherent text. Automatically identifying these relations not only plays an important role in discourse comprehension and generation, but also benefits many downstream natural language processing tasks. Consider two sentences joined by the discourse connective because: with the connective present, they display an explicit discourse relation CONTINGENCY which can be inferred easily. Once this discourse connective is removed, however, the discourse relation becomes implicit and difficult to recognize, because almost no surface information in the two sentences signals the relation. Successful recognition of this relation instead requires understanding the deep semantic correlation between words such as disappointed and obligation in the two arguments. Although explicit discourse relation recognition (DRR) has made great progress (Miltsakaki et al., 2005; Pitler et al., 2008), implicit DRR still remains a serious challenge due to the difficulty of semantic analysis.
Conventional approaches to implicit DRR often treat relation recognition as a classification problem, where discourse arguments and relations are regarded as the inputs and outputs respectively. Generally, these methods first generate a representation for a discourse, denoted as x (e.g., manual features in SVM-based recognition (Pitler et al., 2009; Lin et al., 2009) or sentence embeddings in neural network-based recognition (Ji and Eisenstein, 2015; Zhang et al., 2015)), and then directly model the conditional probability of the corresponding discourse relation y given x, i.e. p(y|x). In spite of their success, these discriminative approaches rely heavily on the goodness of the discourse representation x. Sophisticated and rich representations of a discourse, however, may make models suffer from overfitting, as no large-scale balanced data are available.

Figure 1: Graphical illustration for VarNDRR. Solid lines denote the generative model p_θ(x|z)p_θ(y|z); dashed lines denote the variational approximation q_φ(z|x, y) to the posterior p(z|x, y) and q_φ(z|x) to the prior p(z) for inference. The variational parameters φ are learned jointly with the generative model parameters θ.
Instead, we assume that there is a latent continuous variable z from an underlying semantic space. It is this latent variable that generates both discourse arguments and the corresponding relation, i.e. p(x, y|z). The latent variable enables us to jointly model discourse arguments and their relations, rather than conditionally model y on x. However, the incorporation of the latent variable makes the modeling difficult due to the intractable computation with respect to the posterior distribution.
Inspired by Kingma and Welling (2014) as well as Rezende et al. (2014), who introduce a variational neural inference model that handles the intractable posterior via optimizing a reparameterized variational lower bound, we propose a variational neural discourse relation recognizer (VarNDRR) with a latent continuous variable for implicit DRR in this paper. The key idea behind VarNDRR is that although the posterior distribution is intractable, we can approximate it via a deep neural network. (Throughout, we use bold symbols to denote variables and plain symbols to denote values.) Figure 1 illustrates the graph structure of VarNDRR. Specifically, there are two essential components:

• Neural discourse recognizer. As a discourse x and its corresponding relation y are independent of each other given the latent variable z (as shown by the solid lines), we can formulate the generation of x and y from z as p_θ(x, y|z) = p_θ(x|z)p_θ(y|z). The two conditional probabilities on the right-hand side are modeled via deep neural networks (see section 3.1).

• Neural latent approximator. VarNDRR assumes that the latent variable can be inferred from discourse arguments x and relations y (as shown by the dashed lines). In order to infer the latent variable, we employ a deep neural network to approximate the posterior q_φ(z|x, y) as well as the prior q_φ(z|x) (see section 3.2), which makes the inference procedure efficient. We further employ a reparameterization technique to sample z from q_φ(z|x, y), which not only bridges the gap between the recognizer and the approximator but also allows us to use standard stochastic gradient ascent techniques for optimization (see section 3.3).
The main contributions of our work lie in two aspects. 1) We exploit a generative graphical model for implicit DRR. To the best of our knowledge, this has never been investigated before. 2) We develop a neural recognizer and two neural approximators specifically for implicit DRR, which enables both recognition and inference to be efficient. We conduct a series of experiments for English implicit DRR on the PDTB-style corpus to evaluate the effectiveness of our proposed VarNDRR model. Experimental results show that our variational model achieves comparable results against several strong baselines in terms of F1 score. Extensive analysis of the variational lower bound further reveals that our model can indeed fit the data set with respect to discourse arguments and relations.

Background: Variational Autoencoder
The variational autoencoder (VAE) (Kingma and Welling, 2014; Rezende et al., 2014), which forms the basis of our model, is a generative model that can be regarded as a regularized version of the standard autoencoder. With a latent random variable z, VAE significantly changes the autoencoder architecture so as to capture the variations in the observed variable x. The joint distribution of (x, z) is formulated as follows:

p_θ(x, z) = p_θ(x|z) p_θ(z)    (1)

where p_θ(z) is the prior over the latent variable, usually equipped with a simple Gaussian distribution, and p_θ(x|z) is the conditional distribution that models the probability of x given the latent variable z. Typically, VAE parameterizes p_θ(x|z) with a highly nonlinear but flexible function approximator such as a neural network. The objective of VAE is to maximize a variational lower bound:

L_{VAE}(θ, φ; x) = -KL(q_φ(z|x) || p_θ(z)) + E_{q_φ(z|x)}[log p_θ(x|z)] ≤ log p_θ(x)    (2)

where KL(Q||P) is the Kullback-Leibler divergence between two distributions Q and P. q_φ(z|x) is an approximation of the posterior p(z|x) and usually follows a diagonal Gaussian N(µ, diag(σ²)) whose mean µ and variance σ² are parameterized by, again, neural networks conditioned on x.
To optimize Eq. (2) stochastically with respect to both θ and φ, VAE introduces a reparameterization trick that parameterizes the latent variable z via the Gaussian parameters µ and σ of q_φ(z|x):

z̃ = µ + σ ⊙ ε,  ε ∼ N(0, I)    (3)

where ε is a standard Gaussian variable and ⊙ denotes an element-wise product. Intuitively, VAE learns the representation of the latent variable not as single points, but as soft ellipsoidal regions in latent space, forcing the representation to fill the space rather than memorizing the training data as isolated representations. With this trick, the VAE model can be trained through the standard backpropagation technique with stochastic gradient ascent.
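To make the mechanics concrete, below is a minimal NumPy sketch of the reparameterized draw in Eq. (3) together with the analytic KL term of Eq. (2) for a standard Gaussian prior. The encoder outputs mu and log_var are placeholder values here; in a real VAE they would be produced by a neural network conditioned on x.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latent(mu, log_var):
    """Eq. (3): z = mu + sigma * eps with eps ~ N(0, I), so gradients can
    flow through mu and sigma rather than through the sampling operation."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """KL(q(z|x) || N(0, I)) for a diagonal Gaussian q, the closed-form
    regularization term of the VAE lower bound in Eq. (2)."""
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))

mu, log_var = np.zeros(4), np.zeros(4)   # hypothetical encoder outputs
z = sample_latent(mu, log_var)
print(z.shape, kl_to_standard_normal(mu, log_var))
```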

The VarNDRR Model
This section introduces our proposed VarNDRR model. Formally, in VarNDRR, there are two observed variables, x for a discourse and y for the corresponding relation, and one latent variable z. As illustrated in Figure 1, the joint distribution of the three variables is formulated as follows:

p_θ(x, y, z) = p_θ(x, y|z) p(z)    (4)

We begin with this distribution to elaborate the major components of VarNDRR.

Figure 2: Neural networks for the conditional probabilities p_θ(x|z) and p_θ(y|z). Gray denotes real-valued representations; white and black denote 0-1 representations.

Neural Discourse Recognizer
The conditional distribution p_θ(x, y|z) in Eq. (4) shows that both discourse arguments and the corresponding relation are generated from the latent variable. As shown in Figure 1, x is d-separated from y by z. Therefore the discourse x and the corresponding relation y are independent given the latent variable z, and the joint probability can be factorized as follows:

p_θ(x, y|z) = p_θ(x|z) p_θ(y|z)    (5)

We use a neural model q_φ(z|x) to approximate the prior p(z) conditioned on the discourse x (see the following section). With respect to the other two conditional distributions, we parameterize them via neural networks, as shown in Figure 2. Before we describe these neural networks, it is necessary to briefly introduce how discourse relations are annotated in our training data. The PDTB corpus, used as our training data, annotates implicit discourse relations between two neighboring arguments, namely Arg1 and Arg2. In VarNDRR, we represent the two arguments with bag-of-words representations, denoted x_1 and x_2.
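The following toy sketch illustrates the 0-1 bag-of-words encoding assumed for Arg1 and Arg2; the vocabulary and tokens are hypothetical, not taken from PDTB.

```python
# Binary bag-of-words indicators, one per vocabulary word type, matching the
# multivariate Bernoulli assumption on x below. Vocabulary is illustrative.
vocab = {"the": 0, "market": 1, "fell": 2, "because": 3, "rates": 4, "rose": 5}

def bag_of_words(tokens, vocab):
    x = [0.0] * len(vocab)
    for t in tokens:
        if t in vocab:
            x[vocab[t]] = 1.0      # word type present in this argument
    return x

print(bag_of_words("the market fell".split(), vocab))
```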
To model p_θ(x|z) (the bottom part in Figure 2), we project the representation of the latent variable z ∈ R^{d_z} onto a hidden layer:

h_1 = f(W_{h_1} z + b_{h_1}),  h_2 = f(W_{h_2} z + b_{h_2})    (6)

where f(·) is an element-wise activation function, such as tanh(·), which is used throughout our model. Upon this hidden layer, we further stack a Sigmoid layer to predict the probabilities of the corresponding discourse arguments:

u_1 = Sigmoid(W_{x_1} h_1 + b_{x_1}),  u_2 = Sigmoid(W_{x_2} h_2 + b_{x_2})    (7)

Here, u_1 ∈ R^{d_{x_1}} and u_2 ∈ R^{d_{x_2}} are the real-valued representations of the reconstructed x_1 and x_2 respectively. We assume that p_θ(x|z) is a multivariate Bernoulli distribution because of the bag-of-words representation. Therefore the logarithm of p_θ(x|z) is calculated as the sum over the probabilities of the words in the discourse arguments:

log p_θ(x|z) = Σ_{i=1}^{2} Σ_{j=1}^{d_{x_i}} [ x_{i,j} log u_{i,j} + (1 − x_{i,j}) log(1 − u_{i,j}) ]    (8)

where u_{i,j} is the jth element of u_i. In order to estimate p_θ(y|z) (the top part in Figure 2), we stack a softmax layer over the multilayer-perceptron-transformed representation of the latent variable z:

u_y = Softmax(W_y MLP(z) + b_y)    (9)

where u_y ∈ R^{d_y} and d_y denotes the number of discourse relations. The MLP projects the representation of the latent variable z into a d_m-dimensional space through four internal layers, each of dimension d_m.
Suppose that the true relation is y ∈ R^{d_y}; the logarithm of p_θ(y|z) is then defined as:

log p_θ(y|z) = Σ_{j=1}^{d_y} y_j log u_{y,j}    (10)

Figure 3: Neural networks for the Gaussian parameters µ and log σ in the approximate posterior q_φ(z|x, y) and prior q_φ(z|x).
In order to precisely estimate these conditional probabilities, our model will force the representation z of the latent variable to encode semantic information for both the reconstructed discourse x (Eq. (8)) and predicted discourse relation y (Eq. (10)), which is exactly what we want.
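The sketch below walks through the two decoders, Eqs. (6) through (10), in NumPy under the bag-of-words assumption. All dimensions, weight shapes, and the 4-layer MLP width d_m are illustrative assumptions rather than the paper's settings; only the Arg1 branch is shown, since the Arg2 branch is identical.

```python
import numpy as np

rng = np.random.default_rng(1)
d_z, d_h, d_x, d_y, d_m = 20, 400, 1000, 4, 100   # hypothetical sizes

def f(a):                                          # element-wise activation
    return np.tanh(a)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

# Decoder parameters, Gaussian-initialized as in the Setup section.
W_h1 = rng.normal(0, 0.01, (d_h, d_z)); b_h1 = np.zeros(d_h)
W_x1 = rng.normal(0, 0.01, (d_x, d_h)); b_x1 = np.zeros(d_x)
W_y  = rng.normal(0, 0.01, (d_y, d_m)); b_y  = np.zeros(d_y)
mlp  = [(rng.normal(0, 0.01, (d_m, d_z if i == 0 else d_m)), np.zeros(d_m))
        for i in range(4)]                         # the 4-layer MLP over z

def log_p_x_given_z(z, x1):
    """Bernoulli log-likelihood of one bag-of-words argument, Eq. (8)."""
    u1 = sigmoid(W_x1 @ f(W_h1 @ z + b_h1) + b_x1)   # Eqs. (6)-(7)
    return np.sum(x1 * np.log(u1) + (1 - x1) * np.log(1 - u1))

def log_p_y_given_z(z, y):
    """Softmax log-likelihood of the one-hot relation, Eqs. (9)-(10)."""
    h = z
    for W, b in mlp:
        h = f(W @ h + b)
    return np.sum(y * np.log(softmax(W_y @ h + b_y)))

z  = rng.standard_normal(d_z)
x1 = (rng.random(d_x) < 0.01).astype(float)          # toy bag of words
y  = np.eye(d_y)[2]                                  # toy one-hot relation
print(log_p_x_given_z(z, x1), log_p_y_given_z(z, y))
```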

Neural Latent Approximator
For the joint distribution in Eq. (5), we can define a variational lower bound that is similar to Eq. (2). The difference lies in two aspects: the approximate prior q_φ(z|x) and the approximate posterior q_φ(z|x, y). We model both distributions as multivariate Gaussians with a diagonal covariance structure:

q_φ(z|·) = N(z; µ, diag(σ²))

The mean µ and standard deviation σ of the approximate distribution are the outputs of neural networks as shown in Figure 3, where the prior and posterior have different conditions and independent parameters.
Approximate Posterior q_φ(z|x, y) is modeled conditioned on both observed variables: the discourse arguments x and the relation y. Similar to the calculation of p_θ(x|z), we first transform the input x and y into a hidden representation:

h = f(W_{hx}[x_1; x_2] + W_{hy} y + b_h)    (11)

We then obtain the Gaussian parameters of the posterior, µ and log σ², through linear regression:

µ = W_µ h + b_µ,  log σ² = W_σ h + b_σ    (12)

where µ, σ ∈ R^{d_z}. In this way, the posterior approximator can be computed efficiently.
Approximate Prior q_φ(z|x) is modeled conditioned on the discourse arguments x alone. This is based on the observation that discriminative models are able to obtain promising results using only x. It is therefore reasonable to assume that the discourse arguments encode the prior information for discourse relation recognition.
The neural model for the prior q_φ(z|x) is the same as that for the posterior q_φ(z|x, y) (i.e. Eqs. (11) and (12); see Figure 3), except for the absence of the discourse relation y. For clarity, we use µ′ and σ′ to denote the mean and standard deviation of the approximate prior.
With the parameters of the Gaussian distribution, we can access the representation z using different sampling strategies. However, traditional sampling approaches often break the connection between the recognizer and the approximator, making optimization difficult. Instead, we employ the reparameterization trick (Kingma and Welling, 2014) as in Eq. (3). During training, we sample the latent variable using z̃ = µ + σ ⊙ ε; during testing, however, we avoid this uncertainty by employing the expectation of z under the approximate prior distribution, i.e. we set z̃ = µ′.
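A minimal sketch of the approximator, Eqs. (11) and (12), and the two sampling regimes is given below. The weight names (W_hx, W_hy, W_mu, W_s) and all sizes are illustrative assumptions; note that in the paper the prior and posterior have independent parameters, whereas here, for brevity, the prior simply drops the relation term.

```python
import numpy as np

rng = np.random.default_rng(2)
d_x, d_y, d_h, d_z = 1000, 4, 400, 20          # hypothetical sizes

W_hx = rng.normal(0, 0.01, (d_h, 2 * d_x))     # acts on [x1; x2]
W_hy = rng.normal(0, 0.01, (d_h, d_y))         # dropped in the prior q(z|x)
b_h  = np.zeros(d_h)
W_mu = rng.normal(0, 0.01, (d_z, d_h)); b_mu = np.zeros(d_z)
W_s  = rng.normal(0, 0.01, (d_z, d_h)); b_s  = np.zeros(d_z)

def gaussian_params(x1, x2, y=None):
    """Eqs. (11)-(12): hidden layer, then linear maps to mu and log sigma^2.
    With y=None this stands in for the prior approximator q(z|x)."""
    h = np.tanh(W_hx @ np.concatenate([x1, x2]) + b_h
                + (W_hy @ y if y is not None else 0.0))
    return W_mu @ h + b_mu, W_s @ h + b_s

def sample_z(mu, log_var, training=True):
    if not training:
        return mu                               # testing: expectation, no noise
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps     # training: reparameterized draw

x1 = (rng.random(d_x) < 0.01).astype(float)     # toy bag-of-words arguments
x2 = (rng.random(d_x) < 0.01).astype(float)
mu, log_var = gaussian_params(x1, x2, np.eye(d_y)[0])
print(sample_z(mu, log_var).shape, sample_z(mu, log_var, training=False).shape)
```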

Parameter Learning
We employ the Monte Carlo method to estimate the expectation over the approximate posterior, E_{q_φ(z|x,y)}[log p_θ(x, y|z)]. Given a training instance (x^{(t)}, y^{(t)}), the joint training objective is defined as:

L(θ, φ; x^{(t)}, y^{(t)}) ≃ −KL(q_φ(z|x^{(t)}, y^{(t)}) || q_φ(z|x^{(t)})) + (1/L) Σ_{l=1}^{L} log p_θ(x^{(t)}, y^{(t)}|z̃^{(t,l)})    (13)

where z̃^{(t,l)} = µ + σ ⊙ ε^{(l)} with ε^{(l)} ∼ N(0, I), and L is the number of samples. The first term is the KL divergence between two Gaussian distributions, which can be computed and differentiated in closed form without estimation. Maximizing this objective will minimize the difference between the approximate posterior and prior, which justifies setting z̃ = µ′ during testing. The second term is the approximate expectation E_{q_φ(z|x,y)}[log p_θ(x, y|z)], which is also differentiable.

As the objective function in Eq. (13) is differentiable, we can optimize both the model parameters θ and the variational parameters φ jointly using standard gradient ascent techniques. The training procedure for VarNDRR is summarized in Algorithm 1.

Algorithm 1: Parameter Learning Algorithm of VarNDRR. Inputs: A, the maximum number of iterations; M, the number of instances in one batch; L, the number of samples.
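The sketch below computes the two terms of Eq. (13): the closed-form KL divergence between the two diagonal Gaussians produced by the posterior and prior approximators, and a Monte Carlo estimate of the reconstruction term with L samples. The log-likelihood argument is a stand-in for Eqs. (8) and (10).

```python
import numpy as np

rng = np.random.default_rng(3)

def kl_diag_gaussians(mu_q, log_var_q, mu_p, log_var_p):
    """KL( N(mu_q, diag(exp(log_var_q))) || N(mu_p, diag(exp(log_var_p))) )."""
    return 0.5 * np.sum(
        log_var_p - log_var_q
        + (np.exp(log_var_q) + (mu_q - mu_p) ** 2) / np.exp(log_var_p)
        - 1.0
    )

def lower_bound(mu_q, log_var_q, mu_p, log_var_p, log_lik, L=1):
    """Eq. (13): -KL(posterior || prior) + (1/L) * sum_l log p(x, y | z_l)."""
    recon = 0.0
    for _ in range(L):
        eps = rng.standard_normal(mu_q.shape)
        z = mu_q + np.exp(0.5 * log_var_q) * eps   # sample from the posterior
        recon += log_lik(z)
    return -kl_diag_gaussians(mu_q, log_var_q, mu_p, log_var_p) + recon / L

# Toy check with a dummy log-likelihood in place of Eqs. (8) and (10).
d_z = 20
mu_q, lv_q = rng.standard_normal(d_z), np.zeros(d_z)
mu_p, lv_p = np.zeros(d_z), np.zeros(d_z)
print(lower_bound(mu_q, lv_q, mu_p, lv_p, log_lik=lambda z: -np.sum(z**2), L=5))
```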

Experiments
We conducted experiments on the English implicit DRR task to validate the effectiveness of VarNDRR.

Dataset
We used the largest hand-annotated discourse corpus, PDTB 2.0 (Prasad et al., 2008) (PDTB hereafter). This corpus contains discourse annotations over 2,312 Wall Street Journal articles and is organized into different sections. We followed previous work (Pitler et al., 2009; Zhou et al., 2010; Lan et al., 2013) in preparing the data splits. In PDTB, discourse relations are annotated in a predicate-argument view: each discourse connective is treated as a predicate that takes two text spans as its arguments. The discourse relation tags in PDTB are arranged in a three-level hierarchy, where the top level consists of four major semantic classes: TEMPORAL (TEM), CONTINGENCY (CON), EXPANSION (EXP) and COMPARISON (COM). Because the top-level relations are general enough to be annotated with high inter-annotator agreement and are common to most theories of discourse, we only use this level of annotations in our experiments.
We formulated the task as four separate one-against-all binary classification problems: each top-level class vs. the other three discourse relation classes. We also balanced the training set by resampling training instances in each class until the numbers of positive and negative instances were equal. In contrast, all instances in the test and development sets were kept in their natural distribution. The statistics of the various data sets are listed in Table 1.
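The sketch below illustrates this data preparation: one-against-all binarization of the four top-level relations and oversampling of the training set until the two classes are balanced. The instance and label containers are toy stand-ins for PDTB data.

```python
import numpy as np

rng = np.random.default_rng(4)
RELATIONS = ["TEM", "CON", "EXP", "COM"]

def binarize(labels, target):
    """Map 4-way labels to 1 (target class) vs. 0 (the other three)."""
    return np.array([1 if l == target else 0 for l in labels])

def oversample_balanced(instances, binary_labels):
    """Resample the minority class (with replacement) to match the majority."""
    pos = [i for i, l in enumerate(binary_labels) if l == 1]
    neg = [i for i, l in enumerate(binary_labels) if l == 0]
    small, large = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    extra = rng.choice(small, size=len(large) - len(small), replace=True)
    keep = np.concatenate([np.arange(len(binary_labels)), extra])
    return [instances[i] for i in keep], binary_labels[keep]

labels = rng.choice(RELATIONS, size=100, p=[0.1, 0.2, 0.5, 0.2]).tolist()
insts, ys = oversample_balanced(list(range(100)), binarize(labels, "TEM"))
print(len(insts), int(ys.sum()), len(ys) - int(ys.sum()))  # balanced counts
```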

Setup
We tokenized all datasets using the Stanford NLP Toolkit. For optimization, we employed Adam in all experiments. All parameters of VarNDRR were initialized from a Gaussian distribution (µ = 0, σ = 0.01). For Adam, we set β_1 = 0.9 and β_2 = 0.999 with a learning rate of 0.001. Additionally, we tied the following parameters in practice: W_{h_1} and W_{h_2}, and W_{x_1} and W_{x_2}.
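For reference, one Adam update with the hyperparameters reported above (β_1 = 0.9, β_2 = 0.999, learning rate 0.001) looks as follows; param and grad are toy placeholders, and the sign is flipped relative to the usual descent form because the lower bound is maximized by gradient ascent.

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad                 # first-moment estimate
    v = b2 * v + (1 - b2) * grad**2              # second-moment estimate
    m_hat = m / (1 - b1**t)                      # bias correction
    v_hat = v / (1 - b2**t)
    return param + lr * m_hat / (np.sqrt(v_hat) + eps), m, v  # ascent step

param, m, v = np.zeros(3), np.zeros(3), np.zeros(3)
param, m, v = adam_step(param, grad=np.ones(3), m=m, v=v, t=1)
print(param)
```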
We compared VarNDRR against the following two baseline methods:
• SVM: a support vector machine (SVM) classifier trained with several manual features.
• SCNN: a shallow convolutional neural network proposed by Zhang et al. (2015).
We also provide results from two state-of-the-art systems:
• Rutherford and Xue (2015), who convert explicit discourse relations into additional implicit training instances.
• Ji and Eisenstein (2015), who learn distributed representations of discourse arguments via recursive neural networks.

Features used in the SVM baseline are taken from the state-of-the-art implicit discourse relation recognition model, including Bag of Words, Cross-Argument Word Pairs, Polarity, First-Last, First3, Production Rules, Dependency Rules and Brown cluster pairs (Rutherford and Xue, 2014). In order to collect bag of words, production rules, dependency rules, and cross-argument word pairs, we used a frequency cutoff of 5 to remove rare features, following Lin et al. (2009).

Classification Results
Because the development and test sets are imbalanced in terms of the ratio of positive to negative instances, we chose the widely used F1 score as our major evaluation metric. In addition, we also report precision, recall and accuracy for further analysis. Table 2 summarizes the classification results.
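For completeness, the F1 score used throughout is the harmonic mean of precision and recall over one binary (one-against-all) task, as in the minimal sketch below with toy counts.

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = 2PR / (P + R) from true-positive, false-positive, false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

print(f1_score(tp=30, fp=20, fn=10))  # toy counts: P=0.6, R=0.75, F1≈0.667
```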
From Table 2, we observe that the proposed VarNDRR outperforms SVM on COM/EXP/TEM and SCNN on EXP/COM according to their F1 scores. Although it fails on CON, VarNDRR achieves the best results on EXP and COM among the three models. Overall, VarNDRR is competitive with these two baselines. With respect to accuracy, our model does not yield substantial improvements over the two baselines. This may be because we used the F1 score, rather than accuracy, as our selection criterion on the development set. With respect to precision and recall, our model tends to produce relatively lower precision but higher recall. This suggests that the improvements of VarNDRR in terms of F1 score mostly benefit from the recall values.
Compared with the state-of-the-art results of previous work (Ji and Eisenstein, 2015; Rutherford and Xue, 2015), VarNDRR achieves comparable results in terms of F1 score. Specifically, VarNDRR outperforms Rutherford and Xue (2015) on EXP, and Ji and Eisenstein (2015) on TEM. However, the accuracy of our model fails to surpass these models. We argue that this is because both baselines use many manual features designed with prior human knowledge, while our model is purely neural.
Additionally, we find that the performance of our model grows with the number of training instances. This suggests that collecting more training instances, despite the noise they may contain, may be beneficial to our model.

Variational Lower Bound Analysis
In addition to the classification performance, the efficiency in learning and inference is another concern for variational methods. Figure 4 shows the training procedure for four tasks in terms of the variational lower bound on the training set. We also provide F1 scores on the development set to investigate the relations between the variational lower bound and recognition performance.
We find that the variational lower bound converges considerably fast in all experiments (within 100 epochs), which resonates with previous findings. However, the F1 score does not follow the trend of the lower bound and takes more time to converge. For the four discourse relations in particular, we further observe that the trajectories of the F1 score are completely different, which may suggest that the four discourse relations have different properties and distributions.
In particular, the epoch at which the best F1 score is reached also differs across the four discourse relations. This indicates that dividing implicit DRR into four separate tasks according to the type of discourse relation is reasonable, and better than performing DRR on a mixture of the four relations.

Related Work
There are two lines of research related to our work: implicit discourse relation recognition and variational neural models, which we describe in turn.
Implicit Discourse Relation Recognition. Since the release of the Penn Discourse Treebank (Prasad et al., 2008), increasing efforts have been devoted to implicit DRR, starting with classifiers built on linguistically informed features (Pitler et al., 2009). Lin et al. (2009) further incorporate context words, word pairs as well as discourse parse information into their classifier. Following this direction, several more powerful features have been exploited: entities (Louis et al., 2010), word embeddings (Braud and Denis, 2015), Brown cluster pairs and co-reference patterns (Rutherford and Xue, 2014). With these features, Park and Cardie (2012) perform feature set optimization for better feature combination.
Different from feature engineering, predicting discourse connectives can indirectly help the relation classification (Zhou et al., 2010; Patterson and Kehler, 2013). In addition, selecting explicit discourse instances that are similar to the implicit ones can enrich the training corpus for implicit DRR and yields improvements (Wang et al., 2012; Lan et al., 2013; Braud and Denis, 2014; Fisher and Simmons, 2015; Rutherford and Xue, 2015). Very recently, neural network models have also been used for implicit DRR due to their capability for representation learning (Ji and Eisenstein, 2015; Zhang et al., 2015). Despite their successes, most of these approaches are discriminative, leaving generative models for implicit DRR relatively uninvestigated. In this respect, the work most related to ours is the latent variable recurrent neural network recently proposed by Ji et al. (2016). However, our work differs from theirs significantly in three aspects: 1) they employ a recurrent neural network to represent the discourse arguments, while we use a simple feed-forward neural network; 2) they treat the discourse relations directly as latent variables, rather than as the underlying semantic representation of the discourse; 3) their model is optimized in terms of the data likelihood, since the discourse relations are observed during training, whereas VarNDRR is optimized under a variational objective.
Variational Neural Model. In the presence of continuous latent variables with intractable posterior distributions, efficient inference and learning in directed probabilistic models is required. Kingma and Welling (2014) as well as Rezende et al. (2014) introduce variational neural networks that employ an approximate inference model for the intractable posterior and a reparameterized variational lower bound for stochastic gradient optimization. Kingma et al. (2014) revisit the approach to semi-supervised learning with generative models and further develop new models that allow effective generalization from a small labeled dataset to a large unlabeled dataset. Chung et al. (2015) incorporate latent variables into the hidden state of a recurrent neural network, while Gregor et al. (2015) combine a novel spatial attention mechanism that mimics the foveation of human eyes with a sequential variational auto-encoding framework that allows the iterative construction of complex images.
We follow the spirit of these variational models, but focus on the adaptation and utilization of them onto implicit DRR, which, to the best of our knowledge, is the first attempt in this respect.

Conclusion and Future Work
In this paper, we have presented a variational neural discourse relation recognizer for implicit DRR. Different from conventional discriminative models that directly calculate the conditional probability of the relation y given discourse arguments x, our model assumes that a latent variable from an underlying semantic space generates both x and y. In order to make inference and learning efficient, we introduce a neural discourse recognizer and two neural latent approximators as our generative and inference models respectively. Using the reparameterization technique, we are able to optimize the whole model via the standard stochastic gradient ascent algorithm. Experimental results in terms of classification performance and the variational lower bound verify the effectiveness of our model.
In the future, we would like to exploit discourse instances with explicit relations for implicit DRR. For this, we can proceed in two directions: 1) converting explicit instances into pseudo-implicit instances and retraining our model; 2) developing a semi-supervised model to leverage the semantic information inside discourse arguments. Furthermore, we are also interested in adapting our model to other similar tasks, such as natural language inference.