Neural Generative Question Answering

This paper presents an end-to-end neural network model, named Neural Generative Question Answering (GENQA), that can generate answers to simple factoid questions based on the facts in a knowledge-base. More specifically, the model is built on the encoder-decoder framework for sequence-to-sequence learning, while equipped with the ability to enquire the knowledge-base, and is trained on a corpus of question-answer pairs with their associated triples in the knowledge-base. Empirical study shows that the proposed model can effectively deal with the variations of questions and answers, and generate right and natural answers by referring to the facts in the knowledge-base. The experiment on question answering demonstrates that the proposed model outperforms an embedding-based QA model as well as a neural dialogue model trained on the same data.


Introduction
Question answering (QA) can be viewed as a special case of single-turn dialogue: QA aims at providing correct answers to questions in natural language, while dialogue emphasizes generating relevant and fluent responses to messages, also in natural language [Shang et al., 2015; Vinyals and Le, 2015]. Recent progress in deep learning has raised the possibility of realizing generation-based QA in a purely neural way. That is, the answer is generated by a neural network (e.g., a recurrent neural network, or RNN) based on the question, which makes it possible to handle the flexibility and diversity of language. More importantly, the model is trained in an end-to-end fashion, and thus there is no need to build the system using linguistic knowledge, e.g., by creating a semantic parser.
There is however one serious limitation of this generation-based approach to QA. It is practically impossible to store all the knowledge in a neural network to achieve a desired precision and coverage in real-world QA. This is a fundamental difficulty, rooted deeply in the way in which knowledge is acquired, represented and stored. The neural network, and more generally the fully distributed way of representation, is good at representing smooth and shared patterns, i.e., modeling the flexibility and diversity of language, but ill-suited for representing discrete and isolated concepts, i.e., depicting the lexicon of language.

* The work was done when the first author was an intern at Noah's Ark Lab, Huawei Technologies.
On the other hand, the recent success of memory-based neural network models has greatly extended the ways of storing and accessing text information, in both short-term memory (e.g., in [Bahdanau et al., 2015]) and long-term memory (e.g., in [Weston et al., 2015]). It is hence a natural choice to connect a neural model for QA with a neural model of the knowledge-base stored in an external memory, which is also related to the traditional approach of template-based QA from a knowledge-base.
In this paper, we report our exploration in this direction, with a proposed model called Neural Generative Question Answering (GENQA). The model can generate answers to simple factoid questions by accessing a knowledge-base. More specifically, the model is built on the encoder-decoder framework for sequence-to-sequence learning, while equipped with the ability to enquire a knowledge-base. Its specifically designed decoder, controlled by another neural network, can switch between generating a common word (e.g., is) and outputting a term (e.g., "John Malkovich") retrieved from the knowledge-base, each with a certain probability. The model is trained on a dataset composed of real-world question-answer pairs associated with triples in the knowledge-base, in which all components of the model are jointly tuned. Empirical study shows that the proposed model can effectively capture the variation of language and generate right and natural answers to questions by referring to the facts in the knowledge-base. The experiment on question answering demonstrates that the proposed model outperforms an embedding-based QA model as well as a neural dialogue model trained on the same data.

The learning task
We formalize generative question answering as a supervised learning task, or more specifically a sequence-to-sequence learning task. A generative QA system takes a sequence of words as the input question and generates another sequence of words as the output answer. In order to provide right answers, the system is connected with a knowledge-base (KB), which contains facts. During the process of answering, the system queries the KB, retrieves a set of candidate facts, and generates a correct answer to the question using the right fact. The generated answer may contain two types of "words": common words for composing the answer (referred to as common words) and specialized words in the KB denoting the answer (referred to as KB-words). To learn a model for the task, we assume that each training instance consists of a question-answer pair with the KB-word specified in the answer. In this paper, we only consider the case of simple factoid questions, where each question-answer pair is associated with a single fact (i.e., one triple) of the KB. Without loss of generality, we focus on forward-relation QA, where the question is about the subject and predicate of the triple and the answer is from the object. Table 1 shows some examples of the training instances.
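As an illustration, one training instance can be sketched as follows. The example data and the `<kb>` marker are our own illustrative assumptions, not the actual corpus format:

```python
# A minimal, illustrative sketch of one GENQA training instance:
# a question-answer pair grounded in a single KB triple, with the
# KB-word slot marked in the answer.
instance = {
    "question": "How tall is Yao Ming?",
    "answer": ["He", "is", "<kb>", "tall", "."],  # "<kb>" marks the KB-word slot
    "triple": ("Yao Ming", "height", "2.29m"),    # (subject, predicate, object)
}

def kb_word(inst):
    """For forward-relation QA, the KB-word comes from the object field."""
    return inst["triple"][2]
```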

Data
To facilitate research on the task of generative QA, we create a new dataset by collecting data from the web. First, we build a knowledge-base by mining three Chinese encyclopedia web sites 1 . Specifically, we extract entities and associated triples (subject, predicate, object) from the structured parts (e.g., HTML tables) of the web pages at those sites. The extracted data is then normalized and aggregated to form a knowledge-base. In this paper we sometimes refer to an item of a triple as a constituent of the knowledge-base. Second, we collect question-answer pairs from two Chinese community QA sites 2 . Table 2 shows the statistics of the knowledge-base and the QA-pairs.
We automatically and heuristically construct training and test data for generative QA by "grounding" the QA pairs with the triples in the knowledge-base. Specifically, for each QA pair, a list of candidate triples whose subject fields appear in the question is retrieved using the Aho-Corasick string searching algorithm. The triples in the candidate list are then judged by a series of rules for relevance to the QA pair. The basic requirement for relevance is that the answer contains the object of the triple, which specifies the KB-word in the answer. Besides, we use additional scoring and filtering rules, attempting to find the triple that truly matches the QA pair, if there is any. As a result, 720K instances (tuples of question, answer, triple) are finally obtained, with an estimated accuracy of 80%, i.e., 80% of the instances have truly correct grounding. The data is publicly available online 3 . The data is further randomly partitioned into a training dataset and a test dataset by using the triple as the partition key. In this way, all the questions in the test data concern facts (triples) unseen in the training data. Table 3 shows some statistics of the datasets. By comparing the numbers of triples in Table 2 and Table 3, we can see that a large portion of the facts in the knowledge-base are not present in the training and test data, which demonstrates the necessity for the system to generalize to unseen facts.
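The basic relevance requirement of the grounding step can be sketched as follows; a naive substring scan stands in for the Aho-Corasick matcher, and the triples are invented for illustration:

```python
def ground(question, answer, kb):
    """Toy grounding: keep triples whose subject occurs in the question
    (the paper uses Aho-Corasick for this step) and whose object occurs
    in the answer (the basic relevance requirement). The paper's extra
    scoring/filtering rules are omitted."""
    candidates = [t for t in kb if t[0] in question]
    return [t for t in candidates if t[2] in answer]

kb = [
    ("Yao Ming", "height", "2.29m"),
    ("Yao Ming", "birthplace", "Shanghai"),
]
grounded = ground("How tall is Yao Ming?", "He is 2.29m tall.", kb)
```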
The key challenge in learning for generative QA is to find a way to jointly train the neural network model to conduct understanding of the question, generation of the answer, and retrieval of relevant facts in the KB, in a single and unified framework. To make things even harder, the training data is noisy and informal, with typos, nonstandard expressions, and a wide range of language variations, which can prevent the system from acquiring the right question-answer patterns.

The GENQA Model
Let Q = (x_1, ..., x_{T_Q}) and Y = (y_1, ..., y_{T_Y}) denote the natural language question and answer, respectively. The knowledge-base is organized as a set of triples (subject, predicate, object), each denoted as τ = (τ_s, τ_p, τ_o). Inspired by the work on the encoder-decoder framework for neural machine translation [Cho et al., 2014b; Sutskever et al., 2014; Bahdanau et al., 2015] and neural natural language dialogue [Shang et al., 2015; Vinyals and Le, 2015; Serban et al., 2015], and the work on question answering with knowledge-base, we propose an end-to-end neural network model for generative QA, named GENQA, which is illustrated in Figure 1.
The GENQA model consists of Interpreter, Enquirer, Answerer, and an external knowledge-base. Answerer further consists of Attention Model and Generator. Basically, Interpreter transforms the natural language question Q into a representation H_Q and saves it in the short-term memory. Enquirer takes H_Q as input to interact with the knowledge-base in the long-term memory, retrieves relevant facts (triples) from the knowledge-base, and summarizes the result in a vector r_Q. Answerer feeds on the question representation H_Q (through the Attention Model) as well as the vector r_Q, and generates an answer with Generator. We elaborate on each component hereafter.

Interpreter
Given the question represented as the word sequence Q = (x_1, ..., x_{T_Q}), Interpreter encodes it to an array of vector representations. In our implementation, we adopt a bidirectional recurrent neural network (RNN) as in [Bahdanau et al., 2015], which processes the sequence forward and backward using two independent RNNs (here we use gated recurrent units (GRU) [Chung et al., 2014]). By concatenating the hidden states (denoted as (h_1, ..., h_{T_Q})), the embeddings of the words (denoted as (x_1, ..., x_{T_Q})), and the one-hot representations of the words, we obtain an array of vectors H_Q. This array of vectors is saved in the short-term memory, allowing for further processing by Enquirer and Answerer.
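The construction of the short-term memory can be sketched as follows, with toy low-dimensional vectors in place of the 500-dimensional GRU states and 300-dimensional embeddings:

```python
def build_short_term_memory(fwd, bwd, emb, onehot):
    """Form H_Q: for each position t, concatenate the forward GRU state,
    the backward GRU state, the word embedding, and the one-hot word
    vector (all toy lists here)."""
    return [f + b + e + o for f, b, e, o in zip(fwd, bwd, emb, onehot)]

# Two-word question with 1-d states/embeddings and a 2-word vocabulary.
H_Q = build_short_term_memory(
    fwd=[[0.1], [0.2]],
    bwd=[[0.3], [0.4]],
    emb=[[0.5], [0.6]],
    onehot=[[1, 0], [0, 1]],
)
```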

Enquirer
Enquirer "fetches" relevant facts from the knowledge-base with H_Q (as illustrated in Figure 2). Enquirer first performs term-level matching (similar to the method of associating question-answer pairs with triples described in Section 2) to retrieve a list of relevant candidate triples, denoted as T_Q = (τ_1, ..., τ_{K_Q}). K_Q is the number of candidate triples, which is at most several hundred in our data. After obtaining T_Q, Enquirer then evaluates the relevance of each candidate triple to the question in the embedded space [Bordes et al., 2014b; Bordes et al., 2014a].
More specifically, Enquirer calculates the relevance (matching) scores between the question and the K_Q triples. For question Q, the scores are represented in a K_Q-dimensional vector r_Q, where the k-th element of r_Q is defined as the probability

r_Q^k = exp(S(Q, τ_k)) / Σ_{k'=1}^{K_Q} exp(S(Q, τ_{k'})),

where S(Q, τ_k) denotes the matching score between question Q and triple τ_k. The probabilities in r_Q are further taken into the probabilistic model in Answerer for generating an answer. Since r_Q is of modest size, the number of triples involved in the matching-score calculation is limited and the process is efficient. This is particularly true in the learning phase, in which the parameters can be efficiently optimized from the supervision signals through backpropagation.
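The normalization that produces r_Q from the raw matching scores is a softmax, which can be sketched as:

```python
import math

def relevance_vector(scores):
    """r_Q: softmax over the matching scores S(Q, tau_k) of the K_Q
    candidate triples (max subtracted for numerical stability)."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

r_Q = relevance_vector([1.0, 2.0, 0.5])
```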
In this work, we provide two implementations for Enquirer to calculate the matching scores between question and triples.
Bilinear Model The first implementation simply takes the average of the word-embedding vectors in H_Q as the representation of the question (with the result denoted as x̄_Q). For each triple τ in the knowledge-base, it takes the average of the embeddings of its subject and predicate as the representation of the triple (denoted as u_τ). Then the matching score is defined as

S(Q, τ) = x̄_Q^T M u_τ,

where M is a matrix parameterizing the matching between the question and the triple.
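The bilinear score can be sketched with toy two-dimensional representations:

```python
def bilinear_score(x_q, M, u_tau):
    """S(Q, tau) = x_q^T M u_tau, with x_q (averaged question embedding)
    and u_tau (averaged subject/predicate embedding) as lists and M a
    matrix given as a list of rows."""
    Mu = [sum(row[j] * u_tau[j] for j in range(len(u_tau))) for row in M]
    return sum(x_q[i] * Mu[i] for i in range(len(x_q)))

# With M = identity the score reduces to a plain inner product.
s = bilinear_score([1.0, 2.0], [[1.0, 0.0], [0.0, 1.0]], [3.0, 4.0])
```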

CNN-based Matching Model
The second implementation employs a convolutional neural network (CNN) for modeling the matching score between the question and the triple, as in [Hu et al., 2014] and [Shen et al., 2014]. Specifically, the question is fed to a convolutional layer followed by a max-pooling layer, and summarized as a fixed-length vector, denoted as ĥ_Q. Then ĥ_Q and u_τ (again the average of the embeddings of the corresponding subject and predicate) are concatenated and fed to a multi-layer perceptron (MLP) to produce the matching score

S(Q, τ) = f_MLP([ĥ_Q; u_τ]).

For this model, the parameters consist of those in the CNN and the MLP.
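The CNN-based scorer can be sketched as follows; the scalar input sequence, window size of 2, and the single-hidden-unit tanh MLP are toy simplifications of the real model:

```python
import math

def conv_maxpool(seq, w, b):
    """Toy 1-D convolution over a scalar sequence (window = len(w))
    followed by max-over-time pooling; stands in for the CNN summary
    of the question."""
    n = len(w)
    feats = [sum(w[i] * seq[t + i] for i in range(n)) + b
             for t in range(len(seq) - n + 1)]
    return max(feats)

def cnn_match_score(seq, u_tau, w, b, mlp_w, mlp_b):
    """Concatenate the pooled question feature with a (toy, scalar)
    triple representation u_tau and score with a one-layer tanh MLP."""
    h_q = conv_maxpool(seq, w, b)
    return math.tanh(mlp_w[0] * h_q + mlp_w[1] * u_tau + mlp_b)

score = cnn_match_score([1.0, 0.0, 2.0, 1.0], 5.0,
                        w=[1.0, 1.0], b=0.0,
                        mlp_w=[1.0, 0.0], mlp_b=0.0)
```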

Answerer
Answerer uses an RNN to generate an answer based on the information of the question saved in the short-term memory (represented as H_Q) and the relevant facts retrieved from the long-term memory (indexed by r_Q), as illustrated in Figure 3. The probability of generating the answer Y = (y_1, y_2, ..., y_{T_Y}) is defined as

p(Y | H_Q, r_Q; θ) = Π_{t=1}^{T_Y} p(y_t | y_1, ..., y_{t-1}, H_Q, r_Q; θ),

where θ represents the parameters in the GENQA model. The conditional probability in the RNN model (with hidden states s_1, ..., s_{T_Y}) is specified by

p(y_t | y_1, ..., y_{t-1}, H_Q, r_Q; θ) = p(y_t | y_{t-1}, s_t, H_Q, r_Q; θ).
In generating the t-th word y_t in the answer, the probability is given by the following mixture model:

p(y_t | y_{t-1}, s_t, H_Q, r_Q; θ) = p(z_t = 0 | s_t; θ) p(y_t | y_{t-1}, s_t, H_Q, z_t = 0; θ) + p(z_t = 1 | s_t; θ) p(y_t | r_Q, z_t = 1; θ),

which sums the contributions from the "language" part and the "knowledge" part, with the coefficient p(z_t | s_t; θ) realized by a logistic regression model with s_t as input. Here the latent variable z_t indicates whether the t-th word is generated from the common vocabulary (z_t = 0) or the KB vocabulary (z_t = 1). In this work, the KB vocabulary contains all the objects of the candidate triples associated with the particular question. For any word y_t that appears only in the KB vocabulary, e.g., "2.29m", we have p(y_t | y_{t-1}, s_t, H_Q, z_t = 0; θ) = 0, while for y_t that does not appear in the KB, e.g., "and", we have p(y_t | r_Q, z_t = 1; θ) = 0. Some words (e.g., "Shanghai") appear in both the common vocabulary and the KB vocabulary, for which the probability contains non-trivial contributions from both parts.
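The mixture can be sketched with toy per-branch distributions; the word probabilities below are invented for illustration:

```python
def mixture_prob(word, p_common, p_kb, p_z1):
    """p(y_t) = p(z_t=0) * p_common(y_t) + p(z_t=1) * p_kb(y_t).
    A word absent from a vocabulary contributes zero from that branch."""
    return (1.0 - p_z1) * p_common.get(word, 0.0) + p_z1 * p_kb.get(word, 0.0)

p_common = {"and": 0.5, "Shanghai": 0.1}  # common-vocabulary distribution
p_kb = {"Shanghai": 0.8, "2.29m": 0.2}    # KB-vocabulary distribution (from r_Q)
```

A word like "Shanghai" draws probability from both branches, while "and" draws only from the language part and "2.29m" only from the knowledge part.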
In generating common words, Answerer acts in the same way as the RNN decoder in [Bahdanau et al., 2015], with information from H_Q selected by the attention model. Specifically, the hidden state at step t is computed as s_t = f_s(y_{t-1}, s_{t-1}, c_t) and p(y_t | y_{t-1}, s_t, H_Q, z_t = 0; θ) = f_y(y_{t-1}, s_t, c_t), where c_t is the context vector computed as a weighted sum of the hidden states stored in the short-term memory H_Q.
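The context-vector computation can be sketched as follows, with a plain dot-product scoring function standing in for the learned attention scorer:

```python
import math

def attention_context(s_prev, H, score):
    """c_t: softmax-normalized, score-weighted sum of the encoder
    states in H (toy vectors; score is any function of (s_prev, h))."""
    e = [score(s_prev, h) for h in H]
    m = max(e)
    a = [math.exp(x - m) for x in e]
    z = sum(a)
    a = [x / z for x in a]
    return [sum(a[k] * H[k][d] for k in range(len(H)))
            for d in range(len(H[0]))]

dot = lambda s, h: sum(si * hi for si, hi in zip(s, h))
# Two identical encoder states: the context equals that state.
c = attention_context([1.0, 0.0], [[1.0, 2.0], [1.0, 2.0]], dot)
```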
In generating KB-words via p(y_t | r_Q, z_t = 1; θ), Answerer simply employs the model p(y_t = k | r_Q, z_t = 1; θ) = r_Q^k. The better a triple matches the question, the more likely the object of that triple is to be selected.

Training
The parameters to be learned include the weights in the RNNs for Interpreter and Answerer, the parameters in Enquirer (either the matrix M or the weights in the convolutional layer and the MLP), and the word-embeddings, which are shared by the Interpreter RNN and the knowledge-base. GENQA, although essentially containing a retrieval operation, can be trained in an end-to-end fashion by maximizing the likelihood of the observed data, since the mixture form of the probability in Answerer provides a unified way to generate words from the common vocabulary and the KB vocabulary. Specifically, given the training data D = {(Q^(i), Y^(i), T_Q^(i))}, the optimal parameters are obtained by minimizing the negative log-likelihood with regularization on all the parameters:

L(θ) = - Σ_i log p(Y^(i) | Q^(i), T_Q^(i); θ) + λ ||θ||²,

where λ is the regularization coefficient. In practice the model is trained on machines with GPUs by using stochastic gradient descent with mini-batches.
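The objective can be sketched as follows; the per-instance answer probabilities are assumed to be already computed by the model, and the regularization coefficient is an illustrative choice:

```python
import math

def genqa_objective(answer_probs, theta, lam=1e-5):
    """Negative log-likelihood of the observed answers plus L2
    regularization on the parameters theta (flattened to a list here)."""
    nll = -sum(math.log(p) for p in answer_probs)
    reg = lam * sum(w * w for w in theta)
    return nll + reg
```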

Implementation details
The Chinese texts in the data are converted into sequences of words using the Jieba Chinese word segmenter. Since the word distributions of questions and answers are different, we use different vocabularies for them. Specifically, for questions we use the most frequent 30K words in the questions plus all the words in the predicates of the triples, covering 98.4% of the word usages in the questions. For answers, we use the most frequent 30K words in the answers, with a coverage of 97.3%. All out-of-vocabulary words are replaced by a special token "UNK". The dimensions of the hidden states of the encoder and decoder are both set to 500, and the dimension of the word-embeddings is set to 300. Our models are trained on an NVIDIA Tesla K40 GPU using Theano [Bastien et al., 2012; Bergstra et al., 2010], with a mini-batch size of 80. The training of each model takes about two to three days.

Comparison Models
Since, to the best of our knowledge, there is no previous work on generative QA, we choose three baseline methods: a neural dialogue model, a retrieval-based QA model, and an embedding-based QA model, respectively corresponding to the generative aspect and the KB-retrieval aspect of GENQA. Neural Responding Machine (NRM): NRM [Shang et al., 2015] is a neural-network-based generative model specially designed for short-text conversation. We train the NRM model with the question-answer pairs in the training data, using the same vocabulary as GENQA. Since NRM does not access the knowledge-base during training or testing, it effectively memorizes all the knowledge from the QA pairs in its model parameters.
Retrieval-based QA: the knowledge-base is indexed by an information retrieval system (we use Apache Solr), in which each triple is treated as a document. At test time, the question is used as the query and the top-retrieved triple is returned as the answer. Note that this method cannot generate natural language answers.
Embedding-based QA: as proposed by [Bordes et al., 2014a; Bordes et al., 2014b], the model is learned from the question-triple pairs in the training data. The model learns to map questions and knowledge-base constituents into the same embedding space, where the similarity between a question and a triple is computed as the inner product of the two embedding vectors. Different from the cross-entropy loss used in GENQA, this model uses a ranking loss:

L = Σ max(0, m - S(Q, τ) + S(Q, τ′)),

where τ and τ′ represent the positive and negative triples corresponding to the question, and m is the margin. Like the retrieval-based QA, this model cannot generate natural language answers either.
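The margin ranking loss of this baseline can be sketched as follows (the margin value is an illustrative choice):

```python
def ranking_loss(pos_scores, neg_scores, margin=1.0):
    """Sum of hinge terms max(0, margin - S(Q, tau+) + S(Q, tau-)) over
    paired positive/negative triple scores: the loss is zero once each
    positive triple outscores its negative by at least the margin."""
    return sum(max(0.0, margin - sp + sn)
               for sp, sn in zip(pos_scores, neg_scores))
```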
Since we have two implementations of the Enquirer of the GENQA model, we denote the one using the bilinear model as GENQA and the one using the CNN and MLP as GENQA_CNN.

Results
We evaluate the performance of the models in terms of 1) accuracy, i.e., the ratio of correctly answered questions, and 2) the fluency of the answers. In order to ensure an accurate evaluation, we randomly select 300 questions from the test set, and manually remove the near-duplicate cases and filter out the mistaken cases (e.g., non-factoid questions).
Accuracy Table 4 shows the accuracies of the models on the test set. NRM has the lowest accuracy, showing its lack of ability to accurately remember the answers and to generalize to questions unseen in the training data. For example, to the question "Which country does Xavi play for as a midfielder?" (translated from Chinese), NRM gives the wrong answer "He plays for France" (translated from Chinese), since the athlete actually plays for Spain. The retrieval-based method achieves moderate accuracy, but like most string-matching methods it suffers from word mismatch between the question and the triples in the KB. The embedding-based QA model achieves higher accuracy on the test set, thanks to the generalization ability of distributed representations. GENQA and GENQA_CNN both outperform the competitors, showing that GENQA can further benefit from the end-to-end training of sequence-to-sequence learning. We conjecture that the task of generating appropriate answers may help the learning of the word-embeddings of questions. Of the two GENQA variants, GENQA_CNN achieves the best accuracy, getting over half of the questions right. An explanation is that the convolution layer helps to capture salient features in matching. The experimental results demonstrate the ability of the GENQA models to find the right answers from the KB even with regard to new facts. For example, to the example question mentioned above, GENQA gives the correct answer "He plays for Spain".
Fluency We make some empirical comparisons and find no significant difference between NRM and GENQA in terms of the fluency of answers. In general, all three generation-based models (two of them ours) yield correct sentences most of the time. Figure 4 gives some examples of answers generated by our GENQA models to questions in the test set, with the underlined words generated from the KB. Clearly, the model can smoothly blend KB-words and common words in a sentence, thanks to the unified neural model that learns to determine the right time to place a KB-word or a common word. We notice that most of the generated answers are short sentences, for which there are two possible reasons: 1) many answers to the factoid questions in the dataset are short, and 2) we select the answer for each question by beam-searching for the sequence with maximum log-likelihood normalized by its length, which generally prefers a short answer. Examples 1 to 4 show correctly generated answers, for which the model not only matches the right triples (and thus generates the right KB-words), but also generates suitable common words surrounding them. However, in some cases such as examples 5 and 6, even though the right triples are found, the surrounding common words are improper or incorrect from the knowledge-base point of view (e.g., in example 6 the author "Jonathan Swift" is from Ireland rather than France). By investigating the correctly generated answers on the test data, we find that roughly 8% of them have improper surrounding words. In some other cases, the model fails to match the correct triple with the question, which produces a completely wrong answer. In example 7, the question is about the release date of a movie, while the model finds its distributor and generates an answer incorrect both in terms of fact and language.

Related Work
Our work is inspired by recent work on neural machine translation and neural natural language dialogue. Most neural translation models fall into the encoder-decoder framework [Cho et al., 2014b; Cho et al., 2014a; Sutskever et al., 2014], where the encoder summarizes the input sequence into a sequence of vector representations and the decoder generates the output sequence from them. Bahdanau et al. [2015] introduce the attention mechanism into the framework, and their system, known as RNNsearch, can jointly learn alignment and translation and significantly improve translation quality. This framework has also been used in natural language dialogue [Shang et al., 2015; Vinyals and Le, 2015; Serban et al., 2015; Wen et al., 2015b; Wen et al., 2015a], where the end-to-end neural dialogue model is trained on a large amount of conversation data. Although promising, neural dialogue models still have problems and limitations, e.g., the lack of a mechanism to incorporate knowledge.
Our work is also inspired by recent work on knowledgebase embedding and question answering from knowledgebase. TransE [Bordes et al., 2013] is a method that learns the embedding vectors of the entities and the relations between entities by translating from subject entities to object entities. The model for question answering learns to embed questions and constituents in knowledge-base in the same low-dimensional space, where the similarity score between a question and a triple/subgraph is computed and the top ranked triples/subgraphs are selected as answers [Bordes et al., 2014b;Bordes et al., 2014a]. Yang et al. [Yang et al., 2014] propose a method that transforms natural questions into their corresponding logical forms using joint relational embeddings, and conducts question answering by leveraging semantic associations between lexical representations and KB properties in the latent space.
Memory Networks [Sukhbaatar et al., 2015] are a recently proposed class of models that combine a large memory with a learning component that can read from and write to the memory, in order to conduct reasoning for QA. Bordes et al. [2015] present an embedding-based question answering system developed under the framework of memory networks, which shows the potential of incorporating more inference schemes in QA. Recently, Yin et al. [2015] proposed an architecture, known as Neural Enquirer, to execute natural language queries on knowledge-base tables for question answering. It is a fully neural, end-to-end network that uses distributed representations of the query and the table, and realizes the execution of a compositional query through a series of differentiable operations.

Conclusion
In this paper we have proposed an end-to-end neural network model for generative question answering. The model is built on the encoder-decoder framework for sequence-to-sequence learning, while equipped with the ability to query a knowledge-base. Empirical studies show that the proposed model is capable of generating natural and right answers to questions by referring to the facts in the knowledge-base. In the future, we plan to continue our work on question answering and dialogue, including: 1) iterative question answering: a QA system that can interact with the user to confirm/clarify/answer her questions in a multi-turn dialogue; and 2) question answering from a complex knowledge-base: a QA system that has the ability to query a complex-structured knowledge-base such as a knowledge graph.