Question-type Driven Question Generation

Question generation is a challenging task which aims to ask a question based on an answer and relevant context. Existing works suffer from a mismatch between question type and answer, e.g. generating a question of type how while the answer is a personal name. We propose to automatically predict the question type based on the input answer and context. The question type is then fused into a seq2seq model to guide question generation, so as to deal with the mismatch problem. We achieve significant improvement on the accuracy of question type prediction and obtain state-of-the-art results for question generation on both the SQuAD and MARCO datasets.


Introduction
Question generation (QG) can be effectively applied to many fields, including question answering, dialogue systems (Shum et al., 2018), and education. In this paper, we focus on answer-aware QG, which is to generate a question according to a given sentence and the expected answer.
Recently, neural approaches to QG have achieved remarkable success by exploiting large-scale reading comprehension datasets and the encoder-decoder framework. Most existing works are based on a seq2seq network incorporating an attention mechanism and a copy mode. Song et al. (2018) leverage multi-perspective matching methods, and Sun et al. (2018) propose a position-aware model to put more emphasis on answer-surrounding context words; both works try to enhance the relevance between the question and the answer. Other work aggregates paragraph-level context to provide sufficient information for question generation. Another direction is to integrate question answering and question generation as dual tasks. Beyond answer-aware QG, some systems try to generate questions from a text without an answer as input (Du and Cardie, 2017; Subramanian et al., 2018).
Despite this progress, we find that the types of generated questions are often incorrect. In our experiments on SQuAD, a strong model we replicated achieves only 57.6% accuracy on question type. Question type is vital for question generation, since it determines the question pattern and guides the generating process; if the question type is incorrect, the rest of the generated sequence is likely to drift far from the target.
Several works have addressed this issue. Sun et al. (2018) incorporate a question word generation mode to generate a question word at each decoding step, utilizing the answer information through the encoder hidden state at the answer start position. However, their method does not consider the structural and lexical features of the answer, and the way they utilize the question word is less effective than ours. Hu et al. (2018) propose a model that generates a question based on a given question type and aspect; their work verifies the effect of the question word, but fails in the conventional QG task where no question type is given, since their experimental results show poor performance when generating question types automatically. Our work solves this problem, as Section 3.3 shows. Other work devised a type decoder which combines three type-specific generation distributions (including question type) with a weighted sum. However, the results reported in that paper show that questions in dialogue are far different from questions for reading comprehension, which indicates a gap between the two tasks.

Figure 1: Structure of our unified model

In this paper, we propose a unified model that predicts the question type and generates questions simultaneously. We conduct experiments on two reading comprehension datasets, SQuAD and MARCO, and obtain promising results. On the auxiliary task, our unified model boosts the accuracy of question type prediction significantly, by 16.79% on SQuAD and 3.5% on MARCO. For question generation, our model achieves new state-of-the-art results on both datasets, with BLEU-4 of 16.31 on SQuAD and 21.59 on MARCO.

Model Description
The structure of our model is shown in Figure 1. A feature-rich encoder is used to encode the input sentence and corresponding answer that is a span of the sentence. Besides, the answer hidden states are used to predict the type of target question. This prediction will further be used to guide QG with a unified attention-based decoder.

Feature-rich Encoder
Following previous work, we exploit lexical features to enrich the encoder, where the features are composed of POS tags, NER tags, and word case. We concatenate the word embedding e_t, answer position embedding a_t, and lexical feature embedding l_t as the input x_t = [e_t; a_t; l_t]. A bidirectional LSTM is then used to produce a sequence of hidden states (h_1, ..., h_T).
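The input construction can be illustrated with a minimal sketch (not the authors' code; the dimensions below are toy values chosen for illustration):

```python
# Each encoder input x_t concatenates the word embedding e_t, the answer
# position embedding a_t, and the lexical feature embedding l_t.

def build_input(word_emb, ans_pos_emb, lex_emb):
    """Concatenate [e_t; a_t; l_t] into a single input vector."""
    return word_emb + ans_pos_emb + lex_emb  # list concatenation

e_t = [0.1, 0.2]        # toy word embedding
a_t = [1.0]             # toy answer-position embedding (inside the answer)
l_t = [0.0, 1.0, 0.0]   # toy POS/NER/case feature embedding

x_t = build_input(e_t, a_t, l_t)  # a 6-dim vector in this toy setting
```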

Question Type Prediction
Since different types of questions vary in syntax and semantics, it is essential to predict an accurate type for generating a reasonable question. According to statistics on SQuAD, 78% of the questions in the training set begin with the seven most commonly used question words, as Table 1 shows.
We therefore divide question types into 8 categories: the 7 question words plus an additional type 'others'. The distribution over types is quite unbalanced, suggesting that predicting the correct question type is a hard task.
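The 8-way taxonomy can be sketched as a simple first-word lookup. Note that Table 1 is not reproduced here, so the exact seven-word list below is an assumption based on common English question words:

```python
# Assumed list of the 7 most frequent question words (Table 1 not shown).
QUESTION_WORDS = ["what", "how", "who", "when", "which", "where", "why"]

def question_type(question: str) -> str:
    """Map a question to one of the 8 type categories by its first token."""
    first = question.strip().split()[0].lower()
    return first if first in QUESTION_WORDS else "others"
```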
We use a unidirectional LSTM network to predict the expected question type. Assuming m+1, ..., m+a are the indices of the given answer span in the input sentence, based on the corresponding feature-rich hidden states (h_{m+1}, ..., h_{m+a}), we calculate the question type hidden states as follows:

h^q_j = LSTM^q([h_{m+j}; l_{m+j}], h^q_{j-1})

where l_{m+j} is the corresponding lexical feature embedding. Besides, to make full use of the feature-rich encoder, we take its last hidden state as the initial hidden state of LSTM^q. The last output hidden state h^q_a is then fed into a softmax layer to obtain the type distribution:

P(Q_w | X) = softmax(W h^q_a + b)

The loss of question type prediction is the negative log-likelihood of the target type Q*_w:

E_q = -log P(Q*_w | X)
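The type-prediction head reduces to a softmax over 8 logits plus a negative log-likelihood loss, which can be sketched in plain Python (the logit values are made up; this is not the authors' implementation):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def type_loss(logits, gold_index):
    """E_q = -log P(Q*_w): negative log-likelihood of the gold type."""
    return -math.log(softmax(logits)[gold_index])

logits = [2.0, 0.5, 0.1, -1.0, 0.0, 0.3, -0.5, 0.2]  # 8 question types
probs = softmax(logits)  # probabilities summing to 1
```

A higher logit for the gold type yields a lower loss, which is what drives the classifier toward the correct question word during training.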

Unified Attention-based Decoder
The conventional attention-based decoder adopts a <BOS> token as the first input word at step 1; we replace it with the predicted question word Q_w to guide the generation process. At step i, the decoder hidden state is computed as

s_i = LSTM(w_{i-1}, s_{i-1})

where w_{i-1} is the embedding of the previous output word. Further, to fuse the type information into every step of question generation, our model takes h^q_a into consideration while calculating the context vector, i.e. the context vector is conditioned on (h_1, ..., h_T, h^q_a):

c_i = Σ_t α_{it} h_t + α_{i,T+1} h^q_a

where the weights α_{it} are calculated via the attention mechanism (Bahdanau et al., 2014). Then s_i together with c_i is fed into a two-layer feed-forward network to produce the vocabulary distribution P_vocab.
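The type-aware context vector amounts to treating h^q_a as one extra memory position that attention can read from. A toy sketch under made-up 2-dimensional states (not the authors' code):

```python
import math

def attend(states, scores):
    """Weighted sum of state vectors under softmax-normalized scores."""
    m = max(scores)
    w = [math.exp(s - m) for s in scores]
    z = sum(w)
    alphas = [x / z for x in w]
    dim = len(states[0])
    return [sum(a * st[d] for a, st in zip(alphas, states)) for d in range(dim)]

h = [[1.0, 0.0], [0.0, 1.0]]   # toy encoder states h_1, h_2
h_q_a = [0.5, 0.5]             # toy question-type hidden state
# Attention ranges over (h_1, ..., h_T, h^q_a); scores are arbitrary here.
c_i = attend(h + [h_q_a], [0.2, 0.1, 1.5])
```

Because h^q_a participates at every decoding step, the type signal is not limited to the first generated word.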
Following See et al. (2017), a copy mode is used to duplicate words from the source via pointing:

P(w) = p_g P_vocab(w) + (1 - p_g) P_copy(w)

where P_copy is the distribution of the copy mode and p_g ∈ [0, 1] is a gate that dynamically assigns weights.
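The gating can be sketched as a simple mixture of two distributions (an illustration of the pointer-generator idea, with made-up probabilities, not the authors' code):

```python
def final_distribution(p_vocab, p_copy, p_g):
    """P(w) = p_g * P_vocab(w) + (1 - p_g) * P_copy(w)."""
    words = set(p_vocab) | set(p_copy)
    return {w: p_g * p_vocab.get(w, 0.0) + (1 - p_g) * p_copy.get(w, 0.0)
            for w in words}

p_vocab = {"which": 0.6, "article": 0.3, "OPEC": 0.1}  # toy vocab distribution
p_copy = {"OPEC": 0.7, "article": 0.3}  # toy copy distribution over source words
dist = final_distribution(p_vocab, p_copy, p_g=0.5)
```

Since both inputs are proper distributions and the gate is in [0, 1], the mixture remains a proper distribution; rare source words like "OPEC" get probability mass even if the vocabulary distribution nearly ignores them.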
The loss at step i is the negative log-likelihood of the target word w*_i. To obtain a combined training objective for the two tasks, we add the loss of question type prediction to the loss of question generation, forming the total loss:

L = Σ_i (-log P(w*_i)) + E_q


Experiment

Experiment Settings
Dataset Following previous works, we conduct experiments on two datasets, SQuAD and MARCO. We use the data released by Zhou et al. The representations of answer position, POS tags, NER tags, and word case are each randomly initialized as 32-dimensional vectors. The feature-rich encoder consists of a 2-layer BiLSTM, and the hidden size of all encoders and the decoder is set to 512. The cutoff length of the input sequences is set to 10. At test time, we use beam search with a beam size of 12. The development set is used to select the best checkpoint. To decrease the volatility of the training procedure, we average the nearest 5 checkpoints to obtain a single averaged model.
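The hyperparameters above can be collected into a single config sketch (values taken from the text; the dictionary keys are our own naming, not the authors'):

```python
# Hyperparameters reported in the experiment settings (key names are ours).
CONFIG = {
    "feature_emb_dim": 32,     # answer position / POS / NER / case embeddings
    "encoder_layers": 2,       # BiLSTM layers in the feature-rich encoder
    "hidden_size": 512,        # encoders and decoder
    "beam_size": 12,           # beam search at test time
    "checkpoint_average": 5,   # nearest checkpoints averaged into one model
}
```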
Following existing work, we use BLEU (Papineni et al., 2002) as the metric for automatic evaluation, with BLEU-4 as the main metric.

Upper Bound Analysis
To study the effectiveness of the question type for QG, we make an upper bound analysis. The experimental results are shown in Table 2.
First, we feed the decoder the original first word of each question, regardless of whether that word is a question word or not. As shown in Table 2, compared with the baseline model, the performance improves by 3.41 and 3.32 points on SQuAD and MARCO respectively, a large margin.
Since the beginning words that are not question words form a large vocabulary while each such word occurs only a few times, it is irrational to train a classifier to predict all of these beginning words. Therefore, we reduce the question type vocabulary to question words only. In detail, if a target question starts with a question word, that question word replaces <BOS> as the decoder's first input; otherwise we use the original <BOS> without any replacement. This setting still gains a lot, as shown in Table 2 as "given the question type".
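The reduced upper-bound setup amounts to a simple rule for choosing the first decoder input (a sketch; the question-word list is assumed, since Table 1 is not reproduced here):

```python
# Assumed 7 question words (Table 1 not shown).
QUESTION_WORDS = {"what", "how", "who", "when", "which", "where", "why"}

def first_decoder_input(reference_question: str) -> str:
    """Feed the gold first word only when it is a question word; else <BOS>."""
    first = reference_question.strip().split()[0].lower()
    return first if first in QUESTION_WORDS else "<BOS>"
```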
The above experiments verify the importance of using the proper question type to guide the generation process, suggesting a promising direction for QG.

Results and Analysis
Main Results The experimental results on the two datasets are shown in Table 3. By incorporating question type prediction, our model obtains a clear performance gain over the baseline, with 1.42 points on SQuAD and 1.44 on MARCO. Compared with previous methods, our model outperforms the existing best methods, achieving new state-of-the-art results on both datasets: BLEU-4 of 16.31 on SQuAD and 21.59 on MARCO.

Question Type Accuracy We evaluate different models in terms of Beginning Question Word Accuracy (BQWA). This metric measures, over the references that begin with a question word, the ratio of generated questions that share the same beginning word with the reference. Table 4 displays the BQWA of the two models on both datasets, showing that our unified model brings significant gains in question type prediction. Further, Figure 2 shows the accuracy for individual question words on both datasets. Our unified model outperforms the baseline for all question types on SQuAD, and all but two types on MARCO.
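The BQWA metric as described can be sketched directly (our own illustrative implementation; the question-word set is assumed):

```python
# Assumed question-word set.
QUESTION_WORDS = {"what", "how", "who", "when", "which", "where", "why"}

def bqwa(generated, references):
    """Beginning Question Word Accuracy: over references that begin with a
    question word, the fraction of generated questions sharing that word."""
    hits, total = 0, 0
    for gen, ref in zip(generated, references):
        ref_first = ref.strip().split()[0].lower()
        if ref_first in QUESTION_WORDS:
            total += 1
            hits += gen.strip().split()[0].lower() == ref_first
    return hits / total if total else 0.0
```

References that do not begin with a question word are excluded from the denominator, so the metric isolates question-type errors from other generation errors.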

Model Analysis
We conduct experiments on different variants of our model, as shown in Table 5. "w/o answer hidden state" takes (x_{m+1}, ..., x_{m+a}) as the input of the type decoder instead of the answer hidden states; "w/o question word replace <BOS>" simply uses <BOS> as the first input word of the decoder. Experiments on SQuAD verify the effectiveness of our model settings.

Case Study
To show the effect of question words prediction on question generation, Table 6 lists some typical examples.
In the first example, the baseline fails to recognize "Len-shaped" as an adjective, while the unified model succeeds by utilizing the lexical features that are the input of the question type prediction layer.
In the second example, the baseline assigns a location type to the generated question based on "American" and "Israel", failing to consider the whole answer span. Our unified model resolves this by encoding the answer hidden state sequence as a whole.
The third example shows another typical error: failing to consider the context surrounding the answer. In this example the given answer contains only the number 66; by taking "article 66" into account, we know the question should not be a numerical question.

Context: In response to American aid to Israel, on October 16, 1973, OPEC raised the posted price of oil by 70%, to $5.11 a barrel.
Reference: Why did the oil ministers agree to a cut in oil production?
Baseline: Where did OPEC receive the price of oil by 70%?
Unified-model: Why did OPEC raised the posted price of oil by 70%?

Context: Article 65 of the agreement banned cartels and article 66 made provisions for concentrations, or mergers, and the abuse of a dominant position by companies.
Reference: Which article made provisions for concentrations or mergers and the abuse of a dominant position by companies?
Baseline: How many provisions made provisions for concentrations?
Unified-model: Which article made provisions for concentrations, or mergers?

Conclusion
In this paper, we discuss the challenge of question type prediction for question generation. We propose a unified model that predicts the question type and uses it to guide question generation. Experiments on the SQuAD and MARCO datasets verify the effectiveness of our model. We improve the accuracy of question type prediction by a large margin and achieve new state-of-the-art results for question generation.