Neural Models for Key Phrase Extraction and Question Generation

We propose a two-stage neural model to tackle question generation from documents. First, our model estimates the probability that word sequences in a document are ones that a human would pick when selecting candidate answers by training a neural key-phrase extractor on the answers in a question-answering corpus. Predicted key phrases then act as target answers and condition a sequence-to-sequence question-generation model with a copy mechanism. Empirically, our key-phrase extraction model significantly outperforms an entity-tagging baseline and existing rule-based approaches. We further demonstrate that our question generation system formulates fluent, answerable questions from key phrases. This two-stage system could be used to augment or generate reading comprehension datasets, which may be leveraged to improve machine reading systems or in educational settings.


Introduction
Question answering and machine comprehension have gained increased interest in the past few years. An important contributing factor is the emergence of several large-scale QA datasets (Rajpurkar et al., 2016; Trischler et al., 2016; Nguyen et al., 2016; Joshi et al., 2017). However, the creation of these datasets is a labour-intensive and expensive process that usually comes at significant financial cost. Meanwhile, given the complexity of the problem space, even the largest QA dataset can still exhibit strong biases in many aspects, including question and answer types, domain coverage, and linguistic style.
To address this limitation, we propose and evaluate neural models for automatic question-answer pair generation that involve two inter-related components: first, a system to identify candidate answer entities or events (key phrases) within a passage or document (Becker et al., 2012); second, a question generation module to construct questions about a given key phrase. As a financially more efficient and scalable alternative to the human curation of QA datasets, the resulting system can potentially accelerate further progress in the field.
Specifically, we formulate the key phrase extraction component as modeling the probability of potential answers conditioned on a given document, i.e., P(a|d). Inspired by successful work in question answering, we propose a sequence-to-sequence model that generates a set of key-phrase boundaries. This model can flexibly select an arbitrary number of key phrases from a document. To teach it to assign high probability to human-selected answers, we train the model on large-scale, crowd-sourced question-answering datasets.
We thus take a purely data-driven approach to understanding the priors that humans have when selecting answer candidates, working from the premise that crowdworkers tend to select entities or events that interest them when formulating their own comprehension questions. If this premise is correct, then the growing collection of crowd-sourced question-answering datasets (Rajpurkar et al., 2016; Trischler et al., 2016) can be harnessed to learn models for key phrases of interest to human readers.
Given a set of extracted key phrases, we then approach the question generation component by modeling the conditional probability of a question given a document-answer pair, i.e., P(q|a, d).
To this end, we use a sequence-to-sequence model with attention (Bahdanau et al., 2014) and the pointer-softmax mechanism (Gulcehre et al., 2016). This component is also trained to maximize the likelihood of questions estimated on a QA dataset. When training this component, the model sees the ground-truth answers from the dataset.
Empirically, our proposed model for key phrase extraction outperforms two baseline systems by a significant margin. We support these quantitative findings with qualitative examples of generated question-answer pairs given documents.

Key Phrase Extraction
An important aspect of question generation is identifying which elements of a given document are important or interesting to inquire about. Existing studies formulate key-phrase extraction in two steps. In the first, lexical features (e.g., part-of-speech tags) are used to extract a key-phrase candidate list exhibiting certain types (Liu et al., 2011; Wang et al., 2016; Le et al., 2016; Yang et al., 2017). In the second, ranking models are often used to select a phrase from among the candidates. Medelyan et al. (2009) and Lopez and Romary (2010) used bagged decision trees; Lopez and Romary (2010) also used a Multi-Layer Perceptron (MLP) and a Support Vector Machine to perform binary classification on the candidates. Mihalcea and Tarau (2004), Wan and Xiao (2008), and Le et al. (2016) scored key phrases using PageRank. Heilman and Smith (2010b) asked crowdworkers to rate the acceptability of computer-generated natural language questions as quiz questions, and Becker et al. (2012) solicited quality ratings of text chunks as potential gaps for Cloze-style questions.
These studies are closely related to our proposed work through the common goal of modeling the distribution of key phrases given a document. The major difference is that previous studies begin with a prescribed list of candidates, which might significantly bias the distribution estimate. In contrast, we adopt a dataset that was originally designed for question answering, where crowdworkers presumably tend to pick entities or events that interest them most. We postulate that the resulting distribution, learned directly from data, is more likely to reflect the true relevance of potential answer phrases.
Recently, Meng et al. (2017) proposed a generative model for key phrase prediction with an encoder-decoder framework that is able both to generate words from a vocabulary and to point to words in the document. Their model achieved state-of-the-art results on multiple keyword-extraction datasets. This model shares certain similarities with our key phrase extractor, i.e., using a single neural model to learn the probabilities that words are key phrases. Since their focus was on a hybrid abstractive-extractive task, in contrast to the purely extractive task in this work, a direct comparison between the two works is difficult. Yang et al. (2017) used rule-based methods to extract potential answers from unlabeled text, and then generated questions given the documents and extracted answers using a pre-trained question generation model. The model-generated questions were then combined with human-generated questions for training question answering models. Experiments showed that question answering models can benefit from the augmented data provided by their approach.

Question Generation
Automatic question generation systems are often used to alleviate (or eliminate) the burden of human generation of questions to assess reading comprehension (Mitkov and Ha, 2003; Kunichika et al., 2004). Various NLP techniques have been adopted in these systems to improve generation quality, including parsing (Heilman and Smith, 2010a; Mitkov and Ha, 2003), semantic role labeling (Lindberg et al., 2013), and the use of lexicographic resources like WordNet (Miller, 1995; Mitkov and Ha, 2003). However, the majority of the proposed methods resort to simple, rule-based techniques such as template-based slot filling (Lindberg et al., 2013; Chali and Golestanirad, 2016; Labutov et al., 2015) or syntactic transformation heuristics (Agarwal and Mannem, 2011; Ali et al., 2010), e.g., subject-auxiliary inversion (Heilman and Smith, 2010a). These techniques generally do not capture the diversity of human-generated questions.
To address this limitation, end-to-end-trainable neural models have recently been proposed for question generation in both vision (Mostafazadeh et al., 2016) and language. For the latter, Du et al. (2017) used a sequence-to-sequence model with an attention mechanism derived from the encoder states. Yuan et al. (2017) proposed a similar architecture but further improved model performance with policy-gradient techniques. Wang et al. (2017) proposed a generative model that learns jointly to generate questions or answers from documents.
Model Description

Notations
Several components introduced in the following sections share the same model architecture for encoding text sequences. The common notations are explained in this section.
Unless otherwise specified, w refers to word tokens, e to word embeddings, and h to the annotation vectors (also commonly referred to as hidden states) produced by an RNN. Superscripts specify the source of a word, e.g., d for documents, p for key phrases, a for (gold) answers, and q for questions. Subscripts index the position inside a sequence. For example, e^d_i is the embedding vector for the i-th token in the document.
A sequence of words is often encoded into annotation vectors (denoted h) by applying a bidirectional LSTM encoder to the corresponding sequence of word embeddings. For example, h^q_j = LSTM(e^q_j, h^q_{j-1}) is the annotation vector for the j-th word in a question.
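As an illustration, the sketch below shows how such annotation vectors could be computed. It uses PyTorch, which the paper does not specify, and the dimensions are placeholders rather than the paper's configuration:

```python
# Minimal sketch of the shared text encoder: word embeddings passed through
# a bidirectional LSTM, yielding one annotation vector h_i per token.
import torch
import torch.nn as nn

class SequenceEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        # bidirectional=True concatenates the forward and backward states,
        # so each annotation vector has size 2 * hidden_dim.
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                            bidirectional=True)

    def forward(self, token_ids):
        e = self.embedding(token_ids)   # (batch, seq_len, emb_dim)
        h, _ = self.lstm(e)             # (batch, seq_len, 2 * hidden_dim)
        return h                        # annotation vectors h_1, ..., h_n

encoder = SequenceEncoder(vocab_size=10000)
doc = torch.randint(0, 10000, (1, 5))   # a toy "document" of 5 token ids
annotations = encoder(doc)              # shape (1, 5, 256)
```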

Key Phrase Extraction
In this section, we describe a simple baseline as well as two proposed neural models for extracting key phrases (answers) from documents.

Entity Tagging Baseline
As our first baseline, we use spaCy to predict all entities in a document as relevant key phrases (we call this model ENT). This is motivated by the fact that entities constitute the largest proportion (over 50%) of answers in the SQuAD dataset (Rajpurkar et al., 2016). Entities include dates (September 1967), numeric entities (3, five), people (William Smith), locations (the British Isles), and other named concepts (Buddhism).
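A minimal sketch of this baseline using spaCy's entity recognizer follows; the specific spaCy model is our assumption, since the paper does not name one:

```python
# Sketch of the ENT baseline: treat every entity span spaCy detects as a
# candidate answer key phrase. Requires an English model, e.g.
# `python -m spacy download en_core_web_sm`.
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_candidate_answers(document: str):
    doc = nlp(document)
    # Each entity span gives its text and (start, end) character offsets.
    return [(ent.text, ent.start_char, ent.end_char) for ent in doc.ents]

print(extract_candidate_answers(
    "William Smith visited the British Isles in September 1967."))
# e.g. [('William Smith', 0, 13), ('the British Isles', 22, 39),
#       ('September 1967', 43, 57)]
```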

Neural Entity Selection
The baseline model above naïvely selects all entities as candidate answers. One pitfall is that it exhibits high recall at the expense of precision (Table 1), since not all entities lead to interesting questions. We first attempt to address this with a neural entity selection model (NES) that selects a subset of entities from a list of candidates provided by our ENT baseline. Our neural model takes as input a document (i.e., a sequence of words), D = (w^d_1, ..., w^d_{n_d}), and a list of n_e entities given as (start, end) locations within the document, E = ((e^{start}_1, e^{end}_1), ..., (e^{start}_{n_e}, e^{end}_{n_e})). The model is then trained on the binary classification task of predicting whether an entity overlaps with any of the human-provided answers. Specifically, we maximize Σ^{n_e}_{i=1} log P(e_i|D). We parameterize P(e_i|D) using a three-layer multilayer perceptron (MLP) that takes as input the concatenation of three vectors (h^d_{n_d}; h^d_{avg}; h^e_i), where h^d_{avg} and h^d_{n_d} are the average and the final state of the document annotation vectors, respectively, and h^e_i is the average of the annotation vectors corresponding to the i-th entity.
During inference, we select the top k entities with highest likelihood under our model.We use k = 6 in our experiments as determined by hyperparameter search.
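The sketch below illustrates the NES scoring step under stated assumptions: the MLP's hidden sizes and tanh activations are our choices, since the text specifies only a three-layer MLP over the concatenated vectors:

```python
# Hedged sketch of the neural entity selection (NES) scorer: concatenate the
# final and average document annotation vectors with the average annotation
# vector over an entity's span, and score the entity with a three-layer MLP.
import torch
import torch.nn as nn

class EntityScorer(nn.Module):
    def __init__(self, annot_dim=256, hidden_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 * annot_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, 1))

    def forward(self, h_doc, spans):
        # h_doc: (seq_len, annot_dim) document annotation vectors
        h_final, h_avg = h_doc[-1], h_doc.mean(dim=0)
        feats = torch.stack([
            torch.cat([h_final, h_avg, h_doc[s:e].mean(dim=0)])
            for (s, e) in spans])
        return torch.sigmoid(self.mlp(feats)).squeeze(-1)  # P(e_i | D)

# During inference, keep the k = 6 highest-scoring entities, e.g.:
# scores = scorer(h_doc, spans); top_k = scores.topk(min(6, len(spans)))
```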

Pointer Networks
While a significant fraction of answers in QA datasets like SQuAD are entities, entities alone may be insufficient for detecting different aspects of a document. Many documents are entity-less, and entity taggers may fail to recognize some entities. To this end, we build a neural model that is trained from scratch to extract all human-selected answer phrases in a particular document. We parameterize this model as a pointer network (Vinyals et al., 2015) trained to point sequentially to the start and end locations of all labeled answers in a document. An autoregressive decoder LSTM, conditioned on the annotation vectors (extracted in the same fashion as in the NES model), is trained to point (attend) to all of the start and end locations of answers from left to right via an attention mechanism. We add a special termination token to the document and train the decoder to attend to it once it has generated all key phrases. This enables the model to extract a variable number of key phrases depending on the input document, in contrast to the work of Meng et al. (2017), where a fixed number of key phrases is generated per document.
A pointer network is an extension of sequence-to-sequence models (Sutskever et al., 2014) in which the target sequence consists of positions in the source sequence. An autoregressive decoder RNN is trained to attend to these positions in the input, conditioned on an encoding of the input produced by an encoder RNN. We denote the decoder's annotation vectors as (h^p_1, h^p_2, ..., h^p_{2n_a}), where n_a is the number of answer key phrases, and h^p_1 and h^p_2 correspond to the start and end annotation vectors for the first answer key phrase, and so on. We parameterize P(w^d_i = start | h^p_1 ... h^p_j, h^d) and P(w^d_i = end | h^p_1 ... h^p_j, h^d) using the general attention mechanism (Luong et al., 2015) between the decoder and encoder annotation vectors, i.e., a softmax over the scores h^{p⊤}_j W_1 h^d_i, where W_1 is a learned parameter matrix. The inputs at each step of the decoder are the words from the document that correspond to the start and end locations pointed to by the decoder.
During inference, we employ a decoding strategy that greedily picks the best location from the softmax vector at every step, then post-processes the results to remove duplicate key phrases. Since the output sequence is relatively short, we observed similar performance with greedy decoding and beam search.
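A sketch of this inference procedure is given below; the `attend` callable, which wraps one trained decoder step and returns the argmax position, is a hypothetical interface standing in for the model:

```python
# Sketch of the greedy decoding loop for the pointer network: at each step,
# pick the argmax position from the attention distribution; pair consecutive
# picks into (start, end) spans; stop at the termination token; deduplicate.
def greedy_extract(attend, doc_len, max_phrases=20):
    term = doc_len          # index of the appended termination token
    spans, state, picks = set(), None, []
    for _ in range(2 * max_phrases):
        position, state = attend(picks, state)  # argmax over softmax scores
        if position == term:
            break
        picks.append(position)
        if len(picks) % 2 == 0:     # a (start, end) pair is complete
            start, end = picks[-2], picks[-1]
            spans.add((start, end)) # set() removes duplicate key phrases
    return sorted(spans)
```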
We also experimented with a BIO tagging model using an LSTM-CRF (Lample et al., 2016) but were unable to make the model predict anything except "O" for every token.

Question Generation
The question generation model adopts a sequence-to-sequence framework (Sutskever et al., 2014) with an attention mechanism (Bahdanau et al., 2014) and a pointer-softmax decoder (Gulcehre et al., 2016). We make use of the pointer-softmax mechanism because it lets us take advantage of the tendency in RC datasets for questions to reuse words from the document. Our setup for this module is identical to that of Yuan et al. (2017). It takes a document w^d_{1:n_d} and an answer w^a_{1:n_a} as input, and outputs a question ŵ^q_{1:n_q}. An input word w_i (from the document or the answer) is represented by concatenating its word embedding e_i with a character-level embedding e^{ch}_i. Each character in the alphabet receives an embedding vector, and e^{ch}_i is the final state of a bi-LSTM run over the embedding vectors corresponding to the character sequence of the word.
To leverage the extractive nature of answers in SQuAD, we encode an answer using the document annotation vectors at the answer-word positions. Specifically, if an answer phrase w^a_{1:n_a} occupies the document span w^d_{a_1:a_{n_a}}, we first encode the corresponding document annotation vectors with a condition-aggregation BiLSTM into h_{1:n_a}. The concatenation of the final state h_{n_a} with the answer annotation vector h^a_{n_a} serves as the answer representation. The RNN decoder employs a pointer-softmax module (Gulcehre et al., 2016). At each step of the generation process, the decoder decides adaptively whether to (a) generate from the decoder vocabulary or (b) point to a word in the source sequence (the document) and copy it over. The pointer-softmax decoder thus has two components: a pointer attention mechanism and a generative decoder.
The subsequent mathematical notation deviates slightly from the previous notation; we use (t) as a superscript for the decoding time step. In the pointing decoder, recurrence is implemented with two cascading LSTM cells c_1 and c_2:

    s^{(t)}_1 = c_1(y^{(t-1)}, s^{(t-1)}_2)
    s^{(t)}_2 = c_2(v^{(t)}, s^{(t)}_1),

where s_1 and s_2 are the recurrent states, y^{(t-1)} is the embedding of the decoder output from the previous time step, and v^{(t)} is the context vector, i.e., the sum of the document annotations h^d_i weighted by the document attention α^{(t)}_i (Equation (3)):

    v^{(t)} = Σ_i α^{(t)}_i h^d_i.

At each time step t, the pointing decoder computes a distribution α^{(t)} over the document word positions (i.e., a document attention; Bahdanau et al., 2014). Each element is defined as

    α^{(t)}_i = f(h^d_i, s^{(t-1)}_2),                                   (3)

where f is a two-layer MLP with tanh and softmax activation, respectively. The generative decoder, on the other hand, defines a distribution over a prescribed decoder vocabulary with a two-layer MLP g:

    o^{(t)} = g(y^{(t-1)}, s^{(t)}_2, v^{(t)}).

Pointer-softmax is implemented by interpolating the generative and the pointing distributions:

    P(ŵ^{(t)}) = s^{(t)} · α^{(t)} + (1 - s^{(t)}) · o^{(t)},

where s^{(t)} is a switch scalar computed at each time step by a three-layer MLP h:

    s^{(t)} = h(s^{(t)}_2, v^{(t)}, α^{(t)}, o^{(t)}).

The first two layers of h use tanh activation with highway connections, and the final layer uses sigmoid activation.
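For illustration, a single pointer-softmax step can be sketched as below. Note one simplification: scattering the pointing probabilities onto a shared vocabulary is a common realization borrowed from pointer-generator-style models, whereas the pointer-softmax of Gulcehre et al. (2016) keeps the pointing and generative distributions separate; the interpolation by the switch scalar is the same.

```python
# Minimal sketch of one pointer-softmax decoding step: interpolate the
# document-attention (pointing) distribution with the generative softmax
# using the switch scalar s_t. The cascaded LSTM cells and the MLPs f, g, h
# from the text are assumed to be trained modules that produced these inputs.
import torch

def pointer_softmax_step(alpha_t, gen_logits, switch_t, doc_token_ids, vocab_size):
    # alpha_t:       (doc_len,)    document attention, sums to 1
    # gen_logits:    (vocab_size,) output of the generative MLP g
    # switch_t:      scalar in (0, 1) from the three-layer MLP h
    # doc_token_ids: (doc_len,)    LongTensor of document word ids
    p_gen = torch.softmax(gen_logits, dim=-1)
    # Scatter the pointing probabilities onto the words they point to, so
    # both distributions live over the same vocabulary (a simplification).
    p_copy = torch.zeros(vocab_size)
    p_copy.scatter_add_(0, doc_token_ids, alpha_t)
    return switch_t * p_copy + (1.0 - switch_t) * p_gen
```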

Experiments and Results

Datasets
We conduct our experiments on the SQuAD (Rajpurkar et al., 2016) and NewsQA (Trischler et al., 2016) datasets. These are machine comprehension corpora consisting of over 100k crowd-sourced question-answer pairs. SQuAD contains 536 articles from Wikipedia, while NewsQA was built on 12,744 news articles. Simple preprocessing is performed, including lower-casing and word tokenization using NLTK. Since the test split of SQuAD is hidden from the public, we use 5,158 question-answer pairs (self-contained in 23 Wikipedia articles) from the training set for development, and use the official development data to report test results.

Implementation Details
We train all models by stochastic gradient descent, with a minibatch size of 32, using the Adam optimizer.
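In code, the setup amounts to the following sketch; the model is a placeholder, and since the text gives no learning rate or other Adam hyperparameters, library defaults are shown:

```python
# Hedged sketch of the training setup: Adam optimizer, minibatches of 32.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Linear(10, 2)  # placeholder standing in for either model
optimizer = torch.optim.Adam(model.parameters())  # defaults; lr not given
data = TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,)))
loader = DataLoader(data, batch_size=32, shuffle=True)  # minibatch size 32
```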

Key Phrase Extraction
Key phrase extraction models use pretrained, 300-dimensional word embeddings generated using a word2vec extension (Ling et al., 2015) and the English Gigaword 5 corpus. We used bidirectional LSTMs of 256 dimensions (128 forward, 128 backward) to encode the document, and an LSTM of 256 dimensions as our decoder in the pointer network. A dropout rate of 0.5 was used at the output of every layer in the network.

Question Generation
The question decoder uses a vocabulary of the 2,000 most frequent words in the training data (questions only). This limited vocabulary is possible because the question generator can copy over out-of-vocabulary words from the document with its pointer-softmax mechanism. The decoder embedding matrix is initialized with 300-dimensional GloVe vectors (Pennington et al., 2014), and the dimensionality of the character representations is 32. The number of hidden units is 384 for both the encoder and decoder RNN cells. Dropout is applied at a rate of 0.3 to all embedding layers as well as between the hidden states in the encoder/decoder RNNs across time steps.
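Building such a frequency-capped decoder vocabulary is straightforward; a sketch follows, with the special tokens being our assumption:

```python
# Sketch: build the 2,000-word decoder vocabulary from training questions
# only; out-of-vocabulary words are left to the copy mechanism.
from collections import Counter

def build_decoder_vocab(train_questions, size=2000):
    counts = Counter(tok for q in train_questions for tok in q.split())
    specials = ["<pad>", "<unk>", "<sos>", "<eos>"]  # assumed special tokens
    words = [w for w, _ in counts.most_common(size - len(specials))]
    return {w: i for i, w in enumerate(specials + words)}

vocab = build_decoder_vocab(["what is inflammation produced by ?",
                             "who ruled all of china ?"])
```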

Quantitative Evaluation of Key Phrase Extraction
Since each key phrase is itself a multi-word unit, we believe that a naïve, word-level F1 that considers an entire key phrase as a single unit is not well suited to this evaluation. We therefore propose an extension of the SQuAD F1 metric (for a single answer span) to multiple spans within a document, which we call the multi-span F1 score. The metric is calculated as follows. Given a predicted phrase ê_i and a gold phrase e_j, we first construct a pairwise, token-level F1 score matrix with elements f_{i,j} between the two phrases ê_i and e_j. Max-pooling along the gold-label axis essentially assesses the precision of each prediction, with partial matches accounted for by the pairwise F1 (identical to the evaluation of a single answer in SQuAD) in the cells: p_i = max_j(f_{i,j}). Analogously, the recall for label e_j can be computed by max-pooling along the prediction axis: r_j = max_i(f_{i,j}). We define the multi-span F1 score using the mean precision p̄ = avg(p_i) and recall r̄ = avg(r_j):

    F1_{MS} = 2 p̄ r̄ / (p̄ + r̄).
Note that existing evaluations (e.g., that of Meng et al. (2017)) can be seen as the above computation performed on the matrix of exact-match scores between predicted and gold key phrases. By using token-level F1 scores between phrase pairs, we allow fuzzy matches.
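A direct implementation of the metric follows; `token_f1` is the standard SQuAD-style token-overlap F1 between two phrases:

```python
# Multi-span F1 as described above: token-level F1 between every
# predicted/gold phrase pair, max-pooled per prediction (precision) and per
# gold label (recall), then the harmonic mean of the two averages.
from collections import Counter

def token_f1(pred: str, gold: str) -> float:
    p, g = pred.split(), gold.split()
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    prec, rec = overlap / len(p), overlap / len(g)
    return 2 * prec * rec / (prec + rec)

def multi_span_f1(predictions, golds) -> float:
    f = [[token_f1(p, g) for g in golds] for p in predictions]
    precision = sum(max(row) for row in f) / len(predictions)
    recall = sum(max(f[i][j] for i in range(len(predictions)))
                 for j in range(len(golds))) / len(golds)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(multi_span_f1(["the british isles", "1967"],
                    ["british isles", "september 1967"]))  # fuzzy matches
```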

Our evaluation of key phrase extraction systems by this metric is presented in Table 1. We compare answer phrases extracted by the method of Heilman and Smith (2010a) (henceforth referred to as H&S), our baseline entity tagger, the neural entity selection module, and the pointer network. As expected, the entity tagging baseline achieves the best recall, likely by over-generating candidate answers. The NES model, on the other hand, exhibits much better precision and consequently outperforms the entity tagging baseline significantly in F1. This trend persists when comparing the NES model and the pointer network. The H&S model exhibits high recall but lacks precision, similar to the baseline entity tagger. This is not surprising, since that model is not trained on SQuAD's answer-phrase distribution.

Qualitative Evaluation of Key Phrase Extraction
Qualitatively, we observe that the entity-based models have a strong bias toward numeric types, which often fail to capture interesting information in a document. We also notice that entity-based systems tend to select the central topical entity as the answer, which does not match the distribution of answers typically selected by humans. For example, given a Wikipedia article on Kenya stating that agriculture is the second largest contributor to kenya 's gross domestic product (gdp), entity-based systems propose kenya as an answer phrase. This leads to the (low-quality) question what country is nigeria's second largest contributor to? Given the same document, the pointer model picked agriculture as the answer and asked what is the second largest contributor to kenya 's gross domestic product ?

Quantitative Evaluation of QA pairs
We can quantitatively evaluate our question generation module by conditioning it on gold answers from the SQuAD development set, then using standard automatic evaluation metrics for generative models of text, such as BLEU. Evaluated in this manner, our question generation model yields a BLEU-4 score of 10.4. However, there can exist many possible ways to formulate a question given the same answer. BLEU thus becomes a less desirable metric because it penalizes any generation that does not closely match the reference question lexically. To address this issue, we propose to evaluate a generated question by employing a pre-trained QA model. Specifically, suppose question q̂ is generated from document d and answer a, and the pre-trained QA model outputs answer â given the input d and q̂. If the QA model is assumed to be able to answer the gold question q with the gold answer a, then the F1 score between a and â may serve as a proxy for the semantic equivalence between q and q̂, regardless of the amount of word/n-gram overlap between them.
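Schematically, the proposed proxy evaluation looks like the following, where `generate_question` and `qa_model` are hypothetical interfaces to the trained modules and `token_f1` is the SQuAD-style token F1 from the multi-span metric sketch above:

```python
# Sketch of the QA-model-based evaluation: generate a question for a gold
# answer, have a pre-trained QA model answer it, and score the token overlap
# between the gold answer and the QA model's answer.
def qa_proxy_score(document, gold_answer, generate_question, qa_model):
    q_hat = generate_question(document, gold_answer)  # q̂ ~ P(q | a, d)
    a_hat = qa_model(document, q_hat)                 # â from the QA model
    return token_f1(a_hat, gold_answer)               # proxy for q ≈ q̂
```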
Quantitatively, a match-LSTM model (Wang and Jiang, 2016) pre-trained on gold SQuAD question-answer pairs achieves an F1 score of 72.4% on our generated questions, compared to 73.8% on the SQuAD dev set.
In addition to the automatic evaluation metrics, we also undertook a human evaluation of generated questions and answers.

Qualitative Evaluation of QA pairs
We present several answer-extraction and question-generation examples in Table 2. Each example contains a document and three corresponding QA pairs, generated respectively by H&S, by our two-stage framework, and by the original SQuAD crowdworkers.
We now discuss the relative qualities of QA pairs from each synthetic method.
H&S Key phrases selected by the H&S model are structurally distinct from the PtrNet and human-generated answers. For example, they may start with prepositions, such as of, by, and to, or consist of very long phrases like that student motivation and attitudes towards school are closely linked to student-teacher relationships. As seen in Figure 1, these key phrases may also contain vague phrases such as "this theory", "some studies", or "a person", which renders them less natural for question generation. The H&S question generator also appears to produce some ungrammatical sentences, e.g., the first time - what was the yuan dynasty that non-native chinese people ruled all of china ?
Our system Since our key phrase extractor was trained on SQuAD, the selected key phrases more closely resemble gold SQuAD answers. However, the generated questions sometimes do not target the extracted answers, e.g., eicosanoids and cytokines - what are bacteria produced by ? (first document in Table 2). Interestingly, our model is sometimes able to resolve coreferent entities. For instance, to generate the mongol empire - the yuan dynasty is considered to be the continuation of what ?, the model must resolve the pronoun it to yuan dynasty in it is generally considered to be the continuation of the mongol empire (third document in Table 2).

Human Evaluation Studies
We carried out human evaluations on the question generation module in isolation as well as in conjunction with the key phrase extraction module.
Evaluating the ability of the Question Generation Module to transfer to new settings We asked crowdworkers who are part of an internal evaluation system to evaluate two aspects of questions generated by our module: fluency and correctness. Our system was provided with Internet articles and candidate answers selected from an internal search engine, thereby evaluating the model's ability to generalize from simple RC datasets to the real world. For fluency evaluations, annotators were asked whether the generated questions sounded natural (ignoring semantics), with scores of 0/1/2 corresponding to "No", "Somewhat", and "Yes". 17.5% were labeled 0, 22.7% were labeled 1, and 59.8% were labeled 2. For correctness evaluations, annotators were asked if the given answer was the correct answer for the given question. 64.4% of questions were labeled incorrect, leaving 35.6% labeled as correct. This particular evaluation differs slightly from the others with regard to the module used (it was trained on a combination of SQuAD, NewsQA, and TriviaQA (Joshi et al., 2017)). Also, the documents and answers used were provided via an internal tool. In total, 1,302 annotations were collected.

Comparison to human generated questions
We present annotators with documents from SQuAD's official development set and two sets of question-answer pairs, one from our model (machine-generated) and the other from SQuAD (human-generated). Annotators are then asked to identify which question-answer pair is machine-generated. The order in which the pairs appear is randomized across examples. Annotators are free to use any criterion to make the distinction, such as poor grammar, the answer phrase not correctly answering the generated question, or unnatural answer phrases.
We presented 14 annotators with a total of 740 documents, each with 2 corresponding QA pairs. Annotators identified the machine-generated pairs 77.8% of the time, with a standard deviation of 8.34%.
Implicit comparison to H&S To compare our system to existing methods (H&S), we orchestrate an implicit comparison grounded in human-generated QA pairs from SQuAD. We present human annotators with a document and two QA pairs: one that comes from the true development set, and the other from either our system or H&S, at random. Annotators are not told that there are two different models generating QA pairs. As above, annotators are asked to identify which QA pair is human-generated and which is synthetic.
We presented a single annotator with 100 documents, each with two QA pairs. For 45 documents, the synthetic QA pair came from our model; for the remaining 55, the synthetic pair was from H&S. The annotator distinguished correctly between our system's output and the human-generated pair in 30 cases (66.7%), and did so in 45 cases (81.8%) for H&S. This experiment suggests that our system's generated QA pairs are less distinguishable from human QA pairs.
Comparison to H&S In a more direct evaluation, we present annotators with documents from the SQuAD development set along with one QA pair generated by the H&S model and one generated by ours. We then ask annotators which QA pair they prefer.
We presented the same single annotator with 200 such examples. In 107 cases (53.5%), the annotator preferred the pair generated by our model. This suggests that, without human-generated QA pairs for comparison, the annotator considers the two models' outputs to be roughly equal in quality.

Conclusion
We propose a two-stage framework to tackle the problem of question generation from documents. First, we use a question answering corpus to train a neural model to estimate the distribution of key phrases that humans are likely to pick when asking questions about a document. We present two neural models: one that ranks entities proposed by an entity tagging system, and another that points to key-phrase start and end boundaries with a pointer network. When compared to an entity tagging baseline, the proposed models exhibit significantly better results.
We adopt a sequence-to-sequence model to generate questions conditioned on the key phrases selected in the framework's first stage. Our question generator is inspired by an attention-based translation model, and uses the pointer-softmax mechanism to dynamically switch between copying a word from the document and generating a word from a vocabulary. Qualitative examples show that the generated questions exhibit both syntactic fluency and semantic relevance to the conditioning documents and answers, and appear useful for assessing reading comprehension in educational settings. In future work, we will investigate fine-tuning the two-stage framework end to end. Another interesting direction is to explore abstractive key-phrase extraction.
Table 2: Qualitative examples of detected key phrases and generated questions.

Doc. 1: inflammation is one of the first responses of the immune system to infection . the symptoms of inflammation are redness , swelling , heat , and pain , which are caused by increased blood flow into tissue . inflammation is produced by eicosanoids and cytokines , which are released by injured or infected cells . eicosanoids include prostaglandins that produce fever and the dilation of blood vessels associated with inflammation , and leukotrienes that attract certain white blood cells ( leukocytes ) . . .
Q-A H&S: cytokines - who is inflammation produced by ? | of the first responses of the immune system to infection - what is inflammation one of ?
Q-A PtrNet: leukotrienes - what can attract certain white blood cells ? | eicosanoids and cytokines - what are bacteria produced by ?
Q-A Gold SQuAD: inflamation - what is one of the first responses the immune system has to infection ? | eicosanoids and cytokines - what compounds are released by injured or infected cells , triggering inflammation ?

Doc. 2: research shows that student motivation and attitudes towards school are closely linked to student-teacher relationships . enthusiastic teachers are particularly good at creating beneficial relations with their students . their ability to create effective learning environments that foster student achievement depends on the kind of relationship they build with their students . useful teacher-to-student interactions are crucial in linking academic success with personal achievement . here , personal success is a student 's internal goal of improving himself , whereas academic success includes the goals he receives from his superior . a teacher must guide his student in aligning his personal goals with his academic goals . students who receive this positive influence show stronger self-confidence and greater personal and academic success than those without these teacher interactions .
Q-A H&S: research - what shows that student motivation and attitudes towards school are closely linked to student-teacher relationships ? | useful teacher-to-student interactions - what are crucial in linking academic success with personal achievement ? | to student-teacher relationships - what does research show that student motivation and attitudes towards school are closely linked to ? | that student motivation and attitudes towards school are closely linked to student-teacher relationships - what does research show to ?
Q-A PtrNet: student-teacher relationships - what are the student motivation and attitudes towards school closely linked to ? | enthusiastic teachers - who are particularly good at creating beneficial relations with their students ? | teacher-to-student interactions - what is crucial in linking academic success with personal achievement ? | a teacher - who must guide his student in aligning his personal goals ?
Q-A Gold SQuAD: student-teacher relationships - what is student motivation about school linked to ? | beneficial - what type of relationships do enthusiastic teachers cause ? | aligning his personal goals with his academic goals - what should a teacher guide a student in ? | student motivation and attitudes towards school - what is strongly linked to good student-teacher relationships ?

Doc. 3: the yuan dynasty was the first time that non-native chinese people ruled all of china . in the historiography of mongolia , it is generally considered to be the continuation of the mongol empire . mongols are widely known to worship the eternal heaven . . .
Q-A H&S: the first time - what was the yuan dynasty that non-native chinese people ruled all of china ? | the yuan dynasty - what was the first time that non-native chinese people ruled all of china ?
Q-A PtrNet: the mongol empire - the yuan dynasty is considered to be the continuation of what ? | worship the eternal heaven - what are mongols widely known to do in historiography of mongolia ?
Q-A Gold SQuAD: non-native chinese people - the yuan was the first time all of china was ruled by whom ? | the eternal heaven - what did mongols worship ?

Doc. 4: on july 31 , 1995 , the walt disney company announced an agreement to merge with capital cities/abc for $ 19 billion . . . in 1998 , abc premiered the aaron sorkin-created sitcom sports night , centering on the travails of the staff of a sportscenter-style sports news program ; despite earning critical praise and multiple emmy awards , the series was cancelled in 2000 after two seasons .
Q-A H&S: an agreement to merge with capital cities/abc for $ 19 billion - what did the walt disney company announce on july 31 , 1995 ? | the walt disney company - what announced an agreement to merge with capital cities/abc for $ 19 billion on july 31 , 1995 ?
Q-A PtrNet: 2000 - in what year was the aaron sorkin-created sitcom sports night cancelled ? | walt disney company - who announced an agreement to merge with capital cities/abc for $ 19 billion ?
Q-A Gold SQuAD: july 31 , 1995 - when was the disney and abc merger first announced ? | sports night - what aaron sorkin created show did abc debut in 1998 ?

Figure 1: A comparison of key phrase extraction methods. Red phrases are extracted by the pointer network, violet by H&S, green by the baseline, brown correspond to SQuAD gold answers, and cyan indicates an overlap between the pointer model and SQuAD gold answers. The last paragraph is an exception, where lyndon b. johnson and april 20 are extracted by H&S as well as the baseline model.

Table 1: Model evaluation on key phrase extraction. For each model, we report the multi-span F1 (F1_MS), precision, and recall on the validation and test sets.