Evidence Sentence Extraction for Machine Reading Comprehension

Remarkable success has been achieved in the last few years on some limited machine reading comprehension (MRC) tasks. However, it is still difficult to interpret the predictions of existing MRC models. In this paper, we focus on extracting evidence sentences that can explain or support the answers of multiple-choice MRC tasks, where the majority of answer options cannot be directly extracted from reference documents. Due to the lack of ground truth evidence sentence labels in most cases, we apply distant supervision to generate imperfect labels and then use them to train an evidence sentence extractor. To denoise these labels, we apply a recently proposed deep probabilistic logic learning framework that incorporates both sentence-level and cross-sentence linguistic indicators for indirect supervision. We feed the extracted evidence sentences into existing MRC models and evaluate the end-to-end performance on three challenging multiple-choice MRC datasets: MultiRC, RACE, and DREAM, achieving comparable or better performance than the same models that take as input the full reference document. To the best of our knowledge, this is the first work to extract evidence sentences for multiple-choice MRC.


Introduction
Recently, there has been increased interest in machine reading comprehension (MRC). In this work, we mainly focus on multiple-choice MRC (Richardson et al., 2013; Mostafazadeh et al., 2016; Ostermann et al., 2018): given a document and a question, the task aims to select the correct answer option(s) from a small number of answer options associated with this question. Compared to extractive and abstractive MRC tasks (e.g., Rajpurkar et al., 2016; Kočiskỳ et al., 2018; Reddy et al., 2019), where most questions can be answered using spans from the reference documents, the majority of answer options cannot be directly extracted from the given texts.
Existing multiple-choice MRC models (Wang et al., 2018b; Radford et al., 2018) take as input the entire reference document and seldom offer any explanation, making it extremely difficult to interpret their predictions. It is natural for human readers to use sentences from a given text to explain why they select a certain answer option in reading tests (Bax, 2013). In this paper, as a preliminary attempt, we focus on extracting evidence sentences that entail or support a question-answer pair from the given reference document.
For extractive MRC tasks, information retrieval techniques can serve as very strong baselines for extracting sentences that contain the questions and their answers, when questions provide sufficient information and most questions are factoid and answerable from the content of a single sentence (Lin et al., 2018; Min et al., 2018). However, we face unique challenges in extracting evidence sentences for multiple-choice MRC tasks. The correct answer options of a significant number of questions (e.g., 87% of questions in RACE (Lai et al., 2017; Sun et al., 2019)) are not extractive, which may require advanced reading skills such as inference over multiple sentences and utilization of prior knowledge (Lai et al., 2017; Khashabi et al., 2018; Ostermann et al., 2018). Besides, the existence of misleading wrong answer options also dramatically increases the difficulty of evidence sentence extraction, especially when a question provides insufficient information. For example, in Figure 1, given the reference document and the question "Which of the following statements is true according to the passage?", almost all the tokens in the wrong answer option B "In 1782, Harvard began to teach German." appear in the document (i.e., sentences S9 and S11), while the question gives little useful information for locating answers. Furthermore, we notice that even humans sometimes have difficulty finding pieces of evidence when the relationship between a question and its correct answer option is only implicitly indicated in the document (e.g., "What is the main idea of this passage?"). Considering these challenges, we argue that extracting evidence sentences for multiple-choice MRC is at least as difficult as that for extractive MRC or factoid question answering.
Given a question, its associated answer options, and a reference document, we propose a method to extract sentences that can potentially support or explain the (question, correct answer option) pair from the reference document. Due to the lack of ground truth evidence sentences in most multiple-choice MRC tasks, inspired by distant supervision, we first extract silver standard evidence sentences based on the lexical features of a question and its correct answer option (Section 2.2); then we use these noisy labels to train an evidence sentence extractor (Section 2.1). To denoise the imperfect labels, we also manually design sentence-level and cross-sentence linguistic indicators, such as "adjacent sentences tend to have the same label", and accommodate all the linguistic indicators within a recently proposed deep probabilistic logic learning framework (Wang and Poon, 2018) for indirect supervision (Section 2.3).
Previous extractive MRC and question answering studies (e.g., Lin et al., 2018) indicate that a model should be able to achieve comparable end-to-end performance if it can accurately predict the evidence sentence(s). Inspired by this observation, to indirectly evaluate the quality of the extracted evidence sentences, we keep only the selected sentences as the new reference document for each instance and evaluate the performance of a machine reader (Wang et al., 2018b; Radford et al., 2018) on three challenging multiple-choice MRC datasets: MultiRC (Khashabi et al., 2018), RACE (Lai et al., 2017), and DREAM (Sun et al., 2019). Experimental results show that we can achieve comparable or better performance than the same reader that considers the full context. The comparison between ground truth evidence sentences and automatically selected sentences indicates that there is still room for improvement.
Our primary contributions are as follows: 1) to the best of our knowledge, this is the first work to extract evidence sentences for multiple-choice MRC; 2) we show that it may be a promising direction to leverage various sources of linguistic knowledge for denoising noisy evidence sentence labels. We hope our attempts and observations can encourage the research community to develop more explainable MRC models that simultaneously provide predictions and textual evidence.

Figure 1: A sample instance from RACE.

Reference Document:
S1: Started in 1636, Harvard University is the oldest of all the colleges and universities in the United States, followed by Yale, Princeton, Columbia...
S2: In the early years, these schools were nearly the same.
S3: Only young men went to college.
S4: All the students studied the same subjects, and everyone learned Latin and Greek.
...
S9: In 1782, Harvard started a medical school for young men who wanted to become doctors.
...
S11: In 1825, besides Latin and Greek, Harvard began to teach modern languages, such as French and German.
S12: Soon it began to teach American history.
S13: As knowledge increased, Harvard and other colleges began to teach many new subjects.
Question: Which of the following statements is true according to the passage?
Options:
A. in the early years, everyone can go to colleges.
B. in 1782, Harvard began to teach German.
C. in the early years, different colleges majored in different fields.
D. more and more courses were taught in college with the improvement of knowledge.

Method

We will present our evidence sentence extractor (Section 2.1), trained on the noisy training data generated by distant supervision (Section 2.2) and denoised by an existing deep probabilistic logic framework that incorporates different kinds of linguistic indicators (Section 2.3). The extractor is followed by an independent neural reader for evaluation. See an overview in Figure 1.

Evidence Sentence Extractor
We use a multi-layer multi-head transformer (Vaswani et al., 2017) to extract evidence sentences. Let W_w and W_p be the word (subword) and position embeddings, respectively, and let M denote the total number of layers in the transformer. Then, the m-th layer hidden state h_m of a token sequence X is given by:

h_0 = W_w X + W_p
h_m = TB(h_{m-1}),  m ∈ [1, M]

where TB stands for the Transformer Block, which is a standard module that contains an MLP, residual connections, and LayerNorm (Ba et al., 2016). Compared to classical transformers, pre-trained transformers such as GPT (Radford et al., 2018) and BERT (Devlin et al., 2018) capture rich world and linguistic knowledge from large-scale external corpora, and significant improvements are obtained by fine-tuning these pre-trained models on a variety of downstream tasks. We follow this promising direction by fine-tuning GPT (Radford et al., 2018) on a target task. Note that the pre-trained transformer in our pipeline can easily be replaced with other pre-trained models, which however is not the focus of this paper.
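For concreteness, the layer-wise computation can be sketched as below. This is an illustrative single-head block in plain NumPy, not the actual implementation: the real GPT block uses masked multi-head attention with learned parameters, and all names and dimensions here are ours.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    mu = x.mean(-1, keepdims=True)
    sd = x.std(-1, keepdims=True)
    return (x - mu) / (sd + eps)

def transformer_block(h, params):
    # single-head self-attention (illustrative; GPT uses masked multi-head attention)
    d = h.shape[-1]
    q, k, v = h @ params["Wq"], h @ params["Wk"], h @ params["Wv"]
    scores = q @ k.T / np.sqrt(d)
    attn = np.exp(scores - scores.max(-1, keepdims=True))
    attn = attn / attn.sum(-1, keepdims=True)
    h = layer_norm(h + attn @ v)                            # residual + LayerNorm
    mlp = np.maximum(0.0, h @ params["W1"]) @ params["W2"]  # two-layer MLP
    return layer_norm(h + mlp)                              # residual + LayerNorm

def encode(token_ids, Ww, Wp, layers):
    h = Ww[token_ids] + Wp[: len(token_ids)]  # h_0 = W_w X + W_p
    for params in layers:                     # h_m = TB(h_{m-1})
        h = transformer_block(h, params)
    return h
```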
We use (X, Y) to denote all training data and (X_i, Y_i) to denote each instance, where X_i is a token sequence, namely, X_i = {X_i^1, ..., X_i^t}, where t equals the sequence length. For evidence sentence extraction, X_i contains one sentence in a document, a question, and all answer options associated with the question. Y_i indicates the probability that the sentence in X_i is selected as an evidence sentence for this question, with Σ_{i=1}^{N} Y_i = 1, where N equals the total number of sentences in the document. GPT takes as input X_i and produces the final hidden state h_i^M of the last token in X_i, which is further fed into a linear layer followed by a softmax layer over the N sentences to generate the probability:

P_i = softmax(W_y h_i^M)

where W_y is the weight matrix of the output layer. We use the Kullback-Leibler divergence KL(Y || P) as the training criterion. We first apply distant supervision to generate noisy evidence sentence labels (Section 2.2). To denoise the labels, during the training stage, we use deep probabilistic logic learning (DPL), a general framework for combining indirect supervision strategies by composing probabilistic logic with deep learning (Wang and Poon, 2018). Here we consider both sentence-level and cross-sentence linguistic indicators as indirect supervision strategies (Section 2.3).
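The sentence-scoring and training criterion can be sketched as follows, assuming the per-sentence final hidden states are given; `Wy` is simplified to a weight vector, and the function names are ours, not the paper's code.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def sentence_probs(final_states, Wy):
    # final_states: (N, d) final hidden state of the last token of each of the
    # N sentence-question-options sequences; Wy: (d,) output-layer weights
    return softmax(final_states @ Wy)

def kl_divergence(Y, P, eps=1e-12):
    # KL(Y || P), the training criterion against the noisy labels Y
    Y, P = np.asarray(Y, float) + eps, np.asarray(P, float) + eps
    return float(np.sum(Y * np.log(Y / P)))
```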
As shown in Figure 2, during training, our evidence sentence extractor contains two components: a probabilistic graph containing various sources of indirect supervision, used as a supervision module, and a fine-tuned GPT, used as a prediction module. The two components are connected via a set of latent variables indicating whether each sentence is an evidence sentence or not. We update the model by alternately optimizing GPT and the probabilistic graph so that they reach an agreement on the latent variables. After training, only the fine-tuned GPT is kept to make predictions for new instances during testing. We provide more details in Appendix A and refer readers to Wang and Poon (2018) for how to apply DPL as a tool in a downstream task such as relation extraction.

Silver Standard Evidence Generation
Given correct answer options, we use a distant supervision method to generate the silver standard evidence sentences.
Inspired by Integer Linear Programming (ILP) models for summarization (Berg-Kirkpatrick et al., 2011; Boudin et al., 2015), we model evidence sentence extraction as a maximum coverage problem and define the value of a selected sentence set as the sum of the weights of the unique words it contains. Formally, let v_i denote the weight of word i: v_i = 1 if word i appears in the correct answer option, v_i = 0.1 if it appears in the question but not in the correct answer option, and v_i = 0 otherwise. We use binary variables c_i and s_j to indicate the presence of word i and sentence j in the selected sentence set, respectively. Occ_{i,j} is a binary variable indicating the occurrence of word i in sentence j, l_j denotes the length of sentence j, and L is the predefined maximum number of selected sentences. We formulate the ILP problem as:

maximize    Σ_i v_i c_i
subject to  Σ_j s_j ≤ L
            s_j Occ_{i,j} ≤ c_i        ∀ i, j
            Σ_j s_j Occ_{i,j} ≥ c_i    ∀ i
            c_i ∈ {0, 1}, s_j ∈ {0, 1}
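Since exact ILP solving requires an external solver, the maximum-coverage objective can also be approximated greedily (with the standard 1 − 1/e guarantee for submodular coverage). The sketch below operates on token sets; the helper names and inputs are illustrative, not the solver used in the paper.

```python
def word_weights(sentence_words, question_words, answer_words):
    # v_i = 1 for words in the correct option, 0.1 for question-only words
    weights = {}
    for word in set().union(*sentence_words):
        if word in answer_words:
            weights[word] = 1.0
        elif word in question_words:
            weights[word] = 0.1
        else:
            weights[word] = 0.0
    return weights

def greedy_select(sentence_words, weights, max_sents):
    # greedily pick the sentence with the largest marginal word-weight gain
    chosen, covered = [], set()
    for _ in range(max_sents):
        gains = [(sum(weights.get(w, 0.0) for w in words - covered), j)
                 for j, words in enumerate(sentence_words) if j not in chosen]
        if not gains:
            break
        gain, best = max(gains)
        if gain <= 0:
            break
        chosen.append(best)
        covered |= sentence_words[best]
    return sorted(chosen)
```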

Linguistic Indicators for Indirect Supervision
To denoise the imperfect labels generated by distant supervision (Section 2.2), as a preliminary attempt, we manually design a small number of sentence-level and cross-sentence linguistic indicators and incorporate them in DPL for indirect supervision. We briefly introduce them below; all indicators are detailed in Appendix A.3 and implementation details in Section 3.2. We assume that a sentence is more likely to be an evidence sentence if it and the question have similar meanings, similar lengths, coherent entity types, or the same sentiment polarity, or if the sentence is true (i.e., entailed) given the question. We also assume that a good evidence sentence should be neither too long nor too short (i.e., 5 ≤ number of tokens in the sentence ≤ 40), considering informativeness and conciseness, and that an evidence sentence is more likely to lead to the prediction of the correct answer option (referred to as "reward"). The latter assumption is motivated by our experiments: machine readers that take as input the silver (or gold) standard evidence sentences achieve the best performance apart from human performance on three multiple-choice machine reading comprehension datasets (Table 2, Table 3, and Table 4 in Section 3). We rely on both lexical features (e.g., lengths and entity types) and semantic features based on pre-trained word/paraphrase embeddings and external knowledge graphs to measure similarity of meanings. We use existing models or resources for reward calculation, sentiment analysis, and natural language inference.
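Some of the sentence-level indicators reduce to simple functions over tokenized text, sketched below. The length band (5-40 tokens) comes from the assumption above, while the lexical-overlap measure is only a stand-in for the richer similarity, entity-type, sentiment, and entailment signals, which require external models.

```python
def sentence_level_indicators(sent_tokens, question_tokens, min_len=5, max_len=40):
    # length band from the assumption above; lexical overlap is a simple
    # proxy for the semantic-similarity indicators, which need external models
    question = set(question_tokens)
    overlap = len(set(sent_tokens) & question) / max(len(question), 1)
    return {
        "reasonable_length": min_len <= len(sent_tokens) <= max_len,
        "lexical_overlap": overlap,
    }
```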
For cross-sentence indicators, we consider that the same set of evidence sentences is less likely to support multiple questions and that two evidence sentences supporting the same question should be within a certain distance (i.e., evidence sentences for the same question should be within a window of 8 sentences). We also assume that adjacent sentences tend to have the same label. We discuss these assumptions further in the data analysis (Section 3.6).
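The distance and adjacency indicators translate into equally simple checks; the function names below are ours.

```python
def within_window(evidence_ids, window=8):
    # evidence sentences supporting one question should be close together
    return max(evidence_ids) - min(evidence_ids) <= window

def adjacent_agreement(labels):
    # fraction of adjacent sentence pairs with the same label
    pairs = list(zip(labels, labels[1:]))
    return sum(a == b for a, b in pairs) / max(len(pairs), 1)
```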

Datasets
We use the following three latest multiple-choice machine reading comprehension datasets for evaluation. We show data statistics in Table 1.
MultiRC (Khashabi et al., 2018): MultiRC is a dataset in which questions can only be answered by considering information from multiple sentences. A question may have multiple correct answer options. Reference documents come from seven different domains such as elementary school science and travel guides. For each document, questions and their associated answer options are generated and verified by turkers.
RACE (Lai et al., 2017): RACE is a dataset collected from English language exams carefully designed by English instructors for middle (RACE-Middle) and high school (RACE-High) students in China. The proportion of questions that require reasoning is 59.2%.
DREAM (Sun et al., 2019): DREAM is a dataset collected from English exams for Chinese language learners. Each instance in DREAM contains a multi-turn multi-party dialogue, and the correct answer option must be inferred from the dialogue context. In particular, a large portion of questions require multi-sentence inference (84%) and/or commonsense knowledge (34%).

Implementation Details
We use spaCy (Honnibal and Johnson, 2015) for tokenization and named entity tagging. We use the pre-trained transformer (i.e., GPT) released by Radford et al. (2018) with the same preprocessing procedure. When GPT is used as the neural reader, we set the number of training epochs to 4, use eight P40 GPUs for experiments on RACE, and use one GPU for experiments on the other datasets. When GPT is used as the evidence sentence extractor, we set the batch size to 1 per GPU and the dropout rate to 0.3. We keep the other parameters at their default values. Depending on the dataset, training the evidence sentence extractor generally takes several hours.
To calculate the probability that each sentence leads to the correct answer option, we sample subsets of sentences and use each subset to replace the full context in an instance, which we then feed into the transformer fine-tuned on instances with the full context. If a particular combination of sentences S = {s_1, . . . , s_n} leads to the prediction of the correct answer option, we reward each sentence in this set with 1/n. To avoid combinatorial explosion, we assume evidence sentences lie within a window of size 3. For the other neural reader, Co-Matching (Wang et al., 2018b), we use its default parameters. For DREAM and RACE, we set L, the maximum number of silver standard evidence sentences per question, to 3. For MultiRC, we set L to 5 since many questions have more than 5 ground truth evidence sentences.
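The reward computation described above can be sketched as follows, where `predicts_correctly` stands in for a call to the fine-tuned reader on a candidate sentence subset; the exact window semantics are our assumption.

```python
from itertools import combinations

def distribute_rewards(num_sents, predicts_correctly, window=3, max_size=3):
    # reward each sentence of a correct-answer-producing subset with 1/n,
    # restricting subsets to a small window to avoid combinatorial explosion
    rewards = [0.0] * num_sents
    for size in range(1, max_size + 1):
        for subset in combinations(range(num_sents), size):
            if max(subset) - min(subset) >= window:  # window-3 assumption
                continue
            if predicts_correctly(subset):  # stands in for the fine-tuned reader
                for j in subset:
                    rewards[j] += 1.0 / size
    return rewards
```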

Evaluation on MultiRC
Since its test set is not publicly available, currently we only evaluate our model on the development set (Table 2). The fine-tuned transformer (GPT) baseline, which takes as input the full document, achieves an improvement of 2.2% in macro-average F1 (F1m) over the previous highest score, 66.5%. If we train our evidence sentence extractor using the ground truth evidence sentences provided by turkers, we obtain a much higher F1m of 72.3%, even after removing nearly 66% of the sentences per document on average. We can regard this result as the supervised upper bound for our evidence sentence extractor. If we train the evidence sentence extractor with DPL as a supervision module, we get 70.5% in F1m. The gap between 70.5% and 72.3% shows there is still room for improving the denoising strategies.

Evaluation on RACE
As we cannot find any public implementations of recently published independent sentence selectors, we compare our evidence sentence extractor with InferSent, released by Conneau et al. (2017), as previous work (Htut et al., 2018) has shown that it outperforms many sophisticated state-of-the-art sentence selectors on a range of tasks. We also investigate the portability of our evidence sentence extractor by combining it with two neural readers. Besides the fine-tuned GPT baseline, we use Co-Matching (Wang et al., 2018b), another state-of-the-art neural reader on the RACE dataset.
As shown in Table 3, by using the evidence sentences selected by InferSent, we suffer up to a 1.9% drop in accuracy with Co-Matching and up to a 4.2% drop with the fine-tuned GPT. In comparison, by using the sentences extracted by our sentence extractor, which is trained with DPL as a supervision module, we observe a much smaller decrease (0.1%) in accuracy with the fine-tuned GPT baseline, and we slightly improve the accuracy with the Co-Matching baseline. For questions in RACE, introducing the content of answer options as additional information for evidence sentence extraction can narrow the accuracy gap, which might be due to the fact that many questions are less informative (Xu et al., 2018). Note that these results are relative to the 59% reported by Radford et al. (2018); compared with our own replication (56.8%), the sentence extractor trained with either DPL or distant supervision leads to gains of up to 2.1%.

Table 2: Results on the MultiRC development set.

Approach | F1m | F1a | EM0
All-ones baseline (Khashabi et al., 2018) | 61.0 | 59.9 | 0.8
Lucene world baseline (Khashabi et al., 2018) | 61.8 | 59.2 | 1.4
Lucene paragraphs baseline (Khashabi et al., 2018) | 64.3 | 60.0 | 7.5
Logistic regression (Khashabi et al., 2018) | 66.5 | 63.2 | 11.8
Full context + Fine-Tuned Transformer (GPT, Radford et al., 2018) | - | - | -
Since the questions in RACE are designed for human examinees and require advanced reading comprehension skills such as the utilization of external world knowledge and in-depth reasoning, even human annotators sometimes have difficulty locating evidence sentences (Section 3.6). Therefore, a limited number of evidence sentences might be insufficient for answering challenging questions. Instead of removing "non-relevant" sentences, we keep all the sentences in a document while adding a special token before and after the extracted evidence sentences. With DPL as a supervision module, we see an improvement in accuracy of 0.9% (from 58.9% to 59.8%).
For our current supervised upper bound (i.e., assuming we know the correct answer option), we find the silver standard evidence sentences via rule-based distant supervision and then feed them into the fine-tuned transformer, obtaining 72.8% in accuracy, which is quite close to the performance of Amazon Turkers but still much lower than the ceiling performance. To answer questions that require external knowledge, it might be a promising direction to retrieve evidence sentences from external resources, rather than only considering sentences within a reference document, for multiple-choice machine reading comprehension tasks.

Evaluation on DREAM
See Table 4 for results on the DREAM dataset. The fine-tuned GPT baseline, which takes as input the full document, achieves 55.1% in accuracy on the test set. If we train our evidence sentence extractor with DPL as a supervision module and feed the extracted evidence sentences to the fine-tuned GPT, we obtain a test accuracy of 57.7%. Similarly, if we train the evidence sentence extractor only with the silver standard evidence sentences extracted by the rule-based distant supervision method, we obtain a test accuracy of 56.3%, i.e., 1.4% lower than with DPL as a supervision module. These experiments demonstrate the effectiveness of our evidence sentence extractor with the denoising strategy, and the usefulness of evidence sentences for dialogue-based machine reading comprehension tasks, in which reference documents are less formal than those in RACE and MultiRC.

Human Evaluation
Extracted evidence sentences, which help neural readers to find correct answers, may still fail to convince human readers. Thus we evaluate the quality of extracted evidence sentences based on human annotations (Table 5).

Table 5: Macro-average F1 compared with human-annotated evidence sentences on the dev set (silver sentences: evidence sentences extracted by ILP (Section 2.2); sentences by ESE_DPL: evidence sentences extracted by the evidence sentence extractor trained on silver standard labels; GT: ground truth evidence sentences).

Dataset | Silver Sentences | Sentences by ESE_DPL
RACE-M | 59.9 | 57.5
MultiRC | 53.0 | 60.8
MultiRC: Even trained on the noisy labels, we achieve a macro-average F1 score of 60.8% on MultiRC, compared to 53.0% achieved by directly using the noisy silver standard evidence sentences guided by correct answer options, indicating the learning and generalization capabilities of our evidence sentence extractor.

RACE: Since RACE does not provide ground truth evidence sentences, two human annotators annotated 500 questions from the RACE-Middle development set. The Cohen's kappa coefficient between the two annotations is 0.87. For negation questions, which include negation words (e.g., "Which statement is not true according to the passage?"), we have two annotation strategies: we can either find sentences that directly imply the correct answer option, or sentences that support the wrong answer options. During annotation, for each question, we use the strategy that leads to fewer evidence sentences.

We find that even humans have trouble locating evidence sentences when the relationship between a question and its correct answer option is only implicitly indicated in the document. For example, a significant number of questions require understanding of the entire document (e.g., "what's the best title of this passage" and "this passage mainly tells us that ...") and/or external knowledge (e.g., "the writer begins with the four questions in order to ...", "The passage is probably from ...", and "If the writer continues the article, he would most likely write about ..."). For 10.8% of the questions, at least one annotator left the slot blank due to the challenges mentioned above. 65.2% of the questions contain at least two evidence sentences, and 70.9% of these questions contain at least one adjacent sentence pair among their evidence sentences, which supports our assumption in Section 2.3 that adjacent sentences tend to have the same label.
The average and maximum numbers of evidence sentences for the remaining questions are 2.1 and 8, respectively. The average number of evidence sentences over the full RACE dataset should be higher, since questions in RACE-High are more difficult (Lai et al., 2017) and we ignore the 10.8% of questions that require understanding of the whole context.
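For reference, the Cohen's kappa reported above (0.87) follows the standard formula over per-item annotation labels; a minimal sketch:

```python
def cohens_kappa(labels_a, labels_b):
    # kappa = (p_o - p_e) / (1 - p_e): observed vs. chance-level agreement
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    categories = set(labels_a) | set(labels_b)
    p_e = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
              for c in categories)
    return 1.0 if p_e == 1.0 else (p_o - p_e) / (1.0 - p_e)
```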

Error Analysis
We analyze the predicted evidence sentences for instances in RACE for error analysis. Though our method achieves a high macro-average recall (67.9%), it sometimes extracts sentences that support distractors. For example, to answer the question "You lost your keys. You may call ...", our system mistakenly extracts the sentence "Please call 5016666", which supports one of the distractors and is adjacent to the correct evidence sentence "Found a set of keys. Please call Jane at 5019999." in the given document. We may need linguistic constraints or indicators to filter out irrelevant selected sentences instead of simply setting a hard length constraint such as 5 for all instances in a dataset.
Besides, it is possible that no sentence in the document clearly justifies the correctness of the correct answer option. For example, to answer the question "What does "figure out" mean?", neither "find out" nor the correct answer option appears in the given document, as this question mainly assesses the vocabulary acquisition of human readers. Therefore, all the sentences extracted by our method (e.g., "sometimes... sometimes I feel lonely, like I'm by myself with no one here." and "sometimes I feel excited, like I have some news I have to share!") are inappropriate. A possible solution is to first predict whether a question is answerable, following previous work (e.g., Hu et al., 2019) on addressing unanswerable questions in extractive machine reading comprehension tasks such as SQuAD (Rajpurkar et al., 2018), before extracting evidence sentences for it.

Sentence Selection for Machine Reading Comprehension and Fact Verification
Previous studies investigate paragraph retrieval for factoid question answering (Chen et al., 2017; Wang et al., 2018c; Choi et al., 2017; Lin et al., 2018), sentence selection for machine reading comprehension (Min et al., 2018), and fact verification (Yin and Roth, 2018; Hanselowski et al., 2018). In these tasks, most factual questions/claims provide sufficient clues for identifying relevant sentences, so information retrieval combined with filters can often serve as a very strong baseline. For example, in the FEVER dataset (Thorne et al., 2018), only 16.8% of claims require the composition of multiple evidence sentences. For some cloze-style machine reading comprehension tasks such as CBT (Hill et al., 2016), Kaushik and Lipton (2018) demonstrate that for some models, comparable performance can be achieved by considering only the last sentence, which usually contains the answer. Different from the above work, we exploit the information in answer options and use various forms of indirect supervision to train our evidence sentence extractor; previous work can actually be regarded as a special case of our pipeline. Compared to Lin et al. (2018), we leverage rich linguistic knowledge for denoising imperfect labels. Several studies also investigate content selection at the token level (Yu et al., 2017; Seo et al., 2018), in which some tokens are automatically skipped by neural models. However, they do not utilize any linguistic knowledge, and a set of discontinuous tokens has limited explanation capability.

Machine Reading Comprehension with External Linguistic Knowledge
Linguistic knowledge such as coreference resolution, frame semantics, and discourse relations is widely used to improve machine comprehension (Sachan et al., 2015; Narasimhan and Barzilay, 2015), especially when only hundreds of documents are available in a dataset such as MCTest (Richardson et al., 2013). Along with the creation of large-scale reading comprehension datasets, recent machine reading comprehension models rely on end-to-end neural architectures that primarily use word embeddings as input. However, Wang et al. (2016) and Dhingra et al. (2017, 2018) show that existing neural models do not fully exploit linguistic knowledge, which is still valuable for machine reading comprehension. Besides widely used lexical features such as part-of-speech tags and named entity types (Wang et al., 2016; Dhingra et al., 2017, 2018), we consider more diverse types of external knowledge for performance improvements. Moreover, instead of using external knowledge as additional features, we accommodate external knowledge with probabilistic logic to potentially improve the interpretability of MRC models.

Explainable Machine Reading Comprehension and Question Answering
To improve the interpretability of question answering, previous studies utilize interpretable internal representations (Palangi et al., 2017) or reasoning networks that employ a hop-by-hop reasoning process dynamically (Zhou et al., 2018). Another line of research focuses on visualizing the whole derivation process from the natural language utterance to the final answer for question answering over knowledge bases (Abujabal et al., 2017) or scientific word algebra problems (Ling et al., 2017). Jansen et al. (2016) extract explanations that describe the inference needed for elementary science questions (e.g., "What form of energy causes an ice cube to melt"). In comparison, the derivation sequence is less apparent for open-domain questions, especially when they require external domain knowledge or multi-sentence reasoning. To improve explainability, we could also inspect the attention maps learned by neural readers (Wang et al., 2016); however, attention maps are learned in an end-to-end fashion, which differs from our work. A similar work by Sharp et al. (2017) also uses distant supervision to learn how to extract informative justifications. However, their experiments are primarily designed for factoid question answering, in which it is relatively easy to extract justifications since most questions are informative. In comparison, we focus on multiple-choice MRC that requires deep understanding, and we pay particular attention to denoising strategies.

Conclusions
We focus on extracting evidence sentences for multiple-choice MRC tasks, which has not been studied before. We propose to apply distant supervision to generate noisy evidence sentence labels and a deep probabilistic logic framework that incorporates linguistic indicators to denoise these labels during training. To indirectly evaluate the quality of the extracted evidence sentences, we feed them as input to two existing neural readers. Experimental results show that we can achieve comparable or better performance on three multiple-choice MRC datasets, in comparison with the same readers taking as input the entire document. However, there still exist significant differences between the predicted sentences and the ground truth sentences selected by humans, indicating room for further improvement.