Textbook Question Answering with Multi-modal Context Graph Understanding and Self-supervised Open-set Comprehension

In this work, we introduce a novel algorithm for solving the textbook question answering (TQA) task, which describes more realistic QA problems than other recent tasks. We mainly focus on two issues identified through an analysis of the TQA dataset. First, solving TQA problems requires comprehending multi-modal contexts in complicated input data. To extract knowledge features from long text lessons and merge them with visual features, we establish a context graph from texts and images, and propose a new module, f-GCN, based on graph convolutional networks (GCN). Second, in the TQA dataset, scientific terms are not spread evenly over the chapters, and subjects are split across lessons. To overcome this so-called 'out-of-domain' issue, we introduce a novel self-supervised open-set learning process, requiring no annotations, before learning the QA problems. Experimental results show that our model significantly outperforms prior state-of-the-art methods. Moreover, ablation studies validate that both incorporating f-GCN to extract knowledge from multi-modal contexts and our newly proposed self-supervised learning process are effective for TQA problems.


Introduction
Over the last decade, question answering (QA) has been one of the most promising achievements in the field of natural language processing (NLP), and it has shown great potential for application to real-world problems. In order to solve more realistic QA problems, the input types in datasets have evolved into various combinations. Recently, Visual Question Answering (VQA) has drawn huge attention as it lies at the intersection of vision and language.

* Equal contribution. † Corresponding author. This work was supported by the Next-Generation Information Computing Development Program through the National Research Foundation of Korea (NRF-2017M3C4A7078547).
Figure 1: Examples of the textbook question answering task and a brief concept of our work. The figure shows lessons in the TQA that contain long essays and diagrams, together with related questions. With a self-supervised method, our model comprehends contexts converted into context graphs in the training and validation sets; it then learns to solve questions only in the training set in a supervised manner.

Table 1: Comparison of data types in the context and question parts for context QA, VQA and TQA. The data format of the TQA task is the most complicated in both the context and question parts.
However, Textbook Question Answering (TQA) is a more complex and more realistic problem, as shown in Table 1. Compared to context QA and VQA, TQA uses both text and image inputs in both the context and the question. The TQA task describes the real-life process of a student who learns new knowledge from books and practices solving related problems (Figure 1). It also has several novel characteristics as a realistic dataset. Since TQA contains visual as well as textual contents, it requires solving multi-modal QA. Moreover, the question formats are varied, including both text-related questions and diagram-related questions. In this paper, we focus on the following two major characteristics of the TQA dataset (Kembhavi et al., 2017).
First, compared to other QA datasets, the context part of TQA is more complex in terms of data format and length. Multi-modality of the context exists even in non-diagram questions, and answering requires comprehending long lessons to obtain knowledge. Therefore, it is important to extract the exact knowledge needed from long texts and arbitrary images. We establish a multi-modal context graph and propose a novel module based on graph convolutional networks (GCN) (Kipf and Welling, 2016) to extract the proper knowledge for solving questions.
Next, various topics and subjects in the textbooks are spread over chapters and lessons, most of the knowledge and terminology do not overlap between chapters, and subjects are split across lessons. Therefore, it is very difficult to solve problems on subjects that have not been studied before. To resolve this problem, we encourage our model to learn novel concepts and terms in a self-supervised manner before learning to solve specific questions.
Our main contributions can be summarized as follows:
• We propose a novel architecture which can solve TQA problems, which have the highest level of multi-modality.
• We suggest a fusion GCN (f-GCN) to extract knowledge features from the multi-modal context graph built from long lessons and images in the textbook.
• We introduce a novel self-supervised learning process into TQA training to comprehend the open-set dataset and tackle the out-of-domain issue.
With the proposed model, we obtain state-of-the-art performance on the TQA dataset, outperforming the previous state-of-the-art methods by a large margin.
Related Work

Context question answering
Context question answering, also known as machine reading comprehension, is a challenging task which requires a machine not only to comprehend natural language but also to reason about how to answer the asked question correctly. Large datasets such as MCTest (Richardson et al., 2013), SQuAD (Rajpurkar et al., 2016) and MS MARCO (Nguyen et al., 2016) have contributed significantly to textual reasoning via deep learning approaches. These datasets, however, are restricted to a small set of contents and contain only uni-modal problems requiring textual information alone. In addition, they require relatively less complex parsing and reasoning compared to the TQA dataset (Kembhavi et al., 2017). In this study, we tackle TQA, a set of practical middle-school science problems spanning multiple modalities, by transforming long essays into customized graphs for solving the questions on a textbook.

Visual question answering
As the intersection of computer vision, NLP and reasoning, visual question answering has drawn attention in the last few years. Most of the pioneering works in this area (Xu and Saenko, 2016; Yang et al., 2016; Lu et al., 2016) tackled the problem with attention mechanisms, and more recent works (Brown et al., 2018) also have dealt with graph structures to solve VQA problems.

Figure 3: Overall framework of our model: (a) The preparation step for the k-th answer among n candidates. The context m is determined by the TF-IDF score with the question and the k-th answer. Then, the context m is converted to a context graph m. The question and the k-th answer are also embedded by GloVe and character embedding. This step is repeated for n candidates. (b) The embedding step uses RNN_C as a sequence embedding module and f-GCN as a graph embedding module. With attention methods, we can obtain combined features. After concatenation, RNN_S and the fully connected module predict the final distribution in the solving step.

Problem
Formally, our problem can be defined as

$$\hat{a} = \operatorname*{argmax}_{a \in \Omega_a} p(a \mid C, q; \theta), \qquad (1)$$

where C is the given contexts, which consist of textual and visual contents, and q is the given question, which can contain a question diagram for diagram problems. θ denotes the trainable parameters. Given C and q, we predict the best answer â among the set of possible answers Ω_a. The TQA contexts contain almost all items in the textbooks: topic essays, diagrams and images, lesson summaries, vocabularies, and instructional videos. Among them, we mainly use topic essays as textual contexts and diagrams as visual contexts.
Among various issues, the first problem we tackle is the complexity of the contexts and the variety of data formats, as shown in Table 1. In particular, the analysis of textual context in Figure 2(a) shows that the average context length in TQA is 668 words, almost 5 times longer than that of SQuAD, which has 134 words on average. Also, the analysis of information scope in the TQA dataset (Kembhavi et al., 2017) provides two important clues: about 80% of text questions need only 1 paragraph, and about 80% of diagram questions need only 1 context image and 1 paragraph. Based on this evidence, we add an information retrieval step using TF-IDF (term frequency-inverse document frequency) to narrow the scope of the context from a lesson down to a paragraph, which significantly reduces the complexity of the problem. Moreover, a graph structure is suitable for representing logical relations between scientific terms and for merging them with visual contexts from diagrams. As a result, we decide to build a multi-modal context graph and obtain knowledge features from it.
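To make the retrieval step concrete, the following is a minimal sketch of TF-IDF paragraph selection using scikit-learn; the function and variable names are our own illustration, not the authors' released code:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def select_paragraph(paragraphs, question, answer):
    """Return the lesson paragraph closest to [question; answer] by TF-IDF."""
    query = question + " " + answer
    vectorizer = TfidfVectorizer(stop_words="english")
    # Fit on the lesson paragraphs and the query together so they share a vocabulary.
    matrix = vectorizer.fit_transform(paragraphs + [query])
    para_vecs, query_vec = matrix[:-1], matrix[-1]
    scores = cosine_similarity(para_vecs, query_vec).ravel()
    return paragraphs[scores.argmax()], scores

# One query per answer candidate, as in the preparation step of Figure 3(a):
# best, _ = select_paragraph(lesson_paragraphs, question_text, candidate_answer)
```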
In Figure 2(b), we measure the percentage of terms in the validation set that also appear in the training set. The ratio for TQA (79%) is clearly lower than that for SQuAD (84%), which can induce more serious out-of-vocabulary and out-of-domain problems in the TQA task. To avoid these issues, we apply a novel self-supervised learning process before learning to solve questions.

Proposed Method
Figure 3 illustrates our overall framework, which consists of three steps. In the preparation step, we use TF-IDF to select the paragraph most relevant to the given question and candidate answers. Then, we convert it into two types of context graphs, one for text and one for images. In the embedding step, we exploit an RNN (denoted as RNN_C in the figure) to embed the textual inputs, a question and an answer candidate. Then, we incorporate f-GCN to extract graph features from both the visual and the textual context graphs. After repeating the previous steps for each answer candidate, we stack the concatenated features from the embedding step. We exploit another RNN (RNN_S) to cope with the variable number of answer candidates, which ranges from 2 to 7 and can have sequential relations such as "none of the above" or "all of the above" in the last choice. Final fully connected layers decide the probabilities of the answer candidates. Note that the notation policies are included in the supplementary.

Visual and Textual Context graphs
For the visual contexts and the question diagrams, we build a visual context graph using UDPnet (Kim et al., 2018). We obtain the names, counts, and relations of entities in the diagrams, and then establish edges between related entities. Only for question diagrams, we transform the counts of entities into sentences such as "There are 5 objects" or "There are 6 stages". We build the textual context graphs from the parts of the lesson on which the questions can focus, as follows. Each lesson can be divided into multiple paragraphs, and we extract the one paragraph that has the highest TF-IDF score against the concatenation of the question and one of the candidate answers (leftmost of Figure 3(a)).
Then, we build dependency trees of the extracted paragraph using the Stanford dependency parser (Manning et al., 2014) and designate the words which appear in the question and the candidate answer as anchor nodes. Nodes which are more than two levels of depth away from the anchor nodes are removed, and we build the textual context graph from the remaining nodes and edges (Process 1 in the supplementary).
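As an illustration, the following sketch prunes a dependency graph around anchor nodes with networkx, assuming dependency edges have already been produced by a parser; names such as build_context_graph are hypothetical:

```python
import networkx as nx

def build_context_graph(dep_edges, anchor_words, max_depth=2):
    """dep_edges: (head, dependent, relation) triples from a dependency parser.
    Keep only nodes within max_depth hops of an anchor word."""
    g = nx.Graph()
    for head, dep, rel in dep_edges:
        g.add_edge(head, dep, rel=rel)
    anchors = [w for w in anchor_words if w in g]
    keep = set(anchors)
    for a in anchors:
        # All nodes reachable from the anchor within max_depth hops.
        keep |= set(nx.single_source_shortest_path_length(g, a, cutoff=max_depth))
    sub = g.subgraph(keep).copy()
    return sub, nx.to_numpy_array(sub)  # pruned graph and its adjacency matrix

# edges = [("stores", "DNA", "nsubj"), ("stores", "information", "obj"), ...]
# graph, adj = build_context_graph(edges, anchor_words)
```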

Graph Understanding using f-GCN
Next, we propose f-GCN to extract combined graph features for the visual and textual context graphs, as shown in Figure 4. Each context graph has its own graph matrix C containing node features and a normalized adjacency matrix Â, which are used as the inputs of a GCN to comprehend the contexts. Here, the graph matrix C is composed of the word embeddings and the character representations. First, we extract propagated graph features from both context graphs with a one-layer GCN:

$$H^t_c = \sigma(\hat{A}^t C^t W^t), \qquad H^d_c = \sigma(\hat{A}^d C^d W^d), \qquad (2)$$

where Â^t and Â^d are the adjacency matrices for the textual and visual contexts, W^t and W^d are the learnable parameters of the linear layers for the textual and visual contexts, and the element-wise operation σ is the tanh activation function. After that, we use a dot product to get the attention matrix Z of the visual context H^d_c against the textual context H^t_c, which contains the main knowledge. Then we concatenate the textual context features H^t_c and the weighted sum Z^T H^d_c to obtain the entire context features:

$$Z = \operatorname{softmax}\!\left(H^d_c (H^t_c)^\top\right), \qquad H^{f1}_c = [\,H^t_c \,;\, Z^\top H^d_c\,], \qquad (3)$$

where [· ; ·] is the concatenation operator. Compared to the textual-context-only case, we obtain double-sized features which can be more informative. Finally, we use a GCN again to propagate over the entire features of the context graphs:

$$H^{f2}_c = \sigma(\hat{A}^t H^{f1}_c W^f). \qquad (4)$$

We denote the module without the last GCN as f-GCN1 (eq. (3)) and the whole module including the last GCN as f-GCN2 (eq. (4)).
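A minimal PyTorch sketch of this module, under our reading of the equations above; the layer sizes and the softmax normalization of Z are our assumptions, not confirmed details of the released implementation:

```python
import torch
import torch.nn as nn

class FusionGCN(nn.Module):
    """f-GCN sketch: one GCN layer per modality, cross-modal attention,
    concatenation (f-GCN1), and an optional second GCN pass (f-GCN2)."""
    def __init__(self, in_dim, hid_dim, use_second_gcn=True):
        super().__init__()
        self.w_t = nn.Linear(in_dim, hid_dim, bias=False)            # W^t
        self.w_d = nn.Linear(in_dim, hid_dim, bias=False)            # W^d
        self.w_f = nn.Linear(2 * hid_dim, 2 * hid_dim, bias=False)   # W^f
        self.use_second_gcn = use_second_gcn  # False -> f-GCN1, True -> f-GCN2

    def forward(self, c_t, a_t, c_d, a_d):
        # Eq. (2): one-layer GCN propagation per modality.
        h_t = torch.tanh(a_t @ self.w_t(c_t))        # (K_t, hid)
        h_d = torch.tanh(a_d @ self.w_d(c_d))        # (K_d, hid)
        # Eq. (3): attend visual nodes against textual nodes,
        # normalizing over the visual nodes for each textual node.
        z = torch.softmax(h_d @ h_t.T, dim=0)        # (K_d, K_t)
        h_f1 = torch.cat([h_t, z.T @ h_d], dim=-1)   # (K_t, 2*hid)
        if not self.use_second_gcn:
            return h_f1
        # Eq. (4): propagate the fused features over the textual graph.
        return torch.tanh(a_t @ self.w_f(h_f1))
```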

Multi-modal Problem Solving
The f-GCN and RNNs are used to embed the contexts and answer the questions, as shown in Figure 3(b). Two different RNNs are used in our architecture: the comprehending RNN (RNN_C), which understands questions and candidate answers, and the solving RNN (RNN_S), which answers the questions. The input of RNN_C is comprised of the word embedding, the character representation and the occurrence flag for both questions and candidate answers. In the word embedding, each word is represented as e^q_i / e^a_i using a pre-trained word embedding method such as GloVe (Pennington et al., 2014). The character representation c^q_i / c^a_i is calculated by feeding randomly initialized character embeddings into a CNN with a max-pooling operation. The occurrence flag f^q_i / f^a_i indicates whether the word occurs in the contexts or not. Our final input representation q^w_i for the question word q_i in RNN_C is composed of the three components as follows:

$$q^w_i = [\,e^q_i \,;\, c^q_i \,;\, f^q_i\,], \quad e^q_i = \mathrm{Emb}(q_i), \quad c^q_i = \text{Char-CNN}(q_i), \qquad (5)$$

where Emb is the trainable word embedding and Char-CNN is the character-level convolutional network. The input representation for the candidate answers is obtained in the same way as the one for the question. To extract proper representations for the questions and candidate answers, we apply a step-wise max-pooling operation over the RNN_C hidden features. Given the question and candidate answer representations, we use an attention mechanism to focus on the relevant parts of the contexts for solving the problem correctly. The attentive information Att_q of the question representation h_q against the context features H_c from (3) or (4) is calculated as

$$M = \operatorname{softmax}\!\left(h_q H_c^\top\right), \qquad Att_q = M H_c, \qquad (6)$$

where K is the number of words in the context C, which equals the dimension of the square adjacency matrix A, and M is the attention matrix that converts the question into the context space. The attentive information of the candidate answers, Att_a, is calculated in the same way as Att_q. RNN_S solves the problems, and its input consists of the representations of the question and the candidate answer together with their attentive information on the contexts:

$$I^t_{RNN_S} = [\,h^t_q \,;\, Att_q \,;\, h_a \,;\, Att_a\,], \qquad I^d_{RNN_S} = [\,h^t_q \,;\, h^d_q \,;\, Att_q \,;\, h_a \,;\, Att_a\,], \qquad (7)$$

where I^t_{RNN_S} is for the text questions and I^d_{RNN_S} is for the diagram questions. Finally, based on the outputs of RNN_S, we use one fully-connected layer followed by a softmax function to obtain a probability distribution over the candidate answers and optimize with the cross-entropy loss.

Figure 5: The self-supervised open-set comprehension step in our model. We set contexts as the candidates we should predict for the question and the k-th answer. For each answer, we obtain n context candidates from the TF-IDF method and set the top-1 candidate as the correct context. While we use the same structure as in Figure 3, we can predict the final distribution after all the steps.
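A rough sketch of the attention step in eq. (6), assuming h_q is a single pooled question vector and h_c the K×d context features; this is our illustration of the computation, not the authors' code:

```python
import torch

def attend(h_q, h_c):
    """Eq. (6): project a pooled query vector into the context space.
    h_q: (d,) pooled question (or answer) representation; h_c: (K, d) context features."""
    scores = h_c @ h_q                # (K,) similarity to each context node
    m = torch.softmax(scores, dim=0)  # attention over the K context words
    att = m @ h_c                     # (d,) attentive context summary
    return att

# Input to RNN_S for a text question, eq. (7):
# i_t = torch.cat([h_q, attend(h_q, h_c), h_a, attend(h_a, h_c)])
```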

Self-supervised open-set comprehension
To comprehend out-of-domain contexts, we propose a self-supervised prior learning method, as shown in Figure 5. While we exploit the same architecture described in the previous section, we reverse the roles of the candidate answers and the contexts in (1), turning the problem into a self-supervised one.
In other words, we set the problem as inferring the top-1 context for the chosen answer candidate. We assume TF-IDF to be quite reliable in measuring the closeness between texts. The newly defined self-supervised problem can be formalized as

$$\hat{c} = \operatorname*{argmax}_{\omega \in \Omega_c} p(\omega \mid q, A_k; \theta),$$

where A_k is the given k-th answer candidate among n candidates and q is the given question. We then infer the most related context ĉ among a set of contexts Ω_c in a lesson.
For each candidate answer A_k (k = 1, ..., n), we get the set of paragraphs Ω_c of size j from the corresponding context. Here, Ω_c is obtained by calculating the TF-IDF score between [q; A_k] and each paragraph ω, i.e., T_ω = tf-idf([q; A_k], ω), and selecting the top-j paragraphs. Among the j paragraphs ω_i (i = 1, ..., j) in Ω_c, the one with the highest TF-IDF score is set as the ground truth:

$$\hat{\omega} = \operatorname*{argmax}_{\omega_i \in \Omega_c} T_{\omega_i}.$$

With A_k, q and ω_i ∈ Ω_c, we conduct the same process as in eqs. (2)-(7) to obtain the i-th input of RNN_S. After repeating it j times, we feed all I^i_{RNN_S} (i = 1, ..., j) into RNN_S sequentially and optimize this step with the cross-entropy loss. We repeatedly choose every answer candidate A_k and conduct the same process.
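The pseudo-label construction for this step can be sketched as follows, with a generic tfidf_score function standing in for the actual scorer; the candidate shuffle is our own guard against leaking the rank order, not a detail stated in the paper:

```python
import random

def build_ssoc_example(paragraphs, question, answer, tfidf_score, max_j=7):
    """Turn context retrieval into a self-supervised classification problem:
    the top-1 TF-IDF paragraph is the 'correct answer' among the top-j."""
    j = min(random.randint(2, max_j), len(paragraphs))
    query = question + " " + answer
    ranked = sorted(paragraphs, key=lambda p: tfidf_score(query, p), reverse=True)
    candidates = ranked[:j]
    random.shuffle(candidates)            # hide the original TF-IDF ranking
    label = candidates.index(ranked[0])   # index of the top-1 paragraph
    return candidates, label
```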
With this pre-training stage which shares parameters with the supervised stage, we expect that our model can deal with almost all contexts in a lesson. Moreover, it becomes possible to learn contexts in the validation set or the test set with a self-supervised manner. This step is analogous to a student who reads and understands a textbook and problems in advance.

Dataset
We perform experiments on the TQA dataset, which consists of 1,076 lessons from Life Science, Earth Science and Physical Science textbooks. The dataset contains 78,338 sentences and 3,455 images including diagrams, and it has 26,260 questions, 12,567 of which have an accompanying diagram; it is split into training, validation and test sets at the lesson level. The training set consists of 666 lessons and 15,154 questions, the validation set of 200 lessons and 5,309 questions, and the test set of 210 lessons and 5,797 questions. Since the evaluation for the test set is hidden, we only use the validation set to evaluate our methods.

Baselines
We compare our method with several recent methods, as follows:

Comparison of Results
Overall results on the TQA dataset are shown in Table 2. The results show that all variants of our model outperform other recent models on all question types. Our best model achieves about 4% higher overall accuracy than the state-of-the-art model. In particular, the accuracy on text questions significantly outperforms previous results, with a margin of about 8%. The result on diagram questions also shows an increase of more than 1% over the previous best model. We believe that our two novel proposals, context graph understanding and self-supervised open-set comprehension, work well on this problem, since our models achieve significant margins over recent research. Even though our model w/o visual context only uses a one-layer GCN for the textual context, it outperforms MemN+VQA and MemN+DPG by a large margin and IGMN by about 3%. IGMN also exploits a graph module of contradiction, but ours performs better, especially on both text problem types, T/F and MC, with a margin of over 5%. We believe this is because the graph in our method directly represents the features of the context, and the GCN plays an important role in extracting the features of our graph.
Our models with multi-modal contexts show significantly better results on both text and diagram questions. In particular, the results on diagram questions are more than 1% better than those of our model w/o visual context. These results indicate that f-GCN successfully exploits visual contexts to solve diagram questions.

Ablation Study
We perform ablation experiments in Table 2. Our full model w/ f-GCN2 achieves the best score on diagram questions but slightly lower scores on text questions. Since our full model records the best overall result, we conduct the ablation study on each of its modules.
First, we observe an apparent decrease in performance when any module is eliminated. Notably, the self-supervised open-set comprehension method provides a clear improvement: our full model shows about 2% higher performance than the model without SSOC(TR+VAL). It is also interesting to compare our full model with the model without SSOC(VAL). The results show that using the additional validation set in SSOC improves overall accuracy compared to using only the training set; learning about unknown data in advance appears to be advantageous.
Our model without f-GCN & SSOC eliminates both of our novel modules and replaces the GCN with a vanilla RNN. That model shows a 1% performance degradation compared with the model without SSOC(TR+VAL), which suggests that an RNN and attention module alone are not sufficient to handle the knowledge features; the context graph we create for each lesson provides proper representations through the f-GCN module. Table 3 shows the results of the ablation study on the occurrence flag. None of these models use the SSOC method. In (5), we concatenate three components, including the occurrence flag, to create the question and answer representations. We find that the occurrence flag, which explicitly indicates the existence of a corresponding word in the contexts, has a meaningful effect: results on all question types degrade significantly when the occurrence flags are ablated. In particular, eliminating the a-flag drops accuracy by about 7%, almost 4 times the decrease caused by eliminating the q-flag. We believe that the disentangled features of the answer candidates mainly determine the results, while a question feature equally affects the features of all candidates. Our model without both flags shows the lowest results due to the loss of representational power.

Qualitative Results
Figure 6 shows three qualitative results of text-type questions without visual context, illustrating the textual contexts, questions, answer candidates and related subgraphs of the context graphs.

Figure 6: Qualitative results of text-type questions without visual context. Each example shows all items for a question in the textbook and the textual context subgraph used to solve the question; our predicted answer distributions and the ground truths are also displayed. In the subgraphs, gray circles represent words in questions and blue circles represent words related to answers. Green rectangles represent relation types of the dependency graph.

The first example describes the pipeline on a T/F question. Three words, "currents", "core" and "convection", are set as anchor nodes, as shown on the left of Figure 6. Within two levels of depth, we can find the "outer" node, which is the opposite of "inner" in the question sentence ("convection currents occur in the inner core"). As a result, our model predicts the true and false probabilities of this question as 0.464 and 0.536, respectively, and correctly solves the problem as a false statement. The next example is a multiple-choice problem, which is more complicated than a T/F problem. With anchor nodes consisting of each answer candidate and question words such as "causes", "erosion" and "soil", the context graph is established from the nodes within two depth levels of the anchor nodes. Among the 4 candidates, choice (d) contains the same words, "running" and "water", as our model predicts. Therefore, our model estimates (d) as the correct answer with the highest probability of 0.455. The last example shows a more complicated multiple-choice problem. In the context graph, we set "organelle", "recycles", "molecules" and "unneeded" as anchor nodes together with each word in the answer candidates. Then we can easily find an important term, "lysosome", in choice (a). Therefore, choice (a) has a probability close to one among the 7 candidates. Figure 7 demonstrates qualitative results of diagram questions. We exclude relation-type nodes in the subgraphs of the dependency tree for simplicity and also illustrate the diagram parsing graphs of the visual contexts and the question diagram. The example at the top shows intermediate results (subgraphs) for a diagram question without visual context. Even though the chosen paragraph in the textual context does not include "asthenosphere", the graph of the question diagram contains the relation between "asthenosphere" and "lithosphere". Our model can then predict (a) as the correct answer with a probability of 0.383. The bottom illustration describes the most complex case, which has diagrams in both the context and question parts. We illustrate all subgraphs of the text and diagrams. Since our model can collect sufficient knowledge about cell structure over a broad information scope, "cell membrane" is chosen as the correct answer with the highest probability.
These examples demonstrate the abstraction ability and relational expressiveness which are major advantages of graphs. Moreover, the results support that our model can explicitly interpret the process of solving multi-modal QA.

Conclusion
In this paper, we proposed two novel methods to solve a realistic task, the TQA dataset. We extract knowledge features with the proposed f-GCN and conduct self-supervised learning to overcome the out-of-domain issue. Our method demonstrates state-of-the-art results. We believe that our work can be a meaningful step both in realistic multi-modal QA and in tackling the out-of-domain issue.

A Notations
We denote the question text, question diagram, candidate answer, text context and diagram context as $Q^t = \{q^t_1, q^t_2, \cdots, q^t_I\}$, $Q^d = \{q^d_1, q^d_2, \cdots, q^d_J\}$, $A = \{a_1, a_2, \cdots, a_K\}$, $C^t = \{c^t_1, c^t_2, \cdots, c^t_L\}$ and $C^d = \{c^d_1, c^d_2, \cdots, c^d_M\}$, where $q^t_i / q^d_j / a_k / c^t_l / c^d_m$ is the i-th/j-th/k-th/l-th/m-th word of the question text $Q^t$, question diagram $Q^d$, candidate answer $A$, text context $C^t$ and diagram context $C^d$ ($C$ is the unified notation for $C^t$ and $C^d$).
The corresponding representations are denoted as $h^t_q$, $h^d_q$, $h_a$, $H^t_c$ and $H^d_c$, respectively. Note that we use the diagram context $C^d$ only for the diagram questions.

B Implementation Details
We initialized the word embeddings with 300d GloVe vectors pre-trained on the 840B Common Crawl corpus, while the word embeddings for out-of-vocabulary words were initialized randomly. We also randomly initialized the character embeddings with 16d vectors and extracted 32d character representations with a 1D convolutional network whose kernel size is 5. We used 200 hidden units for the Bi-LSTM of RNN_C, whose weights are shared between the question and the candidate answers; their maximum sequence length is set to 30. Likewise, the number of hidden units of RNN_S is the same as that of RNN_C, and its maximum sequence length is 7, which equals the maximum number of candidate answers. We employed a 200d one-layer GCN for all types of graphs, and the maximum number of nodes is 75 for the textual context graph, 35 for the diagrammatic context graph, and 25 for the diagrammatic question graph. We use tanh as the activation function of the GCN. Dropout was applied after all of the word embeddings with a keep rate of 0.5. The Adam optimizer with an initial learning rate of 0.001 was applied, and the learning rate was decayed by a factor of 0.9 after each epoch.
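For reference, these settings can be collected into a single configuration object; the values are taken from the paragraph above, while the structure and field names are our own sketch:

```python
from dataclasses import dataclass

@dataclass
class TQAConfig:
    # Embeddings
    word_dim: int = 300          # GloVe 840B vectors
    char_emb_dim: int = 16
    char_rep_dim: int = 32       # output of the 1D char-CNN
    char_kernel: int = 5
    # RNNs
    rnn_hidden: int = 200        # Bi-LSTM units, shared by RNN_C and RNN_S
    max_seq_len: int = 30        # question / answer length for RNN_C
    max_candidates: int = 7      # sequence length of RNN_S
    # f-GCN
    gcn_dim: int = 200
    max_text_nodes: int = 75
    max_diagram_ctx_nodes: int = 35
    max_diagram_q_nodes: int = 25
    # Optimization
    dropout_keep: float = 0.5
    lr: float = 1e-3
    lr_decay: float = 0.9        # applied per epoch
```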

C Additional explanation for SSOC
In Figure 8, we illustrate examples of the detailed steps of SSOC. In the first step, we select one candidate answer from the question-candidate answer pairs (2). Next, we choose a number j, the number of candidate contexts for the question-candidate answer pair, in the range 2 to 7, like the original dataset (3). If j is higher than the number of contexts in the lesson, we set j to the number of contexts. Then, we extract the top-j paragraphs by TF-IDF score and set them as the candidate contexts Ω_c (3). We build each context graph in the same way as the original method and get embeddings with the selected question-candidate answer pair. Finally, we designate the final candidate corresponding to the top-1 paragraph as the correct answer and the others as wrong answers (4).

D Results of additional ablation study
We perform additional ablation studies on variants of our model. Results for both our full model without visual context and our full model with f-GCN2 are shown in Table 4. Both studies demonstrate a similar tendency: performance is degraded when each module is ablated. We conclude that our two novel modules contribute substantially to the performance of our model on the TQA problem.

E Process of Building Textual Context Graph
The procedure for converting the textual context into a graph structure is shown in Process 1. After constructing the dependency trees, we set the nodes included in the question or the candidate answer as anchor nodes and build the final context graph C by removing the nodes which are more than two levels of depth away from the anchor nodes. We also construct the adjacency matrix A from the remaining nodes and edges.
Process 1 Build textual context and adjacency matrices C, A
Input: a paragraph, a set of anchor nodes V
1: Construct a dependency tree on each sentence of the given paragraph
2: Split the trees into units, each of which represents two nodes and one edge u = {v1, v2}
3: U ← the set of units
4: E ← an empty set of edges
5: for depth ← 1 to 2 do
6:   for each unit u = {v1, v2} ∈ U do
7:     if v1 ∈ V or v2 ∈ V then
8:       E ← E ∪ {u}
9:       U ← U \ {u}
10:    end if
11:  end for
12:  V ← the set of all nodes in E
13: end for
Output: context matrix C from V with embedding matrices, adjacency matrix A from E

F Additional Qualitative Results
On the next pages, we present additional qualitative results for the three question types. We explicitly demonstrate all intermediate results as subgraphs of the visual context and question diagram. Note that we add a legend to each figure indicating which types of data are used, to avoid confusion. In Figure 9 and Figure 10, we illustrate intermediate and final results on text-type questions with visual context. Next, we demonstrate intermediate and final results on diagram-type questions without visual context in Figure 11 and Figure 12. Finally, we present intermediate and final results for the most complicated type, diagram-type questions with visual context, in Figure 13 and Figure 14. We hope these figures sufficiently convey the logical connectivity involved in solving the problems and how well our model works on the TQA task.