On Generating Characteristic-rich Question Sets for QA Evaluation

Yu Su1, Huan Sun2, Brian Sadler3, Mudhakar Srivatsa4, Izzeddin Gur1, Zenghui Yan1, Xifeng Yan1
1University of California Santa Barbara, 2The Ohio State University, 3U.S. Army Research Lab, 4IBM Research
Abstract

We present a semi-automated framework for constructing factoid question answering (QA) datasets in which an array of question characteristics is formalized, including structure complexity, function, commonness, answer cardinality, and paraphrasing. Instead of collecting questions and manually characterizing them, we employ a reverse procedure: we first generate graph-structured logical forms from a knowledge base and then convert them into questions. Our work is the first to generate questions with explicitly specified characteristics for QA evaluation. We construct a new QA dataset with over 5,000 logical form-question pairs, each associated with answers from the knowledge base, and show that datasets constructed in this way enable fine-grained analyses of QA systems. The dataset is available at https://github.com/ysu1989/GraphQuestions.