On Making Reading Comprehension More Comprehensive

Matt Gardner, Jonathan Berant, Hannaneh Hajishirzi, Alon Talmor, Sewon Min


Abstract
Machine reading comprehension, the task of evaluating a machine’s ability to comprehend a passage of text, has seen a surge in popularity in recent years. There are many datasets targeted at reading comprehension, and many systems that perform as well as humans on some of these datasets. Despite all of this interest, there is no work that systematically defines what reading comprehension is. In this work, we justify a question answering approach to reading comprehension and describe the various kinds of questions one might use to more fully test a system’s comprehension of a passage, moving beyond questions that only probe local predicate-argument structures. The main pitfall of this approach is that questions can easily have surface cues or other biases that allow a model to shortcut the intended reasoning process. We discuss ways proposed in the current literature to mitigate these shortcuts, and we conclude with recommendations for future dataset collection efforts.
Anthology ID: D19-5815
Volume: Proceedings of the 2nd Workshop on Machine Reading for Question Answering
Month: November
Year: 2019
Address: Hong Kong, China
Editors: Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eunsol Choi, Danqi Chen
Venue: WS
Publisher: Association for Computational Linguistics
Pages: 105–112
URL: https://aclanthology.org/D19-5815
DOI: 10.18653/v1/D19-5815
Cite (ACL): Matt Gardner, Jonathan Berant, Hannaneh Hajishirzi, Alon Talmor, and Sewon Min. 2019. On Making Reading Comprehension More Comprehensive. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pages 105–112, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal): On Making Reading Comprehension More Comprehensive (Gardner et al., 2019)
PDF: https://aclanthology.org/D19-5815.pdf