Learning to Search in Long Documents Using Document Structure

Mor Geva, Jonathan Berant


Abstract
Reading comprehension models are based on recurrent neural networks that sequentially process the document tokens. As interest turns to answering more complex questions over longer documents, sequential reading of large portions of text becomes a substantial bottleneck. Inspired by how humans use document structure, we propose a novel framework for reading comprehension. We represent documents as trees, and model an agent that learns to interleave quick navigation through the document tree with more expensive answer extraction. To encourage exploration of the document tree, we propose a new algorithm, based on Deep Q-Network (DQN), which strategically samples tree nodes at training time. Empirically, we find that our algorithm improves question answering performance compared to DQN and a strong information-retrieval (IR) baseline, and that ensembling our model with the IR baseline yields further gains in performance.
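The navigation loop described in the abstract — an agent walking a document tree and choosing at each node whether to descend or stop and run answer extraction — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the `Node` class, the `q_value` scoring function, and the epsilon-greedy policy are all simplifying assumptions standing in for the paper's learned DQN.

```python
import random

class Node:
    """A document-tree node (e.g., document, section, paragraph)."""
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

def navigate(root, q_value, epsilon=0.1, max_steps=10, rng=random):
    """Walk the document tree. At each node the agent either descends to
    the child with the highest Q-value or stops (the "STOP" action),
    at which point expensive answer extraction would be run on the node.
    With probability epsilon a random action is taken instead (exploration).
    """
    node, path = root, [root.name]
    for _ in range(max_steps):
        actions = list(node.children) + ["STOP"]
        if rng.random() < epsilon:                      # explore
            choice = rng.choice(actions)
        else:                                           # act greedily
            choice = max(actions, key=lambda a: q_value(node, a))
        if choice == "STOP" or not node.children:
            break
        node = choice
        path.append(node.name)
    return node, path
```

With a hand-crafted `q_value` that favors one branch and `epsilon=0`, the agent deterministically follows that branch to a leaf; during training, higher epsilon and the paper's node-sampling scheme would drive exploration of other subtrees.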
Anthology ID:
C18-1014
Volume:
Proceedings of the 27th International Conference on Computational Linguistics
Month:
August
Year:
2018
Address:
Santa Fe, New Mexico, USA
Editors:
Emily M. Bender, Leon Derczynski, Pierre Isabelle
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
161–176
URL:
https://aclanthology.org/C18-1014
Cite (ACL):
Mor Geva and Jonathan Berant. 2018. Learning to Search in Long Documents Using Document Structure. In Proceedings of the 27th International Conference on Computational Linguistics, pages 161–176, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Cite (Informal):
Learning to Search in Long Documents Using Document Structure (Geva & Berant, COLING 2018)
PDF:
https://aclanthology.org/C18-1014.pdf
Code:
mega002/DocQN