Joint Modeling for Query Expansion and Information Extraction with Reinforcement Learning

Motoki Taniguchi, Yasuhide Miura, Tomoko Ohkuma


Abstract
Information extraction about an event can be improved by incorporating external evidence. In this study, we propose a joint model for pseudo-relevance feedback based query expansion and information extraction with reinforcement learning. Our model generates an event-specific query to effectively retrieve documents relevant to the event. We demonstrate that our model performs comparably to or better than the previous model on two publicly available datasets. Furthermore, we analyze how the retrieval effectiveness of our model influences extraction performance.
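The retrieval step the abstract builds on, pseudo-relevance feedback (PRF) query expansion, can be illustrated with a minimal sketch. The corpus, scoring function, and parameter names below are illustrative assumptions rather than the authors' implementation, and the joint reinforcement-learning training described in the paper is not shown.

```python
# Minimal sketch of pseudo-relevance feedback (PRF) query expansion.
# All names and the toy corpus are hypothetical; the paper's model learns
# the expansion policy with reinforcement learning instead of this heuristic.
from collections import Counter

def retrieve(query_terms, corpus, k=3):
    """Rank documents by simple term overlap with the query and return the top k."""
    scored = []
    for doc_id, text in corpus.items():
        tokens = text.lower().split()
        score = sum(tokens.count(t) for t in query_terms)
        scored.append((score, doc_id))
    scored.sort(reverse=True)
    return [doc_id for score, doc_id in scored[:k] if score > 0]

def expand_query(query_terms, corpus, k=3, n_expansion_terms=2):
    """PRF: assume the top-k retrieved documents are relevant and
    add their most frequent non-query terms to the query."""
    feedback_docs = retrieve(query_terms, corpus, k)
    counts = Counter()
    for doc_id in feedback_docs:
        for token in corpus[doc_id].lower().split():
            if token not in query_terms:
                counts[token] += 1
    expansion = [term for term, _ in counts.most_common(n_expansion_terms)]
    return list(query_terms) + expansion

corpus = {
    "d1": "shooting incident reported downtown two victims injured",
    "d2": "police investigate shooting suspect fled the scene",
    "d3": "city council debates new park budget",
}
print(expand_query(["shooting"], corpus))
# e.g. ['shooting', 'police', 'investigate'] -- an expanded, event-specific query
```

In the paper's setting, the expanded query would feed documents back into the information extractor, and extraction quality would serve as the reward signal for learning the expansion policy.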
Anthology ID:
W18-5506
Volume:
Proceedings of the First Workshop on Fact Extraction and VERification (FEVER)
Month:
November
Year:
2018
Address:
Brussels, Belgium
Editors:
James Thorne, Andreas Vlachos, Oana Cocarascu, Christos Christodoulopoulos, Arpit Mittal
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
34–39
URL:
https://aclanthology.org/W18-5506
DOI:
10.18653/v1/W18-5506
Cite (ACL):
Motoki Taniguchi, Yasuhide Miura, and Tomoko Ohkuma. 2018. Joint Modeling for Query Expansion and Information Extraction with Reinforcement Learning. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 34–39, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Joint Modeling for Query Expansion and Information Extraction with Reinforcement Learning (Taniguchi et al., EMNLP 2018)
PDF:
https://aclanthology.org/W18-5506.pdf