Tackling Adversarial Examples in QA via Answer Sentence Selection

Yuanhang Ren, Ye Du, Di Wang


Abstract
Question answering systems deteriorate dramatically in the presence of adversarial sentences in articles. According to Jia and Liang (2017), the single BiDAF system (Seo et al., 2016) achieves an F1 score of only 4.8 on the ADDANY adversarial dataset. In this paper, we present a method to tackle this problem via answer sentence selection. Given a paragraph of an article and a corresponding query, instead of directly feeding the whole paragraph to the single BiDAF system, we first select the sentence that most likely contains the answer to the query, using a deep neural network based on TreeLSTM (Tai et al., 2015). Experiments on the ADDANY adversarial dataset validate the effectiveness of our method, improving the F1 score to 52.3.
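The select-then-read pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the paper scores sentences with a TreeLSTM-based neural network, whereas the `lexical_overlap` scorer below is a hypothetical stand-in used only to show the pipeline structure (select one sentence, then pass it to the reader instead of the full, possibly adversarial, paragraph).

```python
def lexical_overlap(sentence, query):
    # Stand-in scorer (NOT the paper's TreeLSTM model): fraction of
    # query words that also appear in the sentence.
    def words(text):
        cleaned = ''.join(c for c in text.lower()
                          if c.isalnum() or c.isspace())
        return set(cleaned.split())
    q_words = words(query)
    return len(words(sentence) & q_words) / max(len(q_words), 1)

def select_answer_sentence(paragraph, query, scorer=lexical_overlap):
    # Split the paragraph into sentences and keep the highest-scoring one;
    # only this sentence would then be fed to the reading-comprehension
    # model (e.g. BiDAF), shielding it from adversarial distractors.
    sentences = [s.strip() for s in paragraph.split('.') if s.strip()]
    return max(sentences, key=lambda s: scorer(s, query))

paragraph = ("Tesla moved to New York in 1884. "
             "An adversarial distractor mentions unrelated facts about Chicago.")
query = "When did Tesla move to New York?"
print(select_answer_sentence(paragraph, query))
# → Tesla moved to New York in 1884
```

Under this scheme, an adversarially inserted sentence can only hurt the reader if it also wins the sentence-selection stage, which is what the paper's neural selector is trained to prevent.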
Anthology ID:
W18-2604
Volume:
Proceedings of the Workshop on Machine Reading for Question Answering
Month:
July
Year:
2018
Address:
Melbourne, Australia
Editors:
Eunsol Choi, Minjoon Seo, Danqi Chen, Robin Jia, Jonathan Berant
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
31–36
URL:
https://aclanthology.org/W18-2604
DOI:
10.18653/v1/W18-2604
Cite (ACL):
Yuanhang Ren, Ye Du, and Di Wang. 2018. Tackling Adversarial Examples in QA via Answer Sentence Selection. In Proceedings of the Workshop on Machine Reading for Question Answering, pages 31–36, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal):
Tackling Adversarial Examples in QA via Answer Sentence Selection (Ren et al., ACL 2018)
PDF:
https://aclanthology.org/W18-2604.pdf
Data
SQuAD