AdvEntuRe: Adversarial Training for Textual Entailment with Knowledge-Guided Examples

Dongyeop Kang, Tushar Khot, Ashish Sabharwal, Eduard Hovy


Abstract
We consider the problem of learning textual entailment models with limited supervision (5K-10K training examples), and present two complementary approaches for it. First, we propose knowledge-guided adversarial example generators for incorporating large lexical resources in entailment models via only a handful of rule templates. Second, to make the entailment model—a discriminator—more robust, we propose the first GAN-style approach for training it using a natural language example generator that iteratively adjusts to the discriminator’s weaknesses. We demonstrate effectiveness using two entailment datasets, where the proposed methods increase accuracy by 4.7% on SciTail and by 2.8% on a 1% sub-sample of SNLI. Notably, even a single hand-written rule, negate, improves the accuracy of negation examples in SNLI by 6.1%.
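The abstract highlights a single hand-written rule, negate, among the knowledge-guided example generators. The following is a minimal, hypothetical Python sketch of what such a rule-based adversarial example generator could look like; the function names, the negation map, and the label-flipping assumption are illustrative only and are not taken from the paper or its released code (dykang/adventure).

# Illustrative sketch (not the paper's implementation): a hypothetical
# "negate" rule that turns an entailment example into a contradiction example.
NEGATION_MAP = {
    "is": "is not",
    "are": "are not",
    "was": "was not",
    "were": "were not",
}

def negate(sentence: str) -> str:
    """Apply a simple hand-written negation rule to a hypothesis sentence."""
    tokens = sentence.split()
    return " ".join(NEGATION_MAP.get(tok.lower(), tok) for tok in tokens)

def generate_adversarial_example(premise: str, hypothesis: str, label: str):
    """Create a new training triple from an existing one.

    Assumption for illustration: if the original pair is labeled 'entailment',
    negating the hypothesis flips the label to 'contradiction'. This is a
    simplification of the rule templates described in the paper.
    """
    if label != "entailment":
        return None
    new_hypothesis = negate(hypothesis)
    if new_hypothesis == hypothesis:  # the rule did not fire
        return None
    return premise, new_hypothesis, "contradiction"

if __name__ == "__main__":
    print(generate_adversarial_example(
        "A man is playing a guitar on stage.",
        "A man is playing an instrument.",
        "entailment",
    ))

In the paper's setting, examples produced by such rule templates (and by a learned sequence-to-sequence generator) augment the limited training data for the discriminator, i.e., the entailment model.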
Anthology ID:
P18-1225
Volume:
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2018
Address:
Melbourne, Australia
Editors:
Iryna Gurevych, Yusuke Miyao
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
2418–2428
URL:
https://aclanthology.org/P18-1225
DOI:
10.18653/v1/P18-1225
Cite (ACL):
Dongyeop Kang, Tushar Khot, Ashish Sabharwal, and Eduard Hovy. 2018. AdvEntuRe: Adversarial Training for Textual Entailment with Knowledge-Guided Examples. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2418–2428, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal):
AdvEntuRe: Adversarial Training for Textual Entailment with Knowledge-Guided Examples (Kang et al., ACL 2018)
PDF:
https://aclanthology.org/P18-1225.pdf
Note:
 P18-1225.Notes.pdf
Poster:
 P18-1225.Poster.pdf
Code:
 dykang/adventure
Data:
 SICK, SNLI, SQuAD