Textual Entailment with Structured Attentions and Composition

Kai Zhao, Liang Huang, Mingbo Ma


Abstract
Deep learning techniques are increasingly popular in the textual entailment task, overcoming the fragility of traditional discrete models with hard alignments and logics. In particular, the recently proposed attention models (Rocktäschel et al., 2015; Wang and Jiang, 2015) achieve state-of-the-art accuracy by computing soft word alignments between the premise and hypothesis sentences. However, there remains a major limitation: this line of work completely ignores syntax and recursion, which have proved helpful in many traditional approaches. We show that it is beneficial to extend the attention model to tree nodes between premise and hypothesis. More importantly, this subtree-level attention reveals information about the entailment relation. We study the recursive composition of this subtree-level entailment relation, which can be viewed as a soft version of the Natural Logic framework (MacCartney and Manning, 2009). Experiments show that our structured attention and entailment composition model can correctly identify and infer entailment relations from the bottom up, and brings significant improvements in accuracy.
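As a rough illustration of the word-level soft attention the abstract builds on (a generic dot-product alignment sketch, not the paper's actual model or the tree-node extension), the soft alignment between premise and hypothesis word vectors can be computed as:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def soft_alignment(premise, hypothesis):
    """For each hypothesis word, a probability distribution over
    premise words (dot-product attention). Shapes: premise (n, d),
    hypothesis (m, d); returns an (m, n) alignment matrix."""
    scores = hypothesis @ premise.T      # (m, n) similarity scores
    return softmax(scores, axis=1)       # each row sums to 1

# toy example: 3 premise words, 2 hypothesis words, embedding dim 4
rng = np.random.default_rng(0)
P = rng.normal(size=(3, 4))
H = rng.normal(size=(2, 4))
A = soft_alignment(P, H)
assert A.shape == (2, 3)
assert np.allclose(A.sum(axis=1), 1.0)
```

The paper's contribution is to lift this word-level alignment to nodes of the parse trees and to compose the resulting subtree-level entailment relations recursively; the sketch above only shows the flat starting point.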
Anthology ID:
C16-1212
Volume:
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers
Month:
December
Year:
2016
Address:
Osaka, Japan
Editors:
Yuji Matsumoto, Rashmi Prasad
Venue:
COLING
Publisher:
The COLING 2016 Organizing Committee
Note:
Pages:
2248–2258
URL:
https://aclanthology.org/C16-1212
Cite (ACL):
Kai Zhao, Liang Huang, and Mingbo Ma. 2016. Textual Entailment with Structured Attentions and Composition. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2248–2258, Osaka, Japan. The COLING 2016 Organizing Committee.
Cite (Informal):
Textual Entailment with Structured Attentions and Composition (Zhao et al., COLING 2016)
PDF:
https://aclanthology.org/C16-1212.pdf
Code
kaayy/structured-attention
Data
SNLI