Recurrent Neural Network-Based Sentence Encoder with Gated Attention for Natural Language Inference

Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, Diana Inkpen


Abstract
The RepEval 2017 Shared Task aims to evaluate natural language understanding models for sentence representation, in which a sentence is represented as a fixed-length vector with neural networks and the quality of the representation is tested with a natural language inference task. This paper describes our system (alpha), which is ranked among the top systems in the Shared Task both on the in-domain test set (obtaining a 74.9% accuracy) and on the cross-domain test set (also attaining a 74.9% accuracy), demonstrating that the model generalizes well to cross-domain data. Our model is equipped with intra-sentence gated-attention composition, which helps it achieve better performance. In addition to submitting our model to the Shared Task, we have also tested it on the Stanford Natural Language Inference (SNLI) dataset. We obtain an accuracy of 85.5%, which is the best reported result on SNLI when cross-sentence attention is not allowed, the same condition enforced in RepEval 2017.
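
To give a concrete picture of the intra-sentence gated-attention composition mentioned in the abstract, here is a minimal NumPy sketch. It is not the authors' code: it assumes, following the paper's description, that each word's attention weight is derived from the L2 norm of the LSTM input gate at that timestep, and that the weighted sum of hidden states forms the sentence vector. All function and variable names are illustrative, and the direct normalization of gate norms is one plausible choice.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_attention_pool(X, H, W_i, U_i, b_i):
    # X: (T, d_in) word inputs; H: (T, d_h) LSTM hidden states.
    # (W_i, U_i, b_i): the LSTM's input-gate parameters.
    T = X.shape[0]
    scores = np.zeros(T)
    h_prev = np.zeros(H.shape[1])
    for t in range(T):
        # Recompute the input gate i_t = sigmoid(W_i x_t + U_i h_{t-1} + b_i).
        i_t = sigmoid(W_i @ X[t] + U_i @ h_prev + b_i)
        # Use the gate's L2 norm as the word's importance score:
        # the gate controls how much of x_t enters the memory cell.
        scores[t] = np.linalg.norm(i_t)
        h_prev = H[t]
    weights = scores / scores.sum()   # normalize scores to sum to 1
    return weights @ H                # weighted sum of hidden states

# Toy usage with random tensors (a trained BiLSTM would supply real ones).
rng = np.random.default_rng(0)
T, d_in, d_h = 5, 8, 6
X = rng.standard_normal((T, d_in))
H = rng.standard_normal((T, d_h))
W_i = rng.standard_normal((d_h, d_in))
U_i = rng.standard_normal((d_h, d_h))
b_i = np.zeros(d_h)
print(gated_attention_pool(X, H, W_i, U_i, b_i).shape)  # (6,)

In the full model this gated-attention pooling is applied over a trained BiLSTM's states and combined with other pooling strategies before classification; only the gated-attention component is sketched here.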
Anthology ID:
W17-5307
Volume:
Proceedings of the 2nd Workshop on Evaluating Vector Space Representations for NLP
Month:
September
Year:
2017
Address:
Copenhagen, Denmark
Editors:
Samuel Bowman, Yoav Goldberg, Felix Hill, Angeliki Lazaridou, Omer Levy, Roi Reichart, Anders Søgaard
Venue:
RepEval
Publisher:
Association for Computational Linguistics
Pages:
36–40
URL:
https://aclanthology.org/W17-5307
DOI:
10.18653/v1/W17-5307
Cite (ACL):
Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Recurrent Neural Network-Based Sentence Encoder with Gated Attention for Natural Language Inference. In Proceedings of the 2nd Workshop on Evaluating Vector Space Representations for NLP, pages 36–40, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal):
Recurrent Neural Network-Based Sentence Encoder with Gated Attention for Natural Language Inference (Chen et al., RepEval 2017)
PDF:
https://aclanthology.org/W17-5307.pdf
Data:
MultiNLI, SNLI