Neural Natural Language Inference Models Enhanced with External Knowledge

Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, Si Wei


Abstract
Modeling natural language inference is a very challenging task. With the availability of large annotated data, it has recently become feasible to train complex models such as neural-network-based inference models, which have been shown to achieve state-of-the-art performance. Although relatively large annotated data exist, can machines learn all the knowledge needed to perform natural language inference (NLI) from these data? If not, how can neural-network-based NLI models benefit from external knowledge, and how should NLI models be built to leverage it? In this paper, we enrich state-of-the-art neural natural language inference models with external knowledge. We demonstrate that the proposed models improve neural NLI models, achieving state-of-the-art performance on the SNLI and MultiNLI datasets.
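The abstract does not spell out the mechanism, but a common way to inject external lexical knowledge into this family of models is to bias the soft-alignment (co-attention) scores between premise and hypothesis words with word-pair relation features (e.g., WordNet synonym/antonym/hypernym indicators). The sketch below is illustrative only: the function name, the scalar weighting `lam`, and the feature vector `v` are assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def knowledge_enriched_attention(a, b, r, v, lam=1.0):
    """Co-attention between premise `a` (m x d) and hypothesis `b` (n x d),
    biased by external lexical-relation features `r` (m x n x k) for each
    word pair, e.g. WordNet synonym/antonym/hypernym indicators.
    `v` (k,) weights the relation features; `lam` scales their contribution.
    (Hypothetical sketch; not the paper's exact architecture.)"""
    e = a @ b.T + lam * (r @ v)   # (m, n) knowledge-enriched alignment scores
    alpha = softmax(e, axis=1)    # premise-to-hypothesis attention weights
    beta = softmax(e, axis=0)     # hypothesis-to-premise attention weights
    return alpha, beta
```

With this formulation, word pairs that the knowledge base marks as related receive higher alignment scores even when their embedding similarity alone is weak, which is the intuition behind knowledge-enriched alignment.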
Anthology ID:
P18-1224
Volume:
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2018
Address:
Melbourne, Australia
Editors:
Iryna Gurevych, Yusuke Miyao
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
2406–2417
URL:
https://aclanthology.org/P18-1224
DOI:
10.18653/v1/P18-1224
Cite (ACL):
Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, and Si Wei. 2018. Neural Natural Language Inference Models Enhanced with External Knowledge. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2406–2417, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal):
Neural Natural Language Inference Models Enhanced with External Knowledge (Chen et al., ACL 2018)
PDF:
https://aclanthology.org/P18-1224.pdf
Poster:
 P18-1224.Poster.pdf
Code
 lukecq1231/kim
Data
MultiNLI
SNLI