Ordinal Common-sense Inference

Sheng Zhang, Rachel Rudinger, Kevin Duh, Benjamin Van Durme


Abstract
Humans have the capacity to draw common-sense inferences from natural language: various things that are likely but not certain to hold based on established discourse, and are rarely stated explicitly. We propose an evaluation of automated common-sense inference based on an extension of recognizing textual entailment: predicting ordinal human responses on the subjective likelihood of an inference holding in a given context. We describe a framework for extracting common-sense knowledge from corpora, which is then used to construct a dataset for this ordinal entailment task. We train a neural sequence-to-sequence model on this dataset, which we use to score and generate possible inferences. Further, we annotate subsets of previously established datasets via our ordinal annotation protocol in order to analyze how those datasets differ from the one we construct.
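The ordinal entailment task asks annotators (and models) to rate the likelihood of an inference on a discrete ordinal scale rather than with binary entailment labels. A minimal sketch of one common way to use such a scale, assuming a five-point labeling and illustrative thresholds for binning a model's scalar plausibility score (both the labels and the thresholds here are assumptions for demonstration, not the paper's exact specification):

```python
# Hypothetical sketch: bin a scalar plausibility score in [0, 1]
# into a five-point ordinal likelihood scale.
# Labels and thresholds below are illustrative assumptions.
ORDINAL_LABELS = [
    "impossible",
    "technically possible",
    "plausible",
    "likely",
    "very likely",
]

def score_to_ordinal(score: float) -> str:
    """Map a score in [0, 1] to one of five ordinal labels."""
    thresholds = [0.2, 0.4, 0.6, 0.8]  # illustrative cut points
    for i, t in enumerate(thresholds):
        if score < t:
            return ORDINAL_LABELS[i]
    return ORDINAL_LABELS[-1]

print(score_to_ordinal(0.95))  # -> very likely
print(score_to_ordinal(0.10))  # -> impossible
```

Evaluating against ordinal human responses then reduces to comparing predicted and gold labels on this shared scale, e.g. with rank-correlation or ordinal-regression losses.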
Anthology ID:
Q17-1027
Volume:
Transactions of the Association for Computational Linguistics, Volume 5
Year:
2017
Address:
Cambridge, MA
Editors:
Lillian Lee, Mark Johnson, Kristina Toutanova
Venue:
TACL
Publisher:
MIT Press
Pages:
379–395
URL:
https://aclanthology.org/Q17-1027
DOI:
10.1162/tacl_a_00068
Bibkey:
Cite (ACL):
Sheng Zhang, Rachel Rudinger, Kevin Duh, and Benjamin Van Durme. 2017. Ordinal Common-sense Inference. Transactions of the Association for Computational Linguistics, 5:379–395.
Cite (Informal):
Ordinal Common-sense Inference (Zhang et al., TACL 2017)
PDF:
https://aclanthology.org/Q17-1027.pdf
Data
COPA, ROCStories, SICK, SNLI