Do Transformers Dream of Inference, or Can Pretrained Generative Models Learn Implicit Inferential Rules?

Zhengzhong Liang, Mihai Surdeanu


Abstract
Large pretrained language models (LMs) have been used successfully for multi-hop question answering. However, most of these approaches are not interpretable, as they do not explicitly generate the inference hops needed to explain a candidate answer. In this work, we investigate the capability of a state-of-the-art transformer LM to generate explicit inference hops, i.e., to infer a new statement necessary to answer a question given some premise input statements. Our analysis shows that such LMs can generate new statements for some simple inference types, but performance remains poor for complex, real-world inference types such as those that require monotonicity, composition, and commonsense knowledge.
Anthology ID:
2020.insights-1.12
Volume:
Proceedings of the First Workshop on Insights from Negative Results in NLP
Month:
November
Year:
2020
Address:
Online
Editors:
Anna Rogers, João Sedoc, Anna Rumshisky
Venue:
insights
Publisher:
Association for Computational Linguistics
Pages:
76–81
URL:
https://aclanthology.org/2020.insights-1.12
DOI:
10.18653/v1/2020.insights-1.12
Cite (ACL):
Zhengzhong Liang and Mihai Surdeanu. 2020. Do Transformers Dream of Inference, or Can Pretrained Generative Models Learn Implicit Inferential Rules?. In Proceedings of the First Workshop on Insights from Negative Results in NLP, pages 76–81, Online. Association for Computational Linguistics.
Cite (Informal):
Do Transformers Dream of Inference, or Can Pretrained Generative Models Learn Implicit Inferential Rules? (Liang & Surdeanu, insights 2020)
PDF:
https://aclanthology.org/2020.insights-1.12.pdf
Optional supplementary material:
2020.insights-1.12.OptionalSupplementaryMaterial.zip
Video:
https://slideslive.com/38940799
Data:
QASC