Explain Yourself! Leveraging Language Models for Commonsense Reasoning

Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, Richard Socher


Abstract
Deep learning models perform poorly on tasks that require commonsense reasoning, which often necessitates some form of world-knowledge or reasoning over information not immediately present in the input. We collect human explanations for commonsense reasoning in the form of natural language sequences and highlighted annotations in a new dataset called Common Sense Explanations (CoS-E). We use CoS-E to train language models to automatically generate explanations that can be used during training and inference in a novel Commonsense Auto-Generated Explanation (CAGE) framework. CAGE improves the state-of-the-art by 10% on the challenging CommonsenseQA task. We further study commonsense reasoning in DNNs using both human and auto-generated explanations including transfer to out-of-domain tasks. Empirical results indicate that we can effectively leverage language models for commonsense reasoning.
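
To make the two-stage CAGE setup concrete, here is a minimal sketch, assuming an off-the-shelf GPT-2 model from the Hugging Face transformers library stands in for the explanation generator fine-tuned on CoS-E; the prompt format, model choice, and example question are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of the explain-then-predict CAGE pipeline described
# in the abstract; not the authors' released code. A language model
# (here: plain GPT-2, standing in for one fine-tuned on CoS-E) generates
# a natural-language explanation for a CommonsenseQA-style question; that
# explanation would then be fed to a downstream classifier alongside the
# question at both training and inference time.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2")

question = "Where would you expect to find a jellyfish?"  # illustrative example
choices = ["pet shop", "ocean", "freezer"]

# The serialization below (question, choices, then a cue phrase for the
# explanation) is an assumption about the generator's conditioning format.
prompt = (
    f"{question} The choices are {', '.join(choices)}. "
    "My commonsense tells me that"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = lm.generate(
    **inputs,
    max_new_tokens=25,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
# Keep only the newly generated tokens: the auto-generated explanation.
explanation = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(explanation)

# A classifier (e.g., a BERT-based CommonsenseQA model) would then score
# each answer choice given "question + explanation" as its input.
```

Generating the explanation first mirrors the abstract's motivation: it surfaces world knowledge not immediately present in the input before the classifier commits to an answer.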
Anthology ID: P19-1487
Volume: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month: July
Year: 2019
Address: Florence, Italy
Editors: Anna Korhonen, David Traum, Lluís Màrquez
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 4932–4942
URL: https://aclanthology.org/P19-1487
DOI: 10.18653/v1/P19-1487
Cite (ACL): Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain Yourself! Leveraging Language Models for Commonsense Reasoning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4932–4942, Florence, Italy. Association for Computational Linguistics.
Cite (Informal): Explain Yourself! Leveraging Language Models for Commonsense Reasoning (Rajani et al., ACL 2019)
PDF: https://aclanthology.org/P19-1487.pdf
Video: https://aclanthology.org/P19-1487.mp4
Data: CoS-E, CommonsenseQA, SNLI, SWAG, e-SNLI