Faithful Multimodal Explanation for Visual Question Answering

Jialin Wu, Raymond Mooney


Abstract
AI systems’ ability to explain their reasoning is critical to their utility and trustworthiness. Deep neural networks have enabled significant progress on many challenging problems such as visual question answering (VQA). However, most of them are opaque black boxes with limited explanatory capability. This paper presents a novel approach to developing a high-performing VQA system that can elucidate its answers with integrated textual and visual explanations that faithfully reflect important aspects of its underlying reasoning while capturing the style of comprehensible human explanations. Extensive experimental evaluation demonstrates the advantages of this approach compared to competing methods using both automated metrics and human evaluation.
Anthology ID:
W19-4812
Volume:
Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
Month:
August
Year:
2019
Address:
Florence, Italy
Editors:
Tal Linzen, Grzegorz Chrupała, Yonatan Belinkov, Dieuwke Hupkes
Venue:
BlackboxNLP
Publisher:
Association for Computational Linguistics
Pages:
103–112
URL:
https://aclanthology.org/W19-4812
DOI:
10.18653/v1/W19-4812
Cite (ACL):
Jialin Wu and Raymond Mooney. 2019. Faithful Multimodal Explanation for Visual Question Answering. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 103–112, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Faithful Multimodal Explanation for Visual Question Answering (Wu & Mooney, BlackboxNLP 2019)
PDF:
https://aclanthology.org/W19-4812.pdf
Data:
GQA-REX
Terms:
Visual Question Answering