Multimodal Differential Network for Visual Question Generation

Badri Narayana Patro, Sandeep Kumar, Vinod Kumar Kurmi, Vinay Namboodiri


Abstract
Generating natural questions from an image is a semantic task that requires using visual and language modalities to learn multimodal representations. Images can have multiple visual and language contexts that are relevant for generating questions, namely places, captions, and tags. In this paper, we propose the use of exemplars for obtaining the relevant context. We obtain this by using a Multimodal Differential Network to produce natural and engaging questions. The generated questions show a remarkable similarity to the natural questions, as validated by a human study. Further, we observe that the proposed approach substantially improves over the state of the art on quantitative metrics (BLEU, METEOR, ROUGE, and CIDEr).
Anthology ID:
D18-1434
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Editors:
Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
4002–4012
URL:
https://aclanthology.org/D18-1434
DOI:
10.18653/v1/D18-1434
Cite (ACL):
Badri Narayana Patro, Sandeep Kumar, Vinod Kumar Kurmi, and Vinay Namboodiri. 2018. Multimodal Differential Network for Visual Question Generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4002–4012, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Multimodal Differential Network for Visual Question Generation (Patro et al., EMNLP 2018)
PDF:
https://aclanthology.org/D18-1434.pdf
Attachment:
D18-1434.Attachment.zip
Data
MS COCO, VQG, Visual Question Answering