Human Attention in Visual Question Answering: Do Humans and Deep Networks Look at the Same Regions?

Abhishek Das1, Harsh Agrawal1, Larry Zitnick2, Devi Parikh3, Dhruv Batra1
1Virginia Tech, 2Facebook, 3Georgia Institute of Technology


Abstract

We conduct large-scale studies on 'human attention' in Visual Question Answering (VQA) to understand where humans choose to look to answer questions about images. We design and test multiple novel, game-inspired attention-annotation interfaces that require the subject to sharpen regions of a blurred image to answer a question. Thus, we introduce the VQA-HAT (Human ATtention) dataset. We evaluate attention maps generated by state-of-the-art VQA models against human attention both qualitatively (via visualizations) and quantitatively (via rank-order correlation). Overall, our experiments show that current VQA attention models do not seem to be looking at the same regions as humans.
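As an aside on the quantitative comparison mentioned above, the sketch below shows one way rank-order correlation between two attention maps could be computed. It is a minimal illustration, not the paper's evaluation code: the function name, the 14x14 grid size, and the use of SciPy's Spearman correlation on flattened maps are all assumptions for demonstration purposes.

```python
import numpy as np
from scipy.stats import spearmanr

def rank_correlation(human_map, model_map):
    """Spearman rank-order correlation between two attention maps.

    Assumes both maps are 2D arrays defined over the same spatial grid;
    in practice one map would be resized to match the other first.
    """
    return spearmanr(human_map.ravel(), model_map.ravel()).correlation

# Toy example with random 14x14 maps (illustrative only).
human = np.random.rand(14, 14)
model = np.random.rand(14, 14)
print(rank_correlation(human, model))
```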