Language-Conditioned Feature Pyramids for Visual Selection Tasks

Taichi Iki, Akiko Aizawa


Abstract
Referring expression comprehension, which is the ability to ground a linguistic expression to an object in an image, plays an important role in creating common ground. Many models that fuse visual and linguistic features have been proposed. However, few models consider fusing linguistic features with multiple visual features that have different receptive-field sizes, even though the appropriate receptive-field size intuitively varies depending on the expression. In this paper, we introduce a neural network architecture that uses linguistic features to modulate visual features at multiple receptive-field sizes. We evaluate our architecture on tasks related to referring expression comprehension in two visual dialogue games. The results show the advantages and broad applicability of our architecture. Source code is available at https://github.com/Alab-NII/lcfp.
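As an illustration of the architecture the abstract describes, below is a minimal sketch in PyTorch, assuming a FiLM-style conditioning scheme: a sentence embedding of the referring expression produces per-channel scale and shift parameters for each level of a visual feature pyramid, so features at every receptive-field size are modulated by language. The class name, dimensions, and the FiLM-style mechanism are illustrative assumptions, not the paper's exact implementation.

# Hypothetical sketch, not the authors' code: language-conditioned
# modulation of a multi-scale visual feature pyramid.
import torch
import torch.nn as nn

class LanguageConditionedPyramid(nn.Module):
    def __init__(self, lang_dim=256, vis_dims=(64, 128, 256)):
        super().__init__()
        # One (scale, shift) generator per pyramid level; each level
        # corresponds to a different receptive-field size.
        self.film = nn.ModuleList(
            nn.Linear(lang_dim, 2 * d) for d in vis_dims
        )

    def forward(self, lang_emb, pyramid):
        # lang_emb: (B, lang_dim) embedding of the referring expression.
        # pyramid: list of feature maps (B, C_i, H_i, W_i), fine to coarse.
        out = []
        for proj, feat in zip(self.film, pyramid):
            gamma, beta = proj(lang_emb).chunk(2, dim=-1)
            gamma = gamma[:, :, None, None]  # broadcast over H and W
            beta = beta[:, :, None, None]
            out.append(gamma * feat + beta)  # feature-wise modulation
        return out

Because each pyramid level is modulated independently, a downstream selection head can weight whichever receptive-field size best matches a given expression.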
Anthology ID:
2020.findings-emnlp.420
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2020
Month:
November
Year:
2020
Address:
Online
Editors:
Trevor Cohn, Yulan He, Yang Liu
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
4687–4697
URL:
https://aclanthology.org/2020.findings-emnlp.420
DOI:
10.18653/v1/2020.findings-emnlp.420
Cite (ACL):
Taichi Iki and Akiko Aizawa. 2020. Language-Conditioned Feature Pyramids for Visual Selection Tasks. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4687–4697, Online. Association for Computational Linguistics.
Cite (Informal):
Language-Conditioned Feature Pyramids for Visual Selection Tasks (Iki & Aizawa, Findings 2020)
PDF:
https://aclanthology.org/2020.findings-emnlp.420.pdf
Video:
https://slideslive.com/38940091
Code:
Alab-NII/lcfp
Data:
GuessWhat?!