Quantifiers in a Multimodal World: Hallucinating Vision with Language and Sound

Alberto Testoni, Sandro Pezzelle, Raffaella Bernardi


Abstract
Inspired by the literature on multisensory integration, we develop a computational model to ground quantifiers in perception. The model learns to pick, out of nine quantifiers (‘few’, ‘many’, ‘all’, etc.), the one that is most likely to describe the percentage of animals in a visual-auditory input containing both animals and artifacts. We show that relying on concurrent sensory inputs increases model performance on the quantification task. Moreover, we evaluate the model in a setting in which only the auditory modality is given, while the visual one is ‘hallucinated’ either from the auditory input itself or from a linguistic caption describing the quantity of entities in the auditory input. In this way, the model exploits prior associations between modalities. We show that the model benefits from this prior knowledge and outperforms the auditory-only setting.
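To make the quantification task concrete, here is a minimal Python sketch (not the authors' model or code): it maps a ground-truth proportion of animals among all entities to one of nine quantifier labels. The full label set and the uniform binning are illustrative assumptions, since the abstract names only ‘few’, ‘many’, and ‘all’.

    # Illustrative sketch only: the paper frames quantification as choosing,
    # from nine quantifiers, the one that best describes the proportion of
    # target entities (animals) in a scene that also contains artifacts.
    # The quantifier set and thresholds below are assumptions for illustration.

    QUANTIFIERS = ['none', 'almost none', 'few', 'the smaller part', 'some',
                   'many', 'most', 'almost all', 'all']  # hypothetical set

    def pick_quantifier(n_animals: int, n_total: int) -> str:
        """Map the proportion of animals to one of nine quantifier labels."""
        proportion = n_animals / n_total
        # Evenly spaced bins over [0, 1]; the paper's actual mapping may differ.
        index = min(int(proportion * len(QUANTIFIERS)), len(QUANTIFIERS) - 1)
        return QUANTIFIERS[index]

    # Example: 3 animals among 10 entities falls into a low-proportion bin.
    print(pick_quantifier(3, 10))

In the paper itself, this decision is learned from visual-auditory input (or from hallucinated vision) rather than computed from ground-truth counts as in the toy example above.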
Anthology ID:
W19-2912
Volume:
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
Month:
June
Year:
2019
Address:
Minneapolis, Minnesota
Editors:
Emmanuele Chersoni, Cassandra Jacobs, Alessandro Lenci, Tal Linzen, Laurent Prévot, Enrico Santus
Venue:
CMCL
Publisher:
Association for Computational Linguistics
Pages:
105–116
URL:
https://aclanthology.org/W19-2912
DOI:
10.18653/v1/W19-2912
Cite (ACL):
Alberto Testoni, Sandro Pezzelle, and Raffaella Bernardi. 2019. Quantifiers in a Multimodal World: Hallucinating Vision with Language and Sound. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, pages 105–116, Minneapolis, Minnesota. Association for Computational Linguistics.
Cite (Informal):
Quantifiers in a Multimodal World: Hallucinating Vision with Language and Sound (Testoni et al., CMCL 2019)
PDF:
https://aclanthology.org/W19-2912.pdf
Data
AudioSet, Visual Question Answering