Neural Models of the Psychosemantics of ‘Most’

Lewis O’Sullivan, Shane Steinert-Threlkeld


Abstract
How are the meanings of linguistic expressions related to their use in concrete cognitive tasks? Visual identification tasks show that human speakers can exhibit considerable variation in their understanding, representation and verification of certain quantifiers. This paper initiates an investigation into neural models of these psychosemantic tasks. We trained two types of network – a convolutional neural network (CNN) model and a recurrent model of visual attention (RAM) – on the “most” verification task from Pietroski et al. (2009), manipulating the visual scene and novel notions of task duration. Our results qualitatively mirror certain features of human performance (such as sensitivity to the ratio of set sizes, indicating a reliance on approximate number) while differing in interesting ways (such as exhibiting a subtly different pattern for the effect of image type). We conclude by discussing the prospects for using neural models as cognitive models of this and other psychosemantic tasks.
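To make the verification task concrete, the following Python/PyTorch sketch shows one way a CNN could be trained to judge whether most of the items in a dot scene are yellow. This is a minimal illustration only: the scene generator, network sizes and hyperparameters are assumptions made here for exposition, not the authors' setup, which is available in the shanest/neural-vision-most repository linked below.

import numpy as np
import torch
import torch.nn as nn

def make_scene(n_yellow, n_blue, size=64, radius=2):
    # Render a toy scene of yellow and blue squares (stand-ins for dots)
    # as a (3, size, size) float array; overlaps are ignored for simplicity.
    img = np.zeros((3, size, size), dtype=np.float32)
    colors = [(1.0, 1.0, 0.0)] * n_yellow + [(0.0, 0.0, 1.0)] * n_blue
    for r, g, b in colors:
        x, y = np.random.randint(radius, size - radius, size=2)
        img[0, y - radius:y + radius, x - radius:x + radius] = r
        img[1, y - radius:y + radius, x - radius:x + radius] = g
        img[2, y - radius:y + radius, x - radius:x + radius] = b
    return img

class MostCNN(nn.Module):
    # Small CNN that predicts whether most items in the scene are yellow.
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # "most" true vs. false

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# One illustrative training step on a batch of randomly generated scenes.
model = MostCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
counts = np.random.randint(1, 10, size=(8, 2))  # (yellow, blue) per scene
x = torch.tensor(np.stack([make_scene(a, b) for a, b in counts]))
y = torch.tensor((counts[:, 0] > counts[:, 1]).astype(np.int64))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
opt.step()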
Anthology ID: W19-2916
Volume: Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
Month: June
Year: 2019
Address: Minneapolis, Minnesota
Editors: Emmanuele Chersoni, Cassandra Jacobs, Alessandro Lenci, Tal Linzen, Laurent Prévot, Enrico Santus
Venue: CMCL
Publisher: Association for Computational Linguistics
Pages: 140–151
URL: https://aclanthology.org/W19-2916
DOI: 10.18653/v1/W19-2916
Cite (ACL): Lewis O’Sullivan and Shane Steinert-Threlkeld. 2019. Neural Models of the Psychosemantics of ‘Most’. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, pages 140–151, Minneapolis, Minnesota. Association for Computational Linguistics.
Cite (Informal): Neural Models of the Psychosemantics of ‘Most’ (O’Sullivan & Steinert-Threlkeld, CMCL 2019)
PDF: https://aclanthology.org/W19-2916.pdf
Code: shanest/neural-vision-most