Jointly Learning to Parse and Perceive: Connecting Natural Language to the Physical World

Jayant Krishnamurthy, Thomas Kollar


Abstract
This paper introduces Logical Semantics with Perception (LSP), a model for grounded language acquisition that learns to map natural language statements to their referents in a physical environment. For example, given an image, LSP can map the statement “blue mug on the table” to the set of image segments showing blue mugs on tables. LSP learns physical representations for both categorical (“blue,” “mug”) and relational (“on”) language, and also learns to compose these representations to produce the referents of entire statements. We further introduce a weakly supervised training procedure that estimates LSP’s parameters using annotated referents for entire statements, without annotated referents for individual words or the parse structure of the statement. We perform experiments on two applications: scene understanding and geographical question answering. We find that LSP outperforms existing, less expressive models that cannot represent relational language. We further find that weakly supervised training is competitive with fully supervised training while requiring significantly less annotation effort.
Anthology ID: Q13-1016
Volume: Transactions of the Association for Computational Linguistics, Volume 1
Year: 2013
Address: Cambridge, MA
Editors: Dekang Lin, Michael Collins
Venue: TACL
Publisher: MIT Press
Pages: 193–206
URL: https://aclanthology.org/Q13-1016
DOI: 10.1162/tacl_a_00220
Cite (ACL): Jayant Krishnamurthy and Thomas Kollar. 2013. Jointly Learning to Parse and Perceive: Connecting Natural Language to the Physical World. Transactions of the Association for Computational Linguistics, 1:193–206.
Cite (Informal): Jointly Learning to Parse and Perceive: Connecting Natural Language to the Physical World (Krishnamurthy & Kollar, TACL 2013)
PDF: https://aclanthology.org/Q13-1016.pdf