Learning Word Groundings from Humans Facilitated by Robot Emotional Displays

David McNeill, Casey Kennington


Abstract
In working towards accomplishing a human-level acquisition and understanding of language, a robot must meet two requirements: the ability to learn words from interactions with its physical environment, and the ability to learn language from people in settings for language use, such as spoken dialogue. In a live interactive study, we test the hypothesis that emotional displays are a viable solution to the cold-start problem of how to communicate without relying on language the robot does not, and indeed cannot, yet know. We explain our modular system that can autonomously learn word groundings through interaction and show through a user study with 21 participants that emotional displays improve the quantity and quality of the inputs provided to the robot.
Anthology ID:
2020.sigdial-1.13
Volume:
Proceedings of the 21st Annual Meeting of the Special Interest Group on Discourse and Dialogue
Month:
July
Year:
2020
Address:
1st virtual meeting
Editors:
Olivier Pietquin, Smaranda Muresan, Vivian Chen, Casey Kennington, David Vandyke, Nina Dethlefs, Koji Inoue, Erik Ekstedt, Stefan Ultes
Venue:
SIGDIAL
SIG:
SIGDIAL
Publisher:
Association for Computational Linguistics
Pages:
97–106
URL:
https://aclanthology.org/2020.sigdial-1.13
DOI:
10.18653/v1/2020.sigdial-1.13
Cite (ACL):
David McNeill and Casey Kennington. 2020. Learning Word Groundings from Humans Facilitated by Robot Emotional Displays. In Proceedings of the 21st Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 97–106, 1st virtual meeting. Association for Computational Linguistics.
Cite (Informal):
Learning Word Groundings from Humans Facilitated by Robot Emotional Displays (McNeill & Kennington, SIGDIAL 2020)
PDF:
https://aclanthology.org/2020.sigdial-1.13.pdf
Video:
https://youtube.com/watch?v=xTNbo840EPk
Data
ImageNet