SMILEE: Symmetric Multi-modal Interactions with Language-gesture Enabled (AI) Embodiment

Sujeong Kim, David Salter, Luke DeLuccia, Kilho Son, Mohamed R. Amer, Amir Tamrakar


Abstract
We demonstrate an intelligent conversational agent system designed to advance human-machine collaborative tasks. The agent interprets a user's communicative intent from both verbal utterances and non-verbal behaviors such as gestures. Through its embodiment as an avatar, the agent can itself communicate using both natural language and gestures, facilitating natural, symmetric multi-modal interactions. As use cases of our system, we demonstrate two intelligent agents with specialized skills in the Blocks World.
Anthology ID: N18-5018
Volume: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations
Month: June
Year: 2018
Address: New Orleans, Louisiana
Editors: Yang Liu, Tim Paek, Manasi Patwardhan
Venue: NAACL
Publisher: Association for Computational Linguistics
Pages: 86–90
URL: https://aclanthology.org/N18-5018
DOI: 10.18653/v1/N18-5018
Cite (ACL): Sujeong Kim, David Salter, Luke DeLuccia, Kilho Son, Mohamed R. Amer, and Amir Tamrakar. 2018. SMILEE: Symmetric Multi-modal Interactions with Language-gesture Enabled (AI) Embodiment. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 86–90, New Orleans, Louisiana. Association for Computational Linguistics.
Cite (Informal): SMILEE: Symmetric Multi-modal Interactions with Language-gesture Enabled (AI) Embodiment (Kim et al., NAACL 2018)
PDF: https://aclanthology.org/N18-5018.pdf