A Knowledge-Grounded Multimodal Search-Based Conversational Agent

Shubham Agarwal, Ondřej Dušek, Ioannis Konstas, Verena Rieser

Abstract
Multimodal search-based dialogue is a challenging new task: It extends visually grounded question answering systems into multi-turn conversations with access to an external database. We address this new challenge by learning a neural response generation system from the recently released Multimodal Dialogue (MMD) dataset (Saha et al., 2017). We introduce a knowledge-grounded multimodal conversational model where an encoded knowledge base (KB) representation is appended to the decoder input. Our model substantially outperforms strong baselines in terms of text-based similarity measures (over 9 BLEU points, 3 of which are solely due to the use of additional information from the KB).
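The core architectural change the abstract describes, appending an encoded KB representation to the decoder input, is small enough to sketch. Below is a minimal PyTorch illustration of that idea, assuming a GRU decoder; the class name, dimensions, and exact concatenation point are illustrative assumptions, not the authors' implementation (see the linked shubhamagarwal92/mmd repository for the actual code).

import torch
import torch.nn as nn

class KBGroundedDecoder(nn.Module):
    """Hypothetical decoder: a fixed KB encoding vector is concatenated
    to the token embedding at every decoding step."""

    def __init__(self, vocab_size, emb_dim=256, kb_dim=128, hidden_dim=512):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        # GRU input = token embedding + KB encoding, concatenated.
        self.gru = nn.GRU(emb_dim + kb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, kb_encoding, hidden=None):
        # tokens: (batch, seq_len); kb_encoding: (batch, kb_dim)
        emb = self.embedding(tokens)                       # (batch, seq, emb_dim)
        kb = kb_encoding.unsqueeze(1).expand(-1, emb.size(1), -1)
        dec_in = torch.cat([emb, kb], dim=-1)              # append KB at each step
        output, hidden = self.gru(dec_in, hidden)
        return self.out(output), hidden                    # per-step vocab logits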
Anthology ID:
W18-5709
Volume:
Proceedings of the 2018 EMNLP Workshop SCAI: The 2nd International Workshop on Search-Oriented Conversational AI
Month:
October
Year:
2018
Address:
Brussels, Belgium
Editors:
Aleksandr Chuklin, Jeff Dalton, Julia Kiseleva, Alexey Borisov, Mikhail Burtsev
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
59–66
URL:
https://aclanthology.org/W18-5709
DOI:
10.18653/v1/W18-5709
Cite (ACL):
Shubham Agarwal, Ondřej Dušek, Ioannis Konstas, and Verena Rieser. 2018. A Knowledge-Grounded Multimodal Search-Based Conversational Agent. In Proceedings of the 2018 EMNLP Workshop SCAI: The 2nd International Workshop on Search-Oriented Conversational AI, pages 59–66, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
A Knowledge-Grounded Multimodal Search-Based Conversational Agent (Agarwal et al., EMNLP 2018)
PDF:
https://aclanthology.org/W18-5709.pdf
Code:
shubhamagarwal92/mmd