Why are Sequence-to-Sequence Models So Dull? Understanding the Low-Diversity Problem of Chatbots

Shaojie Jiang, Maarten de Rijke


Abstract
Diversity is a long-studied topic in information retrieval that usually refers to the requirement that retrieved results should be non-repetitive and cover different aspects. In a conversational setting, an additional dimension of diversity matters: an engaging response generation system should be able to output responses that are diverse and interesting. Sequence-to-sequence (Seq2Seq) models have been shown to be very effective for response generation. However, dialogue responses generated by Seq2Seq models tend to have low diversity. In this paper, we review known sources and existing approaches to this low-diversity problem. We also identify a source of low diversity that has been little studied so far, namely model over-confidence. We sketch several directions for tackling model over-confidence and, hence, the low-diversity problem, including confidence penalties and label smoothing.
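As a rough illustration of the two remedies named in the abstract, the sketch below implements uniform label smoothing and an entropy-based confidence penalty on top of a standard token-level cross-entropy loss. The PyTorch code, function names, and the epsilon/beta values are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def label_smoothed_loss(logits, target, epsilon=0.1):
    """Cross-entropy with uniform label smoothing (illustrative values).

    logits: (batch, vocab_size) unnormalized scores
    target: (batch,) gold token indices
    epsilon: probability mass spread uniformly over the vocabulary
    """
    log_probs = F.log_softmax(logits, dim=-1)                              # (batch, vocab)
    nll = -log_probs.gather(dim=-1, index=target.unsqueeze(-1)).squeeze(-1)
    uniform_nll = -log_probs.mean(dim=-1)                                  # expected NLL under a uniform target
    return ((1.0 - epsilon) * nll + epsilon * uniform_nll).mean()


def confidence_penalized_loss(logits, target, beta=0.1):
    """Cross-entropy minus a weighted entropy bonus, discouraging
    over-confident (low-entropy) output distributions."""
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(dim=-1, index=target.unsqueeze(-1)).squeeze(-1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1)                   # H(p) per token
    return (nll - beta * entropy).mean()
```

Both variants keep the usual maximum-likelihood objective but soften the pressure toward peaked output distributions, which is the intuition behind using them against model over-confidence; the specific hyperparameters would need tuning per model and dataset.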
Anthology ID: W18-5712
Volume: Proceedings of the 2018 EMNLP Workshop SCAI: The 2nd International Workshop on Search-Oriented Conversational AI
Month: October
Year: 2018
Address: Brussels, Belgium
Editors: Aleksandr Chuklin, Jeff Dalton, Julia Kiseleva, Alexey Borisov, Mikhail Burtsev
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 81–86
URL: https://aclanthology.org/W18-5712
DOI: 10.18653/v1/W18-5712
Cite (ACL): Shaojie Jiang and Maarten de Rijke. 2018. Why are Sequence-to-Sequence Models So Dull? Understanding the Low-Diversity Problem of Chatbots. In Proceedings of the 2018 EMNLP Workshop SCAI: The 2nd International Workshop on Search-Oriented Conversational AI, pages 81–86, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal): Why are Sequence-to-Sequence Models So Dull? Understanding the Low-Diversity Problem of Chatbots (Jiang & de Rijke, EMNLP 2018)
PDF: https://aclanthology.org/W18-5712.pdf