Human-like informative conversations: Better acknowledgements using conditional mutual information

Ashwin Paranjape, Christopher Manning


Abstract
This work aims to build a dialogue agent that can weave new factual content into conversations as naturally as humans. We draw insights from linguistic principles of conversational analysis and annotate human-human conversations from the Switchboard Dialog Act Corpus to examine human strategies for acknowledgement, transition, detail selection and presentation. When current chatbots (explicitly provided with new factual content) introduce facts into a conversation, their generated responses do not acknowledge the prior turns. This is because models trained with two contexts - new factual content and conversational history - generate responses that are non-specific w.r.t. one of the contexts, typically the conversational history. We show that specificity w.r.t. conversational history is better captured by pointwise conditional mutual information (pcmi_h) than by the established use of pointwise mutual information (pmi). Our proposed method, Fused-PCMI, trades off pmi for pcmi_h and is preferred by humans for overall quality over the Max-PMI baseline 60% of the time. Human evaluators also judge responses with higher pcmi_h better at acknowledgement 74% of the time. The results demonstrate that systems mimicking human conversational traits (in this case acknowledgement) improve overall quality and more broadly illustrate the utility of linguistic principles in improving dialogue agents.
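For readers skimming the abstract, the contrast between the two scores can be sketched with standard definitions of pointwise (conditional) mutual information. The notation here is an assumption for illustration, not copied from the paper: y is a candidate response, h the conversational history, and f the new factual content.

```latex
% pmi: specificity of y w.r.t. the full context (f, h) jointly
\mathrm{pmi}(y) = \log \frac{p(y \mid f, h)}{p(y)}

% pcmi_h: specificity of y w.r.t. the history h, given the fact f --
% it stays low for responses that ignore h, even if they fit f well
\mathrm{pcmi}_h(y) = \log \frac{p(y \mid f, h)}{p(y \mid f)}
```

Intuitively, a response that merely restates the fact can still score a high pmi (it is unlikely a priori), but its pcmi_h will be near zero because conditioning on h adds nothing; selecting for pcmi_h therefore favors responses that acknowledge the prior turns.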
Anthology ID:
2021.naacl-main.61
Volume:
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
June
Year:
2021
Address:
Online
Editors:
Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, Yichao Zhou
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
768–781
URL:
https://aclanthology.org/2021.naacl-main.61
DOI:
10.18653/v1/2021.naacl-main.61
Cite (ACL):
Ashwin Paranjape and Christopher Manning. 2021. Human-like informative conversations: Better acknowledgements using conditional mutual information. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 768–781, Online. Association for Computational Linguistics.
Cite (Informal):
Human-like informative conversations: Better acknowledgements using conditional mutual information (Paranjape & Manning, NAACL 2021)
PDF:
https://aclanthology.org/2021.naacl-main.61.pdf
Video:
https://aclanthology.org/2021.naacl-main.61.mp4
Code:
AshwinParanjape/human-like-informative-conversations