An Adversarial Learning Framework For A Persona-Based Multi-Turn Dialogue Model

Oluwatobi Olabiyi, Anish Khazane, Alan Salimov, Erik Mueller


Abstract
In this paper, we extend the persona-based sequence-to-sequence (Seq2Seq) neural network conversation model to a multi-turn dialogue scenario by modifying the state-of-the-art hredGAN architecture to simultaneously capture utterance attributes such as speaker identity, dialogue topic, and speaker sentiment. The proposed system, phredGAN, has a persona-based HRED generator (PHRED) and a conditional discriminator. We explore two approaches to implementing the conditional discriminator: (1) phredGAN_a, a system that passes the attribute representation as an additional input into a traditional adversarial discriminator, and (2) phredGAN_d, a dual-discriminator system that, in addition to the adversarial discriminator, collaboratively predicts the attribute(s) that generated the input utterance. To demonstrate the superior performance of phredGAN over the persona Seq2Seq model, we experiment with two conversational datasets: the Ubuntu Dialogue Corpus (UDC) and TV series transcripts from The Big Bang Theory and Friends. Performance is compared using a variety of quantitative measures as well as crowd-sourced human evaluation. We also examine the trade-offs between the two phredGAN variants on datasets with many but weak attribute modalities (such as The Big Bang Theory and Friends transcripts) and those with few but strong attribute modalities (customer-agent interactions in the Ubuntu dataset).
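The two discriminator-conditioning schemes named in the abstract can be sketched in toy form. This is a minimal, hypothetical illustration of the interfaces only: the function names, representations, and scoring logic are illustrative assumptions, not the authors' implementation (which uses neural discriminators over utterance sequences).

```python
import math

# Hypothetical sketch of the two conditional-discriminator variants.
# Utterances and attributes are stand-in feature lists, not real embeddings.

def score_adversarial(utterance_repr, attribute_repr):
    """phredGAN_a (illustrative): the attribute representation is fed into
    the adversarial discriminator alongside the utterance, here by list
    concatenation, before the single real/fake decision."""
    joint = utterance_repr + attribute_repr  # list concatenation
    return 1.0 / (1.0 + math.exp(-sum(joint)))  # toy sigmoid -> P(real)

def score_dual(utterance_repr, attribute_vocab):
    """phredGAN_d (illustrative): two heads -- an adversarial real/fake
    score, plus a collaborative head that predicts which attribute
    (e.g. speaker identity) produced the utterance."""
    real_score = 1.0 / (1.0 + math.exp(-sum(utterance_repr)))
    # Toy attribute "classifier": pick the index closest to the mean feature.
    mean = sum(utterance_repr) / len(utterance_repr)
    predicted = min(range(len(attribute_vocab)), key=lambda i: abs(i - mean))
    return real_score, attribute_vocab[predicted]
```

In the first variant the attribute only conditions the real/fake judgment; in the second, the extra prediction head supplies a separate training signal that pushes the generator to produce attribute-consistent utterances.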
Anthology ID:
W19-2301
Volume:
Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation
Month:
June
Year:
2019
Address:
Minneapolis, Minnesota
Editors:
Antoine Bosselut, Asli Celikyilmaz, Marjan Ghazvininejad, Srinivasan Iyer, Urvashi Khandelwal, Hannah Rashkin, Thomas Wolf
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
1–10
URL:
https://aclanthology.org/W19-2301
DOI:
10.18653/v1/W19-2301
Cite (ACL):
Oluwatobi Olabiyi, Anish Khazane, Alan Salimov, and Erik Mueller. 2019. An Adversarial Learning Framework For A Persona-Based Multi-Turn Dialogue Model. In Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation, pages 1–10, Minneapolis, Minnesota. Association for Computational Linguistics.
Cite (Informal):
An Adversarial Learning Framework For A Persona-Based Multi-Turn Dialogue Model (Olabiyi et al., NAACL 2019)
PDF:
https://aclanthology.org/W19-2301.pdf