Retrieval-Enhanced Adversarial Training for Neural Response Generation

Qingfu Zhu, Lei Cui, Wei-Nan Zhang, Furu Wei, Ting Liu


Abstract
Dialogue systems are usually built on either generation-based or retrieval-based approaches, yet neither approach benefits from the advantages of the other. In this paper, we propose a Retrieval-Enhanced Adversarial Training (REAT) method for neural response generation. Distinct from existing approaches, the REAT method leverages an encoder-decoder framework within an adversarial training paradigm, while taking advantage of N-best response candidates from a retrieval-based system to construct the discriminator. An empirical study on a large-scale, publicly available benchmark dataset shows that the REAT method significantly outperforms the vanilla Seq2Seq model as well as the conventional adversarial training approach.
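The abstract only sketches the architecture. As a rough illustration (not the authors' implementation), the PyTorch snippet below shows one way a retrieval-enhanced discriminator could condition on the input message, a candidate response, and the N-best responses returned by a retrieval system. All class, variable, and hyperparameter names (RetrievalEnhancedDiscriminator, n_best, hidden sizes, etc.) are assumptions for illustration.

```python
# Minimal sketch of a retrieval-enhanced discriminator: it scores a response
# given the input message and pooled encodings of N-best retrieved candidates.
# Names and sizes are illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn


class RetrievalEnhancedDiscriminator(nn.Module):
    """Scores (message, response) pairs, conditioned on retrieved N-best candidates."""

    def __init__(self, vocab_size: int, emb_dim: int = 128, hid_dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # message + response + pooled candidate encodings -> real/fake score
        self.classifier = nn.Sequential(
            nn.Linear(3 * hid_dim, hid_dim), nn.Tanh(), nn.Linear(hid_dim, 1)
        )

    def encode(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len) -> final GRU hidden state (batch, hid_dim)
        _, h = self.encoder(self.embed(tokens))
        return h.squeeze(0)

    def forward(self, message, response, candidates):
        # message, response: (batch, seq_len); candidates: (batch, n_best, seq_len)
        msg_vec = self.encode(message)
        resp_vec = self.encode(response)
        b, n, l = candidates.shape
        cand_vec = self.encode(candidates.view(b * n, l)).view(b, n, -1).mean(dim=1)
        logits = self.classifier(torch.cat([msg_vec, resp_vec, cand_vec], dim=-1))
        return torch.sigmoid(logits).squeeze(-1)  # probability the response is human-written


if __name__ == "__main__":
    # Toy usage: random token ids stand in for a message, a generated response,
    # and 5 retrieved candidate responses.
    disc = RetrievalEnhancedDiscriminator(vocab_size=1000)
    message = torch.randint(1, 1000, (2, 12))
    response = torch.randint(1, 1000, (2, 10))
    candidates = torch.randint(1, 1000, (2, 5, 10))
    print(disc(message, response, candidates))  # tensor of shape (2,)
```

In the adversarial setup the abstract describes, such a score would act as a reward signal: the discriminator is trained to separate human responses from generated ones, while the Seq2Seq generator is updated (typically via a policy-gradient estimator, since sampled tokens are discrete) to produce responses that the retrieval-conditioned discriminator rates as human-like.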
Anthology ID:
P19-1366
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Editors:
Anna Korhonen, David Traum, Lluís Màrquez
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
3763–3773
URL:
https://aclanthology.org/P19-1366
DOI:
10.18653/v1/P19-1366
Cite (ACL):
Qingfu Zhu, Lei Cui, Wei-Nan Zhang, Furu Wei, and Ting Liu. 2019. Retrieval-Enhanced Adversarial Training for Neural Response Generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3763–3773, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Retrieval-Enhanced Adversarial Training for Neural Response Generation (Zhu et al., ACL 2019)
PDF:
https://aclanthology.org/P19-1366.pdf