TSDG: Content-aware Neural Response Generation with Two-stage Decoding Process

Junsheng Kong, Zhicheng Zhong, Yi Cai, Xin Wu, Da Ren


Abstract
Neural response generation models have achieved remarkable progress in recent years but still tend to yield irrelevant and uninformative responses. One reason is that encoder-decoder models typically use a single decoder to generate a complete response in one pass. This favors high-frequency function words, which carry little semantic information, over low-frequency content words, which carry more. To address this issue, we propose a content-aware model with a two-stage decoding process, named Two-stage Dialogue Generation (TSDG). We separate the decoding of content words from that of function words, so that content words can be generated independently, without interference from function words. Experimental results on two datasets indicate that our model significantly outperforms several competitive generative models in both automatic and human evaluation.
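The core idea in the abstract can be illustrated with a minimal, hedged sketch. The names below (`FUNCTION_WORDS`, `stage1_decode_content`, `stage2_realize`) are illustrative assumptions, not the paper's actual implementation: stage 1 stands in for a decoder that emits only content words, and stage 2 stands in for a second decoder that realizes the full response around them.

```python
# Illustrative sketch of two-stage decoding (NOT the authors' code).
# Stage 1 produces content words without function-word interference;
# stage 2 conditions on them to produce the complete response.

FUNCTION_WORDS = {"the", "a", "an", "is", "are", "to", "of", "and", "in"}


def stage1_decode_content(draft_tokens):
    """Toy stage-1 decoder: keep only content words from a draft hypothesis.
    (The real model would generate them with a dedicated decoder.)"""
    return [t for t in draft_tokens if t.lower() not in FUNCTION_WORDS]


def stage2_realize(content_words):
    """Toy stage-2 decoder: realize a full response from the content words.
    (The real model would use a second decoder to insert function words.)"""
    return " ".join(content_words)


draft = ["the", "weather", "in", "Paris", "is", "lovely"]
content = stage1_decode_content(draft)   # -> ["weather", "Paris", "lovely"]
response = stage2_realize(content)
```

This only conveys the decomposition: semantically heavy words are committed to first, then the surface form is completed, rather than producing the whole sequence with one decoder.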
Anthology ID:
2020.findings-emnlp.192
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2020
Month:
November
Year:
2020
Address:
Online
Editors:
Trevor Cohn, Yulan He, Yang Liu
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2121–2126
URL:
https://aclanthology.org/2020.findings-emnlp.192
DOI:
10.18653/v1/2020.findings-emnlp.192
Cite (ACL):
Junsheng Kong, Zhicheng Zhong, Yi Cai, Xin Wu, and Da Ren. 2020. TSDG: Content-aware Neural Response Generation with Two-stage Decoding Process. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2121–2126, Online. Association for Computational Linguistics.
Cite (Informal):
TSDG: Content-aware Neural Response Generation with Two-stage Decoding Process (Kong et al., Findings 2020)
PDF:
https://aclanthology.org/2020.findings-emnlp.192.pdf
Video:
https://slideslive.com/38940706