Is Your Goal-Oriented Dialog Model Performing Really Well? Empirical Analysis of System-wise Evaluation

Ryuichi Takanobu, Qi Zhu, Jinchao Li, Baolin Peng, Jianfeng Gao, Minlie Huang


Abstract
There is a growing interest in developing goal-oriented dialog systems which serve users in accomplishing complex tasks through multi-turn conversations. Although many methods are devised to evaluate and improve the performance of individual dialog components, there is a lack of comprehensive empirical study on how different components contribute to the overall performance of a dialog system. In this paper, we perform a system-wise evaluation and present an empirical analysis on different types of dialog systems which are composed of different modules in different settings. Our results show that (1) a pipeline dialog system trained using fine-grained supervision signals at different component levels often obtains better performance than the systems that use joint or end-to-end models trained on coarse-grained labels, (2) component-wise, single-turn evaluation results are not always consistent with the overall performance of a dialog system, and (3) despite the discrepancy between simulators and human users, simulated evaluation is still a valid alternative to the costly human evaluation especially in the early stage of development.
Anthology ID:
2020.sigdial-1.37
Volume:
Proceedings of the 21th Annual Meeting of the Special Interest Group on Discourse and Dialogue
Month:
July
Year:
2020
Address:
1st virtual meeting
Editors:
Olivier Pietquin, Smaranda Muresan, Vivian Chen, Casey Kennington, David Vandyke, Nina Dethlefs, Koji Inoue, Erik Ekstedt, Stefan Ultes
Venue:
SIGDIAL
SIG:
SIGDIAL
Publisher:
Association for Computational Linguistics
Pages:
297–310
URL:
https://aclanthology.org/2020.sigdial-1.37
DOI:
10.18653/v1/2020.sigdial-1.37
Bibkey:
Cite (ACL):
Ryuichi Takanobu, Qi Zhu, Jinchao Li, Baolin Peng, Jianfeng Gao, and Minlie Huang. 2020. Is Your Goal-Oriented Dialog Model Performing Really Well? Empirical Analysis of System-wise Evaluation. In Proceedings of the 21th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 297–310, 1st virtual meeting. Association for Computational Linguistics.
Cite (Informal):
Is Your Goal-Oriented Dialog Model Performing Really Well? Empirical Analysis of System-wise Evaluation (Takanobu et al., SIGDIAL 2020)
PDF:
https://aclanthology.org/2020.sigdial-1.37.pdf
Video:
https://youtube.com/watch?v=1Xfqq0mt1X8