Designing Precise and Robust Dialogue Response Evaluators

Tianyu Zhao, Divesh Lala, Tatsuya Kawahara


Abstract
Automatic dialogue response evaluators have been proposed as an alternative to automated metrics and human evaluation. However, existing automatic evaluators achieve only moderate correlation with human judgement and are not robust. In this work, we propose to build a reference-free evaluator and to exploit the power of semi-supervised training and pretrained (masked) language models. Experimental results demonstrate that the proposed evaluator achieves a strong correlation (> 0.6) with human judgement and generalizes robustly to diverse responses and corpora. We open-source the code and data at https://github.com/ZHAOTING/dialog-processing.
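The abstract describes a reference-free evaluator built on a pretrained (masked) language model. The sketch below is illustrative only and is not the authors' released code (see ZHAOTING/dialog-processing for that): it assumes a RoBERTa encoder from the HuggingFace transformers library and a small regression head that maps the pooled encoding of a (context, response) pair to a scalar quality score.

# Minimal sketch of a reference-free response evaluator (illustrative assumptions:
# "roberta-base" encoder, first-token pooling, two-layer regression head).
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class ReferenceFreeEvaluator(nn.Module):
    def __init__(self, pretrained_name: str = "roberta-base"):
        super().__init__()
        # Pretrained masked language model used as the (context, response) encoder.
        self.encoder = AutoModel.from_pretrained(pretrained_name)
        hidden = self.encoder.config.hidden_size
        # Regression head producing one quality score per input pair.
        self.scorer = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state[:, 0]  # representation of the first (<s>) token
        return self.scorer(pooled).squeeze(-1)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = ReferenceFreeEvaluator("roberta-base")

context = "A: How was your weekend? B: Pretty relaxing, I went hiking."
response = "That sounds great, where did you go?"
batch = tokenizer(context, response, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(batch["input_ids"], batch["attention_mask"])
print(float(score))  # untrained head -> arbitrary value until the model is fine-tuned

In practice such a model would be trained on human quality ratings (optionally with the semi-supervised objectives the paper proposes) and then validated by measuring Spearman or Pearson correlation between its predicted scores and human judgements.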
Anthology ID:
2020.acl-main.4
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
26–33
URL:
https://aclanthology.org/2020.acl-main.4
DOI:
10.18653/v1/2020.acl-main.4
Cite (ACL):
Tianyu Zhao, Divesh Lala, and Tatsuya Kawahara. 2020. Designing Precise and Robust Dialogue Response Evaluators. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 26–33, Online. Association for Computational Linguistics.
Cite (Informal):
Designing Precise and Robust Dialogue Response Evaluators (Zhao et al., ACL 2020)
PDF:
https://aclanthology.org/2020.acl-main.4.pdf
Video:
http://slideslive.com/38928816
Code:
ZHAOTING/dialog-processing
Data:
DailyDialog