Probing Task-Oriented Dialogue Representation from Language Models

Chien-Sheng Wu, Caiming Xiong


Abstract
This paper investigates pre-trained language models to find out which model intrinsically carries the most informative representation for task-oriented dialogue tasks. We approach the problem from two aspects: a supervised classifier probe and an unsupervised mutual information probe. For the classifier probe, we fine-tune a feed-forward layer on top of a fixed pre-trained language model using annotated labels in a supervised way. Meanwhile, we propose an unsupervised mutual information probe to evaluate the mutual dependence between a gold-label clustering and a clustering of the representations. The goals of this empirical paper are to 1) investigate probing techniques, especially from the unsupervised mutual information aspect, 2) provide guidelines for pre-trained language model selection for the dialogue research community, and 3) identify insights into pre-training factors that may be key to successful dialogue applications.
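The unsupervised probe described above scores the mutual dependence between two clusterings of the same utterances: one induced by gold annotations and one induced by clustering the model's representations. As a minimal sketch (not the authors' code; the function name and inputs are illustrative), mutual information between two label assignments can be computed from their joint and marginal counts:

```python
from collections import Counter
from math import log

def mutual_information(labels_a, labels_b):
    """Mutual information (in nats) between two clusterings of the same items.

    labels_a / labels_b: cluster assignments, one label per item, e.g.
    gold dialogue-act labels vs. k-means cluster ids of LM representations.
    """
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    count_a = Counter(labels_a)            # marginal counts of clustering A
    count_b = Counter(labels_b)            # marginal counts of clustering B
    count_ab = Counter(zip(labels_a, labels_b))  # joint counts

    mi = 0.0
    for (a, b), n_ab in count_ab.items():
        # p(a,b) * log( p(a,b) / (p(a) p(b)) ), with probabilities as counts / n
        mi += (n_ab / n) * log(n * n_ab / (count_a[a] * count_b[b]))
    return mi

# Perfectly dependent clusterings (a relabeling) give maximal MI;
# independent clusterings give MI = 0.
print(mutual_information([0, 0, 1, 1], [1, 1, 0, 0]))  # log(2) ≈ 0.693
print(mutual_information([0, 0, 1, 1], [0, 1, 0, 1]))  # 0.0
```

In practice one would normalize this score (e.g. adjusted or normalized mutual information) so that probes over different numbers of clusters are comparable.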
Anthology ID:
2020.emnlp-main.409
Volume:
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Month:
November
Year:
2020
Address:
Online
Editors:
Bonnie Webber, Trevor Cohn, Yulan He, Yang Liu
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
5036–5051
URL:
https://aclanthology.org/2020.emnlp-main.409
DOI:
10.18653/v1/2020.emnlp-main.409
Cite (ACL):
Chien-Sheng Wu and Caiming Xiong. 2020. Probing Task-Oriented Dialogue Representation from Language Models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5036–5051, Online. Association for Computational Linguistics.
Cite (Informal):
Probing Task-Oriented Dialogue Representation from Language Models (Wu & Xiong, EMNLP 2020)
PDF:
https://aclanthology.org/2020.emnlp-main.409.pdf
Video:
https://slideslive.com/38938961