Preserving Distributional Information in Dialogue Act Classification

Quan Hung Tran, Ingrid Zukerman, Gholamreza Haffari


Abstract
This paper introduces a novel training/decoding strategy for sequence labeling. Instead of greedily choosing a label at each time step, and using it for the next prediction, we retain the probability distribution over the current label, and pass this distribution to the next prediction. This approach allows us to avoid the effect of label bias and error propagation in sequence learning/decoding. Our experiments on dialogue act classification demonstrate the effectiveness of this approach. Even though our underlying neural network model is relatively simple, it outperforms more complex neural models, achieving state-of-the-art results on the MapTask and Switchboard corpora.
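To make the core idea concrete, below is a minimal sketch of distribution-preserving decoding as described in the abstract. It is not the authors' actual model (the paper uses a recurrent architecture over utterances); all parameter names (W_x, W_y, b, label_emb) are illustrative. The key move is feeding the next step the expected label embedding under the previous step's predicted distribution, rather than the embedding of a single greedy label choice.

import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def decode_with_distributions(inputs, W_x, W_y, b, label_emb):
    """Sequence decoding that propagates the full label distribution.

    inputs:    iterable of per-step feature vectors, shape (input_dim,)
    W_x:       (num_labels, input_dim) input weights    -- hypothetical
    W_y:       (num_labels, emb_dim) label-feedback weights -- hypothetical
    b:         (num_labels,) bias                        -- hypothetical
    label_emb: (num_labels, emb_dim) label embeddings    -- hypothetical
    """
    num_labels = label_emb.shape[0]
    prev_dist = np.full(num_labels, 1.0 / num_labels)  # uniform prior at t=0
    predictions = []
    for x_t in inputs:
        # Expected label embedding under the previous distribution,
        # instead of the embedding of the single argmax label.
        soft_label = prev_dist @ label_emb
        logits = W_x @ x_t + W_y @ soft_label + b
        prev_dist = softmax(logits)  # pass the full distribution forward
        predictions.append(int(prev_dist.argmax()))
    return predictions

Because the model never commits to a hard label mid-sequence, an early low-confidence mistake contributes only its probability mass to later predictions instead of being locked in, which is how this scheme mitigates error propagation.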
Anthology ID:
D17-1229
Volume:
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Month:
September
Year:
2017
Address:
Copenhagen, Denmark
Editors:
Martha Palmer, Rebecca Hwa, Sebastian Riedel
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
2151–2156
URL:
https://aclanthology.org/D17-1229
DOI:
10.18653/v1/D17-1229
Cite (ACL):
Quan Hung Tran, Ingrid Zukerman, and Gholamreza Haffari. 2017. Preserving Distributional Information in Dialogue Act Classification. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2151–2156, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal):
Preserving Distributional Information in Dialogue Act Classification (Tran et al., EMNLP 2017)
PDF:
https://aclanthology.org/D17-1229.pdf