Neural Dialogue State Tracking with Temporally Expressive Networks

Junfan Chen, Richong Zhang, Yongyi Mao, Jie Xu


Abstract
Dialogue state tracking (DST) is an important component of a spoken dialogue system. Existing DST models either ignore temporal feature dependencies across dialogue turns or fail to explicitly model temporal state dependencies in a dialogue. In this work, we propose Temporally Expressive Networks (TEN) to jointly model these two types of temporal dependencies in DST. The TEN model combines the power of recurrent networks and probabilistic graphical models. Evaluated on standard datasets, TEN is shown to improve the accuracy of both turn-level state prediction and state aggregation.
Anthology ID:
2020.findings-emnlp.142
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2020
Month:
November
Year:
2020
Address:
Online
Editors:
Trevor Cohn, Yulan He, Yang Liu
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1570–1579
URL:
https://aclanthology.org/2020.findings-emnlp.142
DOI:
10.18653/v1/2020.findings-emnlp.142
Cite (ACL):
Junfan Chen, Richong Zhang, Yongyi Mao, and Jie Xu. 2020. Neural Dialogue State Tracking with Temporally Expressive Networks. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1570–1579, Online. Association for Computational Linguistics.
Cite (Informal):
Neural Dialogue State Tracking with Temporally Expressive Networks (Chen et al., Findings 2020)
PDF:
https://aclanthology.org/2020.findings-emnlp.142.pdf
Code
 BDBC-KG-NLP/TEN_EMNLP2020
Data
MultiWOZ, Wizard-of-Oz