Fast and Scalable Dialogue State Tracking with Explicit Modular Decomposition

Dingmin Wang, Chenghua Lin, Qi Liu, Kam-Fai Wong


Abstract
We present a fast and scalable architecture called Explicit Modular Decomposition (EMD), in which we incorporate both classification-based and extraction-based methods and design four modules (for classification and sequence labelling) to jointly extract dialogue states. Experimental results on the MultiWOZ 2.0 dataset validate the superiority of our proposed model in terms of both complexity and scalability when compared to state-of-the-art methods, especially for multi-domain dialogues spanning many turns of utterances.
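To make the abstract's high-level idea concrete, the following is a minimal, generic PyTorch sketch of how a classification head (for slots with a fixed candidate set) can be paired with a sequence-labelling head (for slots whose values are extracted as token spans) on top of a shared encoder. This illustrates the general classification-plus-extraction pattern only; it is not the authors' actual EMD modules, and all class names, slot inventories, and dimensions are hypothetical.

```python
# Generic sketch (NOT the paper's EMD implementation) of combining a
# classification head for categorical slots with a sequence-labelling
# head for open-valued slots in dialogue state tracking.
import torch
import torch.nn as nn

class HybridDSTHeads(nn.Module):
    def __init__(self, hidden_size=768, num_categorical_values=10, num_bio_tags=3):
        super().__init__()
        # Classification head: chooses a value for a categorical slot
        # from a fixed candidate list (e.g. hotel-parking in {yes, no, dontcare}).
        self.cls_head = nn.Linear(hidden_size, num_categorical_values)
        # Sequence-labelling head: tags each token with B/I/O so that
        # open-valued slots (e.g. restaurant-name) are extracted as spans.
        self.tag_head = nn.Linear(hidden_size, num_bio_tags)

    def forward(self, token_states):
        # token_states: (batch, seq_len, hidden_size) from any encoder (e.g. BERT).
        cls_logits = self.cls_head(token_states[:, 0])  # first token as utterance summary
        tag_logits = self.tag_head(token_states)        # per-token BIO scores
        return cls_logits, tag_logits

# Toy usage with random encoder outputs.
encoder_out = torch.randn(2, 16, 768)
cls_logits, tag_logits = HybridDSTHeads()(encoder_out)
print(cls_logits.shape, tag_logits.shape)  # torch.Size([2, 10]) torch.Size([2, 16, 3])
```

Keeping the two heads separate is what lets classification handle small, closed value sets cheaply while span tagging scales to slots with unbounded vocabularies.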
Anthology ID:
2021.naacl-main.27
Volume:
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
June
Year:
2021
Address:
Online
Editors:
Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, Yichao Zhou
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
289–295
URL:
https://aclanthology.org/2021.naacl-main.27
DOI:
10.18653/v1/2021.naacl-main.27
Cite (ACL):
Dingmin Wang, Chenghua Lin, Qi Liu, and Kam-Fai Wong. 2021. Fast and Scalable Dialogue State Tracking with Explicit Modular Decomposition. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 289–295, Online. Association for Computational Linguistics.
Cite (Informal):
Fast and Scalable Dialogue State Tracking with Explicit Modular Decomposition (Wang et al., NAACL 2021)
PDF:
https://aclanthology.org/2021.naacl-main.27.pdf
Video:
https://aclanthology.org/2021.naacl-main.27.mp4