Re-framing Incremental Deep Language Models for Dialogue Processing with Multi-task Learning

Morteza Rohanian, Julian Hough


Abstract
We present a multi-task learning framework that enables the training of one universal incremental dialogue processing model with four tasks, disfluency detection, language modelling, part-of-speech tagging and utterance segmentation, in a simple deep recurrent setting. We show that these tasks provide positive inductive biases to each other, with the optimal contribution of each task depending on the severity of the noise it introduces. Our live multi-task model outperforms comparable single-task models, delivers competitive performance, and shows promise for future use in conversational agents for psychiatric treatment.
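As a rough illustration of the setup the abstract describes, the sketch below wires a shared unidirectional recurrent encoder to four task-specific output heads and combines the per-task losses with scalar weights. It is a minimal sketch in PyTorch under our own assumptions; the class name MultiTaskIncrementalLM, the dimensions and the loss weights are hypothetical and are not taken from the paper or from the mortezaro/mtl-disfluency-detection repository.

# A minimal sketch, not the authors' exact architecture: one shared
# unidirectional LSTM (so predictions at step t depend only on words 1..t,
# keeping the model incremental) feeding four task-specific heads.
# All names, dimensions and loss weights below are illustrative assumptions.
import torch
import torch.nn as nn

class MultiTaskIncrementalLM(nn.Module):
    def __init__(self, vocab_size, n_disfluency_tags, n_pos_tags,
                 emb_dim=100, hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        # One linear head per task, all reading the same hidden states.
        self.heads = nn.ModuleDict({
            "disfluency": nn.Linear(hidden_dim, n_disfluency_tags),
            "lm": nn.Linear(hidden_dim, vocab_size),   # next-word prediction
            "pos": nn.Linear(hidden_dim, n_pos_tags),
            "segmentation": nn.Linear(hidden_dim, 2),  # utterance boundary yes/no
        })

    def forward(self, word_ids):
        hidden, _ = self.encoder(self.embedding(word_ids))
        return {task: head(hidden) for task, head in self.heads.items()}

def multi_task_loss(outputs, targets, weights):
    # Weighted sum of per-task cross-entropy losses; the weights control
    # each task's contribution and would be tuned on development data.
    ce = nn.CrossEntropyLoss()
    return sum(weights[t] * ce(o.reshape(-1, o.size(-1)), targets[t].reshape(-1))
               for t, o in outputs.items())

# Toy usage: one forward/backward pass on random data.
model = MultiTaskIncrementalLM(vocab_size=10000, n_disfluency_tags=5, n_pos_tags=45)
words = torch.randint(0, 10000, (8, 20))  # batch of 8 sequences, 20 tokens each
targets = {
    "disfluency": torch.randint(0, 5, (8, 20)),
    "lm": torch.randint(0, 10000, (8, 20)),
    "pos": torch.randint(0, 45, (8, 20)),
    "segmentation": torch.randint(0, 2, (8, 20)),
}
weights = {"disfluency": 1.0, "lm": 0.5, "pos": 0.5, "segmentation": 0.5}
loss = multi_task_loss(model(words), targets, weights)
loss.backward()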
Anthology ID: 2020.coling-main.43
Volume: Proceedings of the 28th International Conference on Computational Linguistics
Month: December
Year: 2020
Address: Barcelona, Spain (Online)
Editors: Donia Scott, Nuria Bel, Chengqing Zong
Venue: COLING
Publisher: International Committee on Computational Linguistics
Pages: 497–507
URL: https://aclanthology.org/2020.coling-main.43
DOI: 10.18653/v1/2020.coling-main.43
Cite (ACL): Morteza Rohanian and Julian Hough. 2020. Re-framing Incremental Deep Language Models for Dialogue Processing with Multi-task Learning. In Proceedings of the 28th International Conference on Computational Linguistics, pages 497–507, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Cite (Informal): Re-framing Incremental Deep Language Models for Dialogue Processing with Multi-task Learning (Rohanian & Hough, COLING 2020)
PDF: https://aclanthology.org/2020.coling-main.43.pdf
Code: mortezaro/mtl-disfluency-detection