Multi-Task Learning using Dynamic Task Weighting for Conversational Question Answering

Sarawoot Kongyoung, Craig Macdonald, Iadh Ounis


Abstract
Conversational Question Answering (ConvQA) is a Conversational Search task in a simplified setting, where an answer must be extracted from a given passage. Neural language models, such as BERT, fine-tuned on large-scale ConvQA datasets such as CoQA and QuAC, have been used to address this task. Recently, Multi-Task Learning (MTL) has emerged as a particularly interesting approach for developing ConvQA models, where the objective is to enhance the performance of a primary task by sharing the learned structure across several related auxiliary tasks. However, existing ConvQA models that leverage MTL have not investigated the dynamic adjustment of the relative importance of the different tasks during learning, nor the resulting impact on the performance of the learned models. In this paper, we first study the effectiveness and efficiency of dynamic MTL methods, including Evolving Weighting, Uncertainty Weighting, and Loss-Balanced Task Weighting, compared to static MTL methods such as the uniform weighting of tasks. Furthermore, we propose a novel hybrid dynamic method that combines an Abridged Linear schedule for the main task with Loss-Balanced Task Weighting (LBTW) for the auxiliary tasks, so as to automatically fine-tune task weighting during learning and ensure that each task's weight reflects the relative importance of the different tasks. We conduct experiments using QuAC, a large-scale ConvQA dataset. Our results demonstrate the effectiveness of our proposed method, which significantly outperforms both the single-task learning and static task weighting methods, with improvements ranging from +2.72% to +3.20% in F1 scores. Finally, our findings show that the performance of MTL in developing ConvQA models is sensitive to the correct selection of the auxiliary tasks, as well as to an adequate balancing of the loss rates of these tasks during training using LBTW.
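For readers unfamiliar with the weighting schemes named above, the following is a minimal sketch (not the authors' released code) of how an Abridged Linear warm-up weight for the main task could be combined with Loss-Balanced Task Weighting for the auxiliary tasks. The function names, the warm-up length, the alpha exponent, and the task names in the usage example are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of a hybrid dynamic task-weighting scheme:
# - main task: Abridged Linear schedule (weight ramps linearly from 0 to 1, then stays at 1)
# - auxiliary tasks: Loss-Balanced Task Weighting, w_t = (L_t / L_t(0)) ** alpha

def abridged_linear_weight(step, warmup_steps):
    """Main-task weight under an Abridged Linear schedule."""
    return min(1.0, step / max(1, warmup_steps))

def lbtw_weight(current_loss, initial_loss, alpha=0.5):
    """Auxiliary-task weight under LBTW: ratio of current to initial loss, raised to alpha."""
    return (current_loss / initial_loss) ** alpha

def combined_loss(step, losses, initial_losses, main_task,
                  warmup_steps=1000, alpha=0.5):
    """Weighted sum of per-task losses for one training step.

    `losses` and `initial_losses` map task name -> scalar loss value.
    `warmup_steps` and `alpha` are illustrative hyperparameters.
    """
    total = 0.0
    for task, loss in losses.items():
        if task == main_task:
            w = abridged_linear_weight(step, warmup_steps)
        else:
            w = lbtw_weight(loss, initial_losses[task], alpha)
        total += w * loss
    return total

# Illustrative usage with made-up task names and loss values:
losses = {"answer_span": 2.1, "aux_task_1": 0.9, "aux_task_2": 1.4}
initial = {"answer_span": 3.0, "aux_task_1": 1.2, "aux_task_2": 1.8}
print(combined_loss(step=200, losses=losses, initial_losses=initial,
                    main_task="answer_span"))
```

In this sketch, the main task's contribution grows as training progresses, while each auxiliary task's weight shrinks as its loss falls relative to its initial value, which is the balancing behaviour the abstract attributes to LBTW.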
Anthology ID:
2020.scai-1.3
Volume:
Proceedings of the 5th International Workshop on Search-Oriented Conversational AI (SCAI)
Month:
November
Year:
2020
Address:
Online
Editors:
Jeff Dalton, Aleksandr Chuklin, Julia Kiseleva, Mikhail Burtsev
Venue:
scai
Publisher:
Association for Computational Linguistics
Pages:
17–26
URL:
https://aclanthology.org/2020.scai-1.3
DOI:
10.18653/v1/2020.scai-1.3
Cite (ACL):
Sarawoot Kongyoung, Craig Macdonald, and Iadh Ounis. 2020. Multi-Task Learning using Dynamic Task Weighting for Conversational Question Answering. In Proceedings of the 5th International Workshop on Search-Oriented Conversational AI (SCAI), pages 17–26, Online. Association for Computational Linguistics.
Cite (Informal):
Multi-Task Learning using Dynamic Task Weighting for Conversational Question Answering (Kongyoung et al., scai 2020)
PDF:
https://aclanthology.org/2020.scai-1.3.pdf
Video:
https://slideslive.com/38940064
Data
CoQA, QuAC