Transductive Auxiliary Task Self-Training for Neural Multi-Task Models

Johannes Bjerva, Katharina Kann, Isabelle Augenstein


Abstract
Multi-task learning and self-training are two common ways to improve a machine learning model's performance in settings with limited training data. Drawing heavily on ideas from those two approaches, we suggest transductive auxiliary task self-training: training a multi-task model on (i) a combination of main and auxiliary task training data, and (ii) test instances with auxiliary task labels which a single-task version of the model has previously generated. We perform extensive experiments on 86 combinations of languages and tasks. Our results show that, on average, transductive auxiliary task self-training improves absolute accuracy by up to 9.56% over the pure multi-task model for dependency relation tagging and by up to 13.03% for semantic tagging.
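
The abstract's procedure can be summarized as a short sketch. The snippet below is a minimal illustration, assuming hypothetical helpers train_single_task and train_multi_task (these names and signatures are not from the paper; they stand in for whatever sequence-tagging models are used).

```python
# Minimal sketch of transductive auxiliary task self-training.
# Assumption: `train_single_task` and `train_multi_task` are hypothetical
# training routines returning a callable model; they are placeholders for
# the actual neural taggers, not part of the paper's released code.

from typing import Callable, List, Tuple

Sentence = List[str]
Labels = List[str]
Model = Callable[[Sentence], Labels]


def transductive_aux_self_training(
    main_train: List[Tuple[Sentence, Labels]],   # main task training data
    aux_train: List[Tuple[Sentence, Labels]],    # auxiliary task training data
    test_sentences: List[Sentence],              # unlabeled test instances
    train_single_task: Callable[[List[Tuple[Sentence, Labels]]], Model],
    train_multi_task: Callable[..., Model],
) -> List[Labels]:
    # Step 1: train a single-task model on the auxiliary task only.
    aux_model = train_single_task(aux_train)

    # Step 2: self-label the test instances with auxiliary task predictions.
    aux_labeled_test = [(s, aux_model(s)) for s in test_sentences]

    # Step 3: train the multi-task model on main + auxiliary training data,
    # with the self-labeled test instances added to the auxiliary data.
    mtl_model = train_multi_task(
        main_data=main_train,
        aux_data=aux_train + aux_labeled_test,
    )

    # Step 4: predict main task labels (e.g. dependency relations or
    # semantic tags) for the same test instances.
    return [mtl_model(s) for s in test_sentences]
```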
Anthology ID:
D19-6128
Volume:
Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019)
Month:
November
Year:
2019
Address:
Hong Kong, China
Editors:
Colin Cherry, Greg Durrett, George Foster, Reza Haffari, Shahram Khadivi, Nanyun Peng, Xiang Ren, Swabha Swayamdipta
Venue:
WS
Publisher:
Association for Computational Linguistics
Pages:
253–258
URL:
https://aclanthology.org/D19-6128
DOI:
10.18653/v1/D19-6128
Bibkey:
Cite (ACL):
Johannes Bjerva, Katharina Kann, and Isabelle Augenstein. 2019. Transductive Auxiliary Task Self-Training for Neural Multi-Task Models. In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019), pages 253–258, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal):
Transductive Auxiliary Task Self-Training for Neural Multi-Task Models (Bjerva et al., 2019)
PDF:
https://aclanthology.org/D19-6128.pdf
Data
Universal Dependencies