Combining Unsupervised Pre-training and Annotator Rationales to Improve Low-shot Text Classification

Oren Melamud, Mihaela Bornea, Ken Barker


Abstract
Supervised learning models often perform poorly at low-shot tasks, i.e. tasks for which little labeled data is available for training. One prominent approach for improving low-shot learning is to use unsupervised pre-trained neural models. Another approach is to obtain richer supervision by collecting annotator rationales (explanations supporting label annotations). In this work, we combine these two approaches to improve low-shot text classification with two novel methods: a simple bag-of-words embedding approach; and a more complex context-aware method, based on the BERT model. In experiments with two English text classification datasets, we demonstrate substantial performance gains from combining pre-training with rationales. Furthermore, our investigation of a range of train-set sizes reveals that the simple bag-of-words approach is the clear top performer when there are only a few dozen training instances or less, while more complex models, such as BERT or CNN, require more training data to shine.
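To make the bag-of-words idea concrete, here is a minimal illustrative sketch, not the paper's actual implementation: each document is embedded as a weighted average of pre-trained word vectors, with tokens that annotators marked as rationales upweighted before fitting a linear classifier on a handful of labeled examples. The embedding table, the weighting factor, and the toy data below are all hypothetical placeholders.

```python
# Illustrative sketch (assumed, not the authors' code): rationale-weighted
# bag-of-words embeddings for low-shot text classification.
import numpy as np
from sklearn.linear_model import LogisticRegression

EMB_DIM = 50
RATIONALE_WEIGHT = 3.0  # hypothetical upweighting factor for rationale tokens

def embed(tokens, rationale_tokens, embeddings):
    """Weighted average of pre-trained word vectors for one document."""
    vecs, weights = [], []
    for tok in tokens:
        if tok not in embeddings:
            continue
        vecs.append(embeddings[tok])
        weights.append(RATIONALE_WEIGHT if tok in rationale_tokens else 1.0)
    if not vecs:
        return np.zeros(EMB_DIM)
    return np.average(np.stack(vecs), axis=0, weights=weights)

# Toy low-shot setup with random vectors; a real run would load pre-trained
# embeddings (e.g. GloVe) and the labeled documents plus their rationales.
embeddings = {w: np.random.randn(EMB_DIM) for w in
              ["great", "acting", "boring", "plot", "film", "the", "was"]}
train = [
    (["the", "acting", "was", "great"], {"great"}, 1),
    (["the", "plot", "was", "boring"], {"boring"}, 0),
]
X = np.stack([embed(toks, rat, embeddings) for toks, rat, _ in train])
y = [label for _, _, label in train]

clf = LogisticRegression().fit(X, y)
# At test time no rationales are available, so every token gets weight 1.0.
test_doc = ["great", "film"]
print(clf.predict([embed(test_doc, set(), embeddings)]))
```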
Anthology ID:
D19-1401
Volume:
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
Month:
November
Year:
2019
Address:
Hong Kong, China
Editors:
Kentaro Inui, Jing Jiang, Vincent Ng, Xiaojun Wan
Venues:
EMNLP | IJCNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
3884–3893
URL:
https://aclanthology.org/D19-1401
DOI:
10.18653/v1/D19-1401
Cite (ACL):
Oren Melamud, Mihaela Bornea, and Ken Barker. 2019. Combining Unsupervised Pre-training and Annotator Rationales to Improve Low-shot Text Classification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3884–3893, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal):
Combining Unsupervised Pre-training and Annotator Rationales to Improve Low-shot Text Classification (Melamud et al., EMNLP-IJCNLP 2019)
PDF:
https://aclanthology.org/D19-1401.pdf