Continual Few-Shot Learning for Text Classification

Ramakanth Pasunuru, Veselin Stoyanov, Mohit Bansal


Abstract
Natural Language Processing (NLP) is increasingly relying on general end-to-end systems that need to handle many different linguistic phenomena and nuances. For example, a Natural Language Inference (NLI) system has to recognize sentiment, handle numbers, perform coreference, etc. Our solutions to complex problems are still far from perfect, so it is important to create systems that can learn to correct mistakes quickly, incrementally, and with little training data. In this work, we propose a continual few-shot learning (CFL) task, in which a system is challenged with a difficult phenomenon and asked to learn to correct mistakes with only a few (10 to 15) training examples. To this end, we first create benchmarks based on previously annotated data: two NLI (ANLI and SNLI) and one sentiment analysis (IMDB) datasets. Next, we present various baselines from diverse paradigms (e.g., memory-aware synapses and prototypical networks) and compare them on few-shot learning and continual few-shot learning setups. Our contributions are in creating a benchmark suite and evaluation protocol for continual few-shot learning on text classification tasks, and making several interesting observations on the behavior of similarity-based methods. We hope that our work serves as a useful starting point for future work on this important topic.
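The similarity-based baselines mentioned above (e.g., prototypical networks) classify a new example by comparing its embedding to per-class "prototypes" averaged from the few available support examples. The following is a minimal illustrative sketch of that idea using plain NumPy vectors in place of learned text encodings; the function names and toy embeddings are hypothetical, not the paper's implementation.

```python
import numpy as np

def build_prototypes(support_embs, support_labels):
    """Compute one prototype per class as the mean embedding
    of that class's few support examples."""
    classes = sorted(set(support_labels))
    protos = np.stack([
        support_embs[[i for i, y in enumerate(support_labels) if y == c]].mean(axis=0)
        for c in classes
    ])
    return classes, protos

def classify(query_emb, classes, protos):
    """Assign the query to the class whose prototype is nearest
    in Euclidean distance."""
    dists = np.linalg.norm(protos - query_emb, axis=1)
    return classes[int(np.argmin(dists))]

# Toy 2-D "embeddings": two support examples per class (10-15 in the paper's setup).
support_embs = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.0, 5.1]])
support_labels = ["negative", "negative", "positive", "positive"]
classes, protos = build_prototypes(support_embs, support_labels)
prediction = classify(np.array([4.9, 5.0]), classes, protos)
```

In a continual setting, each new phenomenon contributes only its handful of support examples, so prototypes can be added or updated cheaply without retraining the underlying encoder.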
Anthology ID:
2021.emnlp-main.460
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
5688–5702
URL:
https://aclanthology.org/2021.emnlp-main.460
DOI:
10.18653/v1/2021.emnlp-main.460
Cite (ACL):
Ramakanth Pasunuru, Veselin Stoyanov, and Mohit Bansal. 2021. Continual Few-Shot Learning for Text Classification. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5688–5702, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Continual Few-Shot Learning for Text Classification (Pasunuru et al., EMNLP 2021)
PDF:
https://aclanthology.org/2021.emnlp-main.460.pdf
Video:
https://aclanthology.org/2021.emnlp-main.460.mp4
Code:
ramakanth-pasunuru/cfl-benchmark
Data:
ANLI, IMDb Movie Reviews, MultiNLI, SNLI