Lifelong Machine Learning for Natural Language Processing

Zhiyuan Chen, Bing Liu


Abstract
Machine learning (ML) has been successfully used as a prevalent approach to solving numerous NLP problems. However, the classic ML paradigm learns in isolation. That is, given a dataset, an ML algorithm is executed on the dataset to produce a model without using any related or prior knowledge. Although this type of isolated learning is very useful, it also has serious limitations: it does not accumulate the knowledge learned in the past and use that knowledge to help future learning, which is the hallmark of human learning and human intelligence. Lifelong machine learning (LML) aims to achieve this capability. Specifically, it aims to design and develop computational learning systems and algorithms that learn as humans do, i.e., retaining the results learned in the past, abstracting knowledge from them, and using the knowledge to help future learning. In this tutorial, we will introduce existing research on LML and show that LML is well suited to NLP tasks and has the potential to help NLP make major progress.
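The loop the abstract describes (retain past results, abstract knowledge from them, reuse it on a new task) can be sketched in a toy form. This is only an illustrative sketch, not the tutorial's actual method; the class and task data below are invented for the example.

```python
# Toy sketch of a lifelong learning loop: retain statistics from each
# past task in a knowledge base, then reuse them on a new task.
# All names and data here are illustrative, not the tutorial's method.

from collections import Counter

class LifelongLearner:
    def __init__(self):
        # Knowledge base: how often each word co-occurred with each label
        # across all tasks seen so far.
        self.knowledge = {"pos": Counter(), "neg": Counter()}

    def learn_task(self, labeled_docs):
        """Train on one task and retain its word-label statistics."""
        for words, label in labeled_docs:
            self.knowledge[label].update(words)

    def classify(self, words):
        """Apply accumulated knowledge to a document from a new task."""
        pos = sum(self.knowledge["pos"][w] for w in words)
        neg = sum(self.knowledge["neg"][w] for w in words)
        return "pos" if pos >= neg else "neg"

# Two past tasks, e.g., reviews from two product domains.
learner = LifelongLearner()
learner.learn_task([(["great", "battery"], "pos"),
                    (["poor", "screen"], "neg")])
learner.learn_task([(["great", "plot"], "pos"),
                    (["poor", "acting"], "neg")])

# New domain: knowledge about "great"/"poor" transfers without retraining.
print(learner.classify(["great", "service"]))  # -> pos
```

Real LML systems replace the word counters with learned models and must also decide which past knowledge is reliable enough to transfer, but the accumulate-then-reuse structure is the same.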
Anthology ID:
D16-2003
Volume:
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts
Month:
November
Year:
2016
Address:
Austin, Texas
Editors:
Bishan Yang, Rebecca Hwa
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
URL:
https://aclanthology.org/D16-2003
Cite (ACL):
Zhiyuan Chen and Bing Liu. 2016. Lifelong Machine Learning for Natural Language Processing. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts, Austin, Texas. Association for Computational Linguistics.
Cite (Informal):
Lifelong Machine Learning for Natural Language Processing (Chen & Liu, EMNLP 2016)