Returning the N to NLP: Towards Contextually Personalized Classification Models

Lucie Flek


Abstract
Most NLP models today treat language as universal, even though socio- and psycholinguistic research shows that the communicated message is influenced by the characteristics of the speaker as well as the target audience. This paper surveys the landscape of personalization in natural language processing and related fields, and offers a path forward to mitigate the decades-long deviation of NLP tools from sociolinguistic findings, allowing them to flexibly process the “natural” language of each user rather than enforcing a uniform NLP treatment. It outlines a possible direction for incorporating these aspects into neural NLP models by means of socially contextual personalization, and proposes to shift the focus of our evaluation strategies accordingly.
Anthology ID:
2020.acl-main.700
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
7828–7838
URL:
https://aclanthology.org/2020.acl-main.700
DOI:
10.18653/v1/2020.acl-main.700
Cite (ACL):
Lucie Flek. 2020. Returning the N to NLP: Towards Contextually Personalized Classification Models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7828–7838, Online. Association for Computational Linguistics.
Cite (Informal):
Returning the N to NLP: Towards Contextually Personalized Classification Models (Flek, ACL 2020)
PDF:
https://aclanthology.org/2020.acl-main.700.pdf
Video:
http://slideslive.com/38929062