“You are grounded!”: Latent Name Artifacts in Pre-trained Language Models

Vered Shwartz, Rachel Rudinger, Oyvind Tafjord


Abstract
Pre-trained language models (LMs) may propagate biases originating in their training corpora to downstream models. We focus on artifacts associated with the representation of given names (e.g., Donald), which, depending on the corpus, may be associated with specific entities, as indicated by next-token prediction (e.g., Trump). While helpful in some contexts, this grounding also occurs in under-specified or inappropriate contexts. For example, endings generated for ‘Donald is a’ differ substantially from those generated for other names, and often carry more negative sentiment than average. We demonstrate the potential effect on downstream tasks with reading comprehension probes in which perturbing the name changes the model's answers. As a silver lining, our experiments suggest that additional pre-training on different corpora may mitigate this bias.
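The next-token probe described in the abstract can be illustrated in a few lines. The following is a minimal sketch, not the authors' implementation: it assumes GPT-2 accessed through the HuggingFace transformers library as a stand-in pre-trained LM, and the names besides Donald are arbitrary illustrative choices.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Names are illustrative; the paper studies a broad set of given names.
for name in ["Donald", "John", "Emily"]:
    input_ids = tokenizer.encode(name, return_tensors="pt")
    with torch.no_grad():
        logits = model(input_ids).logits  # shape: (1, seq_len, vocab_size)
    # Distribution over the token immediately following the name.
    top = torch.topk(logits[0, -1], k=5)
    continuations = [tokenizer.decode([int(i)]) for i in top.indices]
    print(name, "->", continuations)

A name whose top continuations are dominated by a single surname (e.g., Trump after Donald) indicates the kind of latent grounding the paper measures.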
Anthology ID:
2020.emnlp-main.556
Volume:
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Month:
November
Year:
2020
Address:
Online
Editors:
Bonnie Webber, Trevor Cohn, Yulan He, Yang Liu
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
6850–6861
URL:
https://aclanthology.org/2020.emnlp-main.556
DOI:
10.18653/v1/2020.emnlp-main.556
Cite (ACL):
Vered Shwartz, Rachel Rudinger, and Oyvind Tafjord. 2020. “You are grounded!”: Latent Name Artifacts in Pre-trained Language Models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6850–6861, Online. Association for Computational Linguistics.
Cite (Informal):
“You are grounded!”: Latent Name Artifacts in Pre-trained Language Models (Shwartz et al., EMNLP 2020)
PDF:
https://aclanthology.org/2020.emnlp-main.556.pdf
Video:
https://slideslive.com/38938640