An Empirical Study of the Downstream Reliability of Pre-Trained Word Embeddings

Anthony Rios, Brandon Lwowski


Abstract
While pre-trained word embeddings have been shown to improve the performance of downstream tasks, many questions remain regarding their reliability: Do the same pre-trained word embeddings result in the best performance with slight changes to the training data? Do the same pre-trained embeddings perform well with multiple neural network architectures? Do imputation strategies for unknown words impact reliability? In this paper, we introduce two new metrics to understand the downstream reliability of word embeddings. We find that the downstream reliability of word embeddings depends on multiple factors, including the evaluation metric, the handling of out-of-vocabulary words, and whether the embeddings are fine-tuned.
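As a hypothetical illustration of the first question the abstract raises (not the paper's proposed metrics), the sketch below trains one classifier per pre-trained embedding on several perturbed train/test splits and reports how often the same embedding comes out on top. The synthetic features emb_a and emb_b, the logistic-regression classifier, and the number of perturbed runs are assumptions made only for demonstration.

```python
# Hypothetical sketch (not the paper's metrics): check whether the "best"
# pre-trained embedding stays the same under slight changes to the training data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)

# Toy stand-ins for document representations built from two different
# pre-trained embeddings (e.g., averaged GloVe vs. word2vec vectors).
n, d = 1000, 50
y = rng.randint(0, 2, size=n)
emb_a = rng.normal(size=(n, d)) + y[:, None] * 0.5  # assumed "embedding A" features
emb_b = rng.normal(size=(n, d)) + y[:, None] * 0.4  # assumed "embedding B" features

def best_embedding(seed):
    """Train one classifier per embedding on a perturbed split; return the winner."""
    train_idx, test_idx = train_test_split(np.arange(n), test_size=0.2, random_state=seed)
    scores = []
    for feats in (emb_a, emb_b):
        clf = LogisticRegression(max_iter=1000).fit(feats[train_idx], y[train_idx])
        scores.append(clf.score(feats[test_idx], y[test_idx]))
    return int(np.argmax(scores))

winners = [best_embedding(seed) for seed in range(20)]
agreement = max(winners.count(0), winners.count(1)) / len(winners)
print(f"The same embedding 'wins' in {agreement:.0%} of perturbed runs")
```

If the winning embedding flips frequently across such perturbed runs, a single benchmark comparison is an unreliable basis for choosing one pre-trained embedding over another.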
Anthology ID: 2020.coling-main.299
Volume: Proceedings of the 28th International Conference on Computational Linguistics
Month: December
Year: 2020
Address: Barcelona, Spain (Online)
Editors: Donia Scott, Nuria Bel, Chengqing Zong
Venue: COLING
Publisher: International Committee on Computational Linguistics
Pages: 3371–3388
URL: https://aclanthology.org/2020.coling-main.299
DOI: 10.18653/v1/2020.coling-main.299
Cite (ACL): Anthony Rios and Brandon Lwowski. 2020. An Empirical Study of the Downstream Reliability of Pre-Trained Word Embeddings. In Proceedings of the 28th International Conference on Computational Linguistics, pages 3371–3388, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Cite (Informal): An Empirical Study of the Downstream Reliability of Pre-Trained Word Embeddings (Rios & Lwowski, COLING 2020)
PDF: https://aclanthology.org/2020.coling-main.299.pdf
Data: OLID