Using Word Embeddings for Unsupervised Acronym Disambiguation

Jean Charbonnier, Christian Wartena


Abstract
Scientific papers from all disciplines contain many abbreviations and acronyms. In many cases these acronyms are ambiguous. We present a method to choose the contextually correct definition of an acronym that does not require training for each acronym and thus can be applied to a large number of different acronyms with only a few instances. We constructed a set of 19,954 examples of 4,365 ambiguous acronyms from image captions in scientific papers along with their contextually correct definition from different domains. We learn word embeddings for all words in the corpus and compare the averaged context vector of the words in the expansion of an acronym with the weighted average vector of the words in the context of the acronym. We show that this method clearly outperforms (classical) cosine similarity. Furthermore, we show that word embeddings learned from a 1-billion-word corpus of scientific texts outperform word embeddings learned on much larger general corpora.
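The core idea from the abstract can be sketched as follows: embed every word, average the embeddings of each candidate expansion's words, average the embeddings of the acronym's context words (the paper uses a weighted average), and pick the expansion whose vector is closest by cosine similarity. This is a minimal illustration, not the authors' implementation: the toy embeddings, example expansions, and uniform default weights below are assumptions for demonstration only, whereas the paper trains embeddings on a large scientific-text corpus and weights context words.

```python
import numpy as np

# Toy 3-dimensional embeddings (purely illustrative; the paper learns
# real embeddings from a 1-billion-word corpus of scientific texts).
embeddings = {
    "magnetic":    np.array([0.9, 0.1, 0.0]),
    "resonance":   np.array([0.8, 0.2, 0.1]),
    "imaging":     np.array([0.7, 0.1, 0.3]),
    "scan":        np.array([0.8, 0.0, 0.2]),
    "patient":     np.array([0.6, 0.2, 0.1]),
    "information": np.array([0.1, 0.9, 0.2]),
    "retrieval":   np.array([0.0, 0.8, 0.3]),
}

def avg_vector(words, weights=None):
    """(Weighted) average of the embeddings of the known words.

    With weights=None every word counts equally; the paper instead
    weights context words (e.g. by their informativeness)."""
    vecs, ws = [], []
    for i, w in enumerate(words):
        if w in embeddings:
            vecs.append(embeddings[w])
            ws.append(1.0 if weights is None else weights[i])
    if not vecs:
        return None
    return np.average(np.array(vecs), axis=0, weights=ws)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def disambiguate(context_words, expansions):
    """Return the expansion whose averaged word vector is most similar
    to the averaged context vector."""
    ctx = avg_vector(context_words)
    return max(expansions, key=lambda e: cosine(ctx, avg_vector(e.split())))

# Hypothetical ambiguous acronym "MRI" with two candidate expansions.
candidates = ["magnetic resonance imaging", "information retrieval"]
context = ["scan", "patient", "imaging"]
print(disambiguate(context, candidates))
```

Because both the expansion and the context are reduced to single averaged vectors, no per-acronym training is needed; any acronym with a known list of candidate expansions can be handled by the same function.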
Anthology ID:
C18-1221
Volume:
Proceedings of the 27th International Conference on Computational Linguistics
Month:
August
Year:
2018
Address:
Santa Fe, New Mexico, USA
Editors:
Emily M. Bender, Leon Derczynski, Pierre Isabelle
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
2610–2619
URL:
https://aclanthology.org/C18-1221
Cite (ACL):
Jean Charbonnier and Christian Wartena. 2018. Using Word Embeddings for Unsupervised Acronym Disambiguation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2610–2619, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Cite (Informal):
Using Word Embeddings for Unsupervised Acronym Disambiguation (Charbonnier & Wartena, COLING 2018)
PDF:
https://aclanthology.org/C18-1221.pdf