Exploring Numeracy in Word Embeddings

Aakanksha Naik, Abhilasha Ravichander, Carolyn Rose, Eduard Hovy


Abstract
Word embeddings are now pervasive across NLP subfields as the de facto method of forming text representations. In this work, we show that existing embedding models are inadequate at constructing representations that capture salient aspects of mathematical meaning for numbers, which is important for language understanding. Numbers are ubiquitous and frequently appear in text. Inspired by cognitive studies on how humans perceive numbers, we develop an analysis framework to test how well word embeddings capture two essential properties of numbers: magnitude (e.g. 3<4) and numeration (e.g. 3=three). Our experiments reveal that most models capture an approximate notion of magnitude, but are inadequate at capturing numeration. We hope that our observations provide a starting point for the development of methods which better capture numeracy in NLP systems.
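The kind of magnitude probe the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's actual protocol: the toy two-dimensional embeddings, the helper names (`cosine`, `magnitude_accuracy`), and the evaluation criterion (is the nearest neighbor in embedding space also the numerically nearest number?) are all assumptions made for the example; the paper's experiments use real pretrained embeddings.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def magnitude_accuracy(emb, numbers):
    """Fraction of numbers whose nearest neighbor in embedding space
    (by cosine similarity) is also numerically nearest in magnitude.
    Ties in numeric distance (e.g. 4 and 6 relative to 5) both count."""
    correct = 0
    for x in numbers:
        others = [y for y in numbers if y != x]
        pred = max(others, key=lambda y: cosine(emb[x], emb[y]))
        best = min(abs(x - y) for y in others)
        gold = {y for y in others if abs(x - y) == best}
        correct += pred in gold
    return correct / len(numbers)

# Toy embeddings where magnitude is (noisily) encoded in one dimension;
# real experiments would load pretrained vectors instead.
rng = np.random.default_rng(0)
numbers = list(range(1, 11))
emb = {n: np.array([n + rng.normal(0, 0.1), 1.0]) for n in numbers}
acc = magnitude_accuracy(emb, numbers)
```

Because the toy embeddings encode magnitude almost perfectly, `acc` should be at or near 1.0; running the same probe on embeddings that ignore magnitude would drive it toward chance, which is the contrast the paper's framework exploits.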
Anthology ID:
P19-1329
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Editors:
Anna Korhonen, David Traum, Lluís Màrquez
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
3374–3380
URL:
https://aclanthology.org/P19-1329
DOI:
10.18653/v1/P19-1329
Bibkey:
Cite (ACL):
Aakanksha Naik, Abhilasha Ravichander, Carolyn Rose, and Eduard Hovy. 2019. Exploring Numeracy in Word Embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3374–3380, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Exploring Numeracy in Word Embeddings (Naik et al., ACL 2019)
PDF:
https://aclanthology.org/P19-1329.pdf