Learning Word Relatedness over Time

Guy D. Rosin, Eytan Adar, Kira Radinsky


Abstract
Search systems are often focused on providing relevant results for the “now”, assuming both corpora and user needs that focus on the present. However, many corpora today reflect significant longitudinal collections, ranging from 20 years of the Web to hundreds of years of digitized newspapers and books. Understanding the temporal intent of the user and retrieving the most relevant historical content has become a significant challenge. Common search features, such as query expansion, leverage the relationships between terms but cannot function well across all times when those relationships vary temporally. In this work, we introduce a temporal relationship model that is extracted from longitudinal data collections. The model supports the task of identifying, given two words, when they relate to each other. We present an algorithmic framework for this task and show its application to query expansion, achieving significant gains.
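The core task the abstract describes, deciding in which time periods two words are related, can be illustrated with a small sketch. This is not the paper's algorithm; it is a minimal, hypothetical illustration assuming one embedding space per time period (the toy vectors and the `related_periods` helper below are invented for illustration) and a cosine-similarity threshold to decide relatedness.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors; 0.0 if either is zero."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy per-period word embeddings (hypothetical values, for illustration only).
# In practice these would be trained on corpus slices from each period.
embeddings = {
    1990: {"obama": [0.1, 0.9, 0.0], "president": [0.9, 0.1, 0.1]},
    2010: {"obama": [0.8, 0.2, 0.1], "president": [0.9, 0.1, 0.1]},
}

def related_periods(w1, w2, threshold=0.7):
    """Return the periods in which w1 and w2 count as related,
    i.e., their period-specific vectors exceed a similarity threshold."""
    return [t for t, vecs in sorted(embeddings.items())
            if w1 in vecs and w2 in vecs
            and cosine(vecs[w1], vecs[w2]) >= threshold]

print(related_periods("obama", "president"))  # → [2010] with these toy vectors
```

With these made-up vectors, the pair is related only in the 2010 slice, mimicking how a temporally aware query-expansion step could restrict an expansion term to the periods in which it is actually relevant.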
Anthology ID:
D17-1121
Volume:
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Month:
September
Year:
2017
Address:
Copenhagen, Denmark
Editors:
Martha Palmer, Rebecca Hwa, Sebastian Riedel
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
1168–1178
URL:
https://aclanthology.org/D17-1121
DOI:
10.18653/v1/D17-1121
Cite (ACL):
Guy D. Rosin, Eytan Adar, and Kira Radinsky. 2017. Learning Word Relatedness over Time. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1168–1178, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal):
Learning Word Relatedness over Time (Rosin et al., EMNLP 2017)
PDF:
https://aclanthology.org/D17-1121.pdf
Attachment:
 D17-1121.Attachment.zip
Code
 guyrosin/learning-word-relatedness