Out-of-Sample Representation Learning for Knowledge Graphs

Marjan Albooyeh, Rishab Goel, Seyed Mehran Kazemi


Abstract
Many important problems can be formulated as reasoning in knowledge graphs. Representation learning has proved extremely effective for transductive reasoning, in which one needs to make new predictions for already observed entities. This is true for both attributed graphs (where each entity has an initial feature vector) and non-attributed graphs (where the only initial information derives from known relations with other entities). For out-of-sample reasoning, where one needs to make predictions for entities that were unseen at training time, much prior work considers attributed graphs. However, this problem is surprisingly under-explored for non-attributed graphs. In this paper, we study the out-of-sample representation learning problem for non-attributed knowledge graphs, create benchmark datasets for this task, develop several models and baselines, and provide empirical analyses and comparisons of the proposed models and baselines.
Anthology ID:
2020.findings-emnlp.241
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2020
Month:
November
Year:
2020
Address:
Online
Editors:
Trevor Cohn, Yulan He, Yang Liu
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2657–2666
URL:
https://aclanthology.org/2020.findings-emnlp.241
DOI:
10.18653/v1/2020.findings-emnlp.241
Cite (ACL):
Marjan Albooyeh, Rishab Goel, and Seyed Mehran Kazemi. 2020. Out-of-Sample Representation Learning for Knowledge Graphs. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2657–2666, Online. Association for Computational Linguistics.
Cite (Informal):
Out-of-Sample Representation Learning for Knowledge Graphs (Albooyeh et al., Findings 2020)
PDF:
https://aclanthology.org/2020.findings-emnlp.241.pdf
Video:
https://slideslive.com/38940171
Data
FB15k-237