Differentiating Concepts and Instances for Knowledge Graph Embedding

Concepts, which represent groups of different instances sharing common properties, are essential information in knowledge representation. Most conventional knowledge embedding methods encode both entities (concepts and instances) and relations as vectors in a low-dimensional semantic space equally, ignoring the difference between concepts and instances. In this paper, we propose a novel knowledge graph embedding model named TransC that differentiates concepts and instances. Specifically, TransC encodes each concept in a knowledge graph as a sphere and each instance as a vector in the same semantic space. We use relative positions to model the relations between concepts and instances (i.e., instanceOf) and the relations between concepts and sub-concepts (i.e., subClassOf). We evaluate our model on both link prediction and triple classification tasks on a dataset based on YAGO. Experimental results show that TransC outperforms state-of-the-art methods and captures the semantic transitivity of the instanceOf and subClassOf relations. Our code and datasets can be obtained from https://github.com/davidlvxin/TransC.


Introduction
Knowledge graphs (KGs) aim at semantically representing the world's truth in the form of machine-readable graphs composed of triple facts. Knowledge graph embedding encodes each element (entities and relations) of a knowledge graph into a continuous low-dimensional vector space. The learned representations make the knowledge graph essentially computable and have been proved helpful for knowledge graph completion and information extraction (Bordes et al., 2013; Wang et al., 2014; Lin et al., 2015b; Ji et al., 2016). In recent years, various knowledge graph embedding methods have been proposed, among which the translation-based models are simple and effective with good performance. Inspired by word2vec (Mikolov et al., 2013), given a triple (h, r, t), TransE learns vector embeddings h, r and t which satisfy r ≈ t − h. Afterwards, TransH (Wang et al., 2014), TransR/CTransR (Lin et al., 2015b), TransD, etc., were proposed to address the problems of TransE when modeling 1-to-N, N-to-1, and N-to-N relations. As extensions of RESCAL (Nickel et al., 2011), which is a bilinear model, HolE (Nickel et al., 2016), DistMult (Yang et al., 2014) and ComplEx (Trouillon et al., 2016) achieve state-of-the-art performance. Meanwhile, there are also methods that use various kinds of external information, such as entity types (Xie et al., 2016), textual descriptions, and logical rules, to strengthen representations of knowledge graphs (Wang et al., 2015; Guo et al., 2016; Rocktäschel et al., 2015).
However, all these methods fail to distinguish between concepts and instances, and regard both as entities for simplicity. Actually, concepts and instances are organized differently in many real-world datasets like YAGO (Suchanek et al., 2007), Freebase (Bollacker et al., 2008), and WordNet (Miller, 1995). Hierarchical concepts in these knowledge bases provide a natural way to categorize and locate instances. Therefore, the common simplification in previous work leads to the following two drawbacks: Insufficient concept representation: Concepts are essential information in a knowledge graph. A concept is a fundamental category of existence (Rosch, 1973) and can be reified by all of its actual or potential instances. Figure 1 presents an example of concepts and instances about university staff. Most knowledge embedding methods encode both concepts and instances as vectors and thus cannot explicitly represent the difference between them.
Lack of transitivity of the isA relations: instanceOf and subClassOf (generally known as isA) are two special relations in a knowledge graph. Different from most other relations, isA relations exhibit transitivity; e.g., the dotted lines in Figure 1 represent the facts inferred by isA transitivity. The indiscriminate vector representation for all relations in previous work cannot preserve this property well (see Section 5.3 for details).
To address these issues, we propose a novel translation embedding model named TransC in this paper. Inspired by Tenenbaum et al. (2011), who argue that concepts in people's minds are organized hierarchically, instances should be close to the concepts they belong to. Hence, in TransC, each concept is encoded as a sphere and each instance as a vector in the same semantic space, and relative positions are employed to model the relations between concepts and instances. More specifically, the instanceOf relation is naturally represented by checking whether an instance vector is inside a concept sphere. For the subClassOf relation, we enumerate and quantify four possible relative positions between two concept spheres. We also define loss functions to measure the relative positions and optimize knowledge graph embeddings. Finally, we incorporate them into translation-based models to jointly learn the knowledge representations of concepts, instances and relations.
Experiments on real-world datasets extracted from YAGO show that TransC outperforms previous work like TransE, TransD, HolE, DistMult and ComplEx in most cases. The contributions of this paper can be summarized as follows: 1. To the best of our knowledge, we are the first to propose and formalize the problem of knowledge graph embedding that differentiates between concepts and instances.
2. We propose a novel knowledge embedding method named TransC, which distinguishes between concepts and instances and deals with the transitivity of isA relations.
3. We construct a new dataset based on YAGO for evaluation. Experiments on link prediction and triple classification demonstrate that TransC successfully addresses the above problems and outperforms state-of-the-art methods.

Related Work
There are a variety of models for knowledge graph embedding. We divide them into three kinds and introduce them respectively.

Translation-based Models
TransE (Bordes et al., 2013) regards a relation r as a translation from h to t for a triple (h, r, t) in the training set. The vector embeddings of this triple should satisfy h + r ≈ t. Hence, t should be the nearest neighbor of h + r, and the loss function is

f_r(h, t) = ||h + r − t||_2^2.

TransE is suitable for 1-to-1 relations, but it has problems when handling 1-to-N, N-to-1, and N-to-N relations.
TransH (Wang et al., 2014) attempts to alleviate the above problems of TransE. It regards a relation vector r as a translation on a hyperplane with w_r as the normal vector. The vector embeddings are first projected onto the hyperplane of relation r, giving h_⊥ = h − w_r^T h w_r and t_⊥ = t − w_r^T t w_r. The loss function of TransH is

f_r(h, t) = ||h_⊥ + r − t_⊥||_2^2.

TransR/CTransR (Lin et al., 2015b) addresses the issue in TransE and TransH that some entities are similar in the entity space but comparably different in other specific aspects. It sets a transfer matrix M_r for each relation r to map entity embeddings into the relation vector space. Its loss function is

f_r(h, t) = ||M_r h + r − M_r t||_2^2.

TransD considers the different types of entities and relations at the same time.
Each relation-entity pair (r, e) has a mapping matrix M_re to map entity embeddings into the relation vector space. The projected vectors are defined as h_⊥ = M_rh h and t_⊥ = M_rt t.
The loss function of TransD is

f_r(h, t) = ||h_⊥ + r − t_⊥||_2^2.

There are many other translation-based models in recent years. For example, TranSparse (Ji et al., 2016) simplifies TransR by enforcing sparseness on the projection matrix, PTransE (Lin et al., 2015a) considers relation paths as translations between entities for representation learning, Xiao et al. (2016a) propose a manifold-based embedding principle (ManifoldE) for precise link prediction, TransF (Feng et al., 2016) regards a relation as a translation between head and tail entity vectors with flexible magnitude, Xiao et al. (2016b) propose a new generative model TransG, and KG2E uses Gaussian embeddings to model data uncertainty. These models are surveyed in (Wang et al.).
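To make the translation-based score functions above concrete, here is a small numpy sketch of the TransE, TransH, and TransR scores (our own illustration, not code from any of the cited papers; all names are ours, and w_r is assumed to be a unit normal vector):

```python
import numpy as np

def transe_score(h, r, t):
    # TransE: f_r(h, t) = ||h + r - t||_2; lower means more plausible.
    return np.linalg.norm(h + r - t)

def transh_score(h, r, t, w_r):
    # TransH: project h and t onto the relation-specific hyperplane
    # with unit normal w_r before translating.
    h_p = h - np.dot(w_r, h) * w_r
    t_p = t - np.dot(w_r, t) * w_r
    return np.linalg.norm(h_p + r - t_p)

def transr_score(h, r, t, M_r):
    # TransR: map entity vectors into the relation space with M_r.
    return np.linalg.norm(M_r @ h + r - M_r @ t)
```

A perfect triple under TransE gives a score of zero, since t coincides with h + r; the projections in TransH and TransR let entities that are close in entity space still behave differently per relation.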

Bilinear Models
RESCAL (Nickel et al., 2011) is the first bilinear model. It associates each entity with a vector to capture its latent semantics. Each relation is represented as a matrix which models pairwise interactions between latent factors.
Many extensions of RESCAL have been proposed in recent years by restricting the bilinear function. For example, DistMult (Yang et al., 2014) simplifies RESCAL by restricting the matrices representing relations to diagonal matrices. HolE (Nickel et al., 2016) combines the expressive power of RESCAL with the efficiency and simplicity of DistMult. It represents both entities and relations as vectors in R^d. ComplEx (Trouillon et al., 2016) extends DistMult by introducing complex-valued embeddings so as to better model asymmetric relations.

External Information Learning Models
External information like textual information is significant for knowledge representation. TEKE uses external context information in a text corpus to represent both entities and words in a joint vector space with alignment models. DKRL (Xie et al., 2016) directly learns entity representations from entity descriptions. Logical rules have also been used to strengthen representations of knowledge graphs (Wang et al., 2015; Guo et al., 2016; Rocktäschel et al., 2015).
None of the models above differentiates between concepts and instances. To the best of our knowledge, our proposed TransC is the first attempt to represent concepts, instances, and relations differently in the same space.

Problem Formulation
In this section, we formulate the problem of knowledge graph embedding with concepts and instances. Before that, we first introduce the input knowledge graph.
Knowledge Graph A knowledge graph KG describes concepts, instances, and the relations between them. It can be formalized as KG = {C, I, R, S}. C and I denote the sets of concepts and instances respectively. The relation set R can be formalized as R = {r_e, r_c} ∪ R_l, where r_e is the instanceOf relation, r_c is the subClassOf relation, and R_l is the instance relation set. Therefore, the triple set S can be divided into three disjoint subsets:

1. InstanceOf triple set S_e = {(i, r_e, c)}, where i ∈ I is an instance, c ∈ C is a concept, and n_e is the size of S_e.

2. SubClassOf triple set S_c = {(c_i, r_c, c_j)}, where c_i, c_j ∈ C are concepts, c_i is a sub-concept of c_j, and n_c is the size of S_c.

3. Relational triple set S_l = {(h, r, t)}, where h, t ∈ I are the head and tail instances, r ∈ R_l is an instance relation, and n_l is the size of S_l.
Given a knowledge graph KG, knowledge graph embedding with concepts and instances aims at learning embeddings for instances, concepts, and relations in the same space R^k. For each concept c ∈ C, we learn a sphere s(p, m), where p ∈ R^k denotes the sphere center and m the radius. For each instance i ∈ I and instance relation r ∈ R_l, we learn low-dimensional vectors i ∈ R^k and r ∈ R^k respectively. Specifically, the instanceOf and subClassOf representations are well-designed so that the transitivity of the isA relations is preserved, namely, instanceOf-subClassOf transitivity:

(i, r_e, c_1) ∈ S_e ∧ (c_1, r_c, c_2) ∈ S_c → (i, r_e, c_2) ∈ S_e,

and subClassOf-subClassOf transitivity:

(c_1, r_c, c_2) ∈ S_c ∧ (c_2, r_c, c_3) ∈ S_c → (c_1, r_c, c_3) ∈ S_c.

Based on this definition, how to model concepts and isA relations is critical to solving this problem.

Our Approach
To differentiate between concepts and instances for knowledge graph embedding, we propose a novel method named TransC. We define different loss functions to measure the relative positions in embedding space, and then jointly learn the representations of concepts, instances, and relations based on the translation-based models.

TransC
We have three kinds of triples in our triple set S and define a different loss function for each of them.
InstanceOf Triple Representation. For a given instanceOf triple (i, r_e, c), if it is a true triple, i should be inside the sphere s to represent the instanceOf relation between them. Actually, there is another relative position, in which i is outside the sphere s. In this condition, the embeddings still need to be optimized, and the loss function is defined as

f_e(i, c) = ||i − p||_2 − m.

SubClassOf Triple Representation. For a subClassOf triple (c_i, r_c, c_j), as before, the concepts c_i, c_j are encoded as spheres s_i(p_i, m_i) and s_j(p_j, m_j). We first denote the distance between the centers of the two spheres as

d = ||p_i − p_j||_2.

If (c_i, r_c, c_j) is a true triple, sphere s_i should be inside sphere s_j (Figure 2a) to represent the subClassOf relation between them. Actually, there are three other relative positions between spheres s_i and s_j (as shown in Figure 2). We also have three loss functions under these three conditions:

1. s_i is separate from s_j (Figure 2b). The embeddings still need to be optimized. In this condition, the two spheres need to get closer during optimization. Therefore, the loss function is defined as

f_c(c_i, c_j) = ||p_i − p_j||_2 + m_i − m_j.

2. s_i intersects with s_j (Figure 2c). This condition is similar to condition 1, and the loss function is defined in the same way:

f_c(c_i, c_j) = ||p_i − p_j||_2 + m_i − m_j.

3. s_j is inside s_i (Figure 2d). This is different from our target, and we should reduce m_i and increase m_j. Hence, the loss function is

f_c(c_i, c_j) = m_i − m_j.

Relational Triple Representation. For a relational triple (h, r, t), TransC learns low-dimensional vectors h, t, r ∈ R^k for instances and relations. Just like TransE (Bordes et al., 2013), the loss function for this kind of triple is defined as

f_r(h, t) = ||h + r − t||_2^2.

With the embeddings above, TransC can easily deal with the transitivity of the isA relations. If we have true triples (i, r_e, c_i) and (c_i, r_c, c_j), which means that i is inside the sphere s_i and s_i is inside s_j, we can conclude that i is also inside the sphere s_j.
It can be concluded that (i, r_e, c_j) is a true triple, so TransC can handle instanceOf-subClassOf transitivity. Similarly, if we have true triples (c_i, r_c, c_j) and (c_j, r_c, c_k), we can conclude that sphere s_i is inside sphere s_k. This means that (c_i, r_c, c_k) is a true triple and TransC can deal with subClassOf-subClassOf transitivity.
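To make the relative-position idea concrete, here is a minimal numpy sketch of the three TransC loss functions described above (our own illustration, not the authors' released implementation; function and variable names are ours):

```python
import numpy as np

def f_e(i, p, m):
    # instanceOf loss: non-positive when instance vector i lies inside
    # the concept sphere s(p, m), positive (penalized) when outside.
    return np.linalg.norm(i - p) - m

def f_c(p_i, m_i, p_j, m_j):
    # subClassOf loss, chosen by the relative position of the spheres
    # s_i(p_i, m_i) and s_j(p_j, m_j).
    d = np.linalg.norm(p_i - p_j)
    if d + m_i <= m_j:
        return 0.0            # s_i inside s_j: the target configuration
    if d + m_j <= m_i:
        return m_i - m_j      # s_j inside s_i: shrink m_i, grow m_j
    return d + m_i - m_j      # separate or intersecting: pull closer

def f_r(h, r, t):
    # relational loss, as in TransE: ||h + r - t||_2^2
    return np.linalg.norm(h + r - t) ** 2
```

Minimizing f_c in the separate and intersecting cases simultaneously moves the centers together and adjusts the radii toward the containment configuration, which is what makes the transitivity argument above go through.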

Training Method
For instanceOf triples, we use ξ and ξ′ to denote a positive triple and a negative triple, and S_e and S_e′ to denote the positive and negative triple sets. Then we can define a margin-based ranking loss for instanceOf triples:

L_e = Σ_{ξ∈S_e} Σ_{ξ′∈S_e′} [γ_e + f_e(ξ) − f_e(ξ′)]_+,

where [x]_+ = max(0, x) and γ_e is the margin separating positive and negative triples. Similarly, for subClassOf triples, we have a ranking loss:

L_c = Σ_{ξ∈S_c} Σ_{ξ′∈S_c′} [γ_c + f_c(ξ) − f_c(ξ′)]_+,

and for relational triples, we have a ranking loss:

L_l = Σ_{ξ∈S_l} Σ_{ξ′∈S_l′} [γ_l + f_r(ξ) − f_r(ξ′)]_+.

Finally, we define the overall loss function as a linear combination of these three functions:

L = L_e + L_c + L_l.

The goal of training TransC is to minimize the above function and iteratively update the embeddings of concepts, instances, and relations. Every triple in our training set has a label indicating whether the triple is positive or negative. But an existing knowledge graph only contains positive triples, so we need to generate negative triples by corrupting positive ones. For a relational triple (h, r, t), we replace h or t to generate a negative triple (h′, r, t) or (h, r, t′). For example, we get t′ by randomly picking from a set M_t = M_1 ∪ M_2 ∪ · · · ∪ M_n, where n is the number of concepts that t belongs to and M_i = {a | a ∈ I ∧ (a, r_e, c_i) ∈ S_e ∧ (t, r_e, c_i) ∈ S_e ∧ t ≠ a}. For the other two kinds of triples, we follow the same policy to generate negative triples. We also use the two sampling strategies "unif" and "bern" described in (Wang et al., 2014) to replace instances or concepts.
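The margin-based ranking loss and the concept-aware negative sampling can be sketched as follows (a simplified illustration under our own naming, not the released training code; instance_of maps each instance to the set of concepts it belongs to):

```python
import random

def margin_loss(pos_scores, neg_scores, gamma):
    # Margin-based ranking loss: sum over paired positive/negative
    # triples of [gamma + f(pos) - f(neg)]_+, with [x]_+ = max(0, x).
    return sum(max(0.0, gamma + p - n)
               for p, n in zip(pos_scores, neg_scores))

def corrupt_tail(triple, instance_of, all_instances):
    # Replace the tail t of (h, r, t) with a random instance that
    # shares at least one concept with t (the set M_t above), falling
    # back to a uniform draw when no such instance exists.
    h, r, t = triple
    shared = {a for c in instance_of.get(t, ())
              for a in all_instances
              if c in instance_of.get(a, ()) and a != t}
    pool = sorted(shared) if shared else [a for a in all_instances if a != t]
    return (h, r, random.choice(pool))
```

Drawing negatives from instances of the same concept yields harder negative triples than uniform corruption, which is the point of building M_t from t's concepts.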

Experiments and Analysis
We evaluate our method on two typical tasks commonly used in knowledge graph embedding: link prediction (Bordes et al., 2013) and triple classification (Socher et al., 2013).

Datasets
Most previous work used FB15K and WN18 (Bordes et al., 2013) for evaluation. But these two datasets are not suitable for our model, because FB15K mainly consists of instances and WN18 mainly contains concepts. Therefore, we use another popular knowledge graph, YAGO (Suchanek et al., 2007), for evaluation, which contains many concepts from WordNet and instances from Wikipedia. We construct a subset of YAGO named YAGO39K through the following steps: (1) We randomly select relational triples (h, r, t) from the whole YAGO dataset as our relational triple set S_l.
(2) For every instance and instance relation appearing in our relational triples, we save it to construct the instance set I and the instance relation set R_l respectively.
(3) For every instanceOf triple (i, r_e, c) in YAGO, if i ∈ I, we save this triple to construct the instanceOf triple set S_e.
(4) For every concept appearing in the instanceOf triple set S_e, we save it to construct the concept set C.
(5) For every subClassOf triple (c_i, r_c, c_j) in YAGO, if c_i ∈ C ∧ c_j ∈ C, we save this triple to construct the subClassOf triple set S_c.
(6) Finally, we obtain our triple set S = S_e ∪ S_c ∪ S_l and our relation set R = {r_e, r_c} ∪ R_l.
To evaluate every model's performance in handling the transitivity of the isA relations, we generate new triples from YAGO39K using the transitivity of isA. These new triples are added to the valid and test sets of YAGO39K to create a new dataset named M-YAGO39K. The specific steps are as follows: (1) For every instanceOf triple (i, r_e, c) in the valid and test sets, if (c, r_c, c_j) exists in the training set, we save a new instanceOf triple (i, r_e, c_j).
(2) For every subClassOf triple (c_i, r_c, c_j) in the valid and test sets, if (c_j, r_c, c_k) exists in the training set, we save a new subClassOf triple (c_i, r_c, c_k).
(3) We add these new triples to the valid and test sets of YAGO39K to get M-YAGO39K.
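The expansion steps above amount to one application of isA transitivity against the training set, which can be sketched as (our own illustration of the construction, not the dataset-building script; names are ours):

```python
def expand_with_transitivity(eval_instance_of, eval_subclass_of,
                             train_subclass_of):
    # One step of isA transitivity against the training set:
    # (i, instanceOf, c) + (c, subClassOf, c')  -> (i, instanceOf, c')
    # (c, subClassOf, c') + (c', subClassOf, c'') -> (c, subClassOf, c'')
    supers = {}
    for c, c2 in train_subclass_of:
        supers.setdefault(c, set()).add(c2)
    new_instance_of = {(i, c2)
                       for i, c in eval_instance_of
                       for c2 in supers.get(c, ())}
    new_subclass_of = {(c, c3)
                       for c, c2 in eval_subclass_of
                       for c3 in supers.get(c2, ())}
    return new_instance_of, new_subclass_of
```

A model that truly captures transitivity should classify these derived triples as positive even though they never appear verbatim in the training set.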
The statistics of YAGO39K and M-YAGO39K are shown in Table 1.

Link Prediction
Link Prediction aims to predict the missing h or t for a relational triple (h, r, t). In this task, we need to give a ranking list of candidate instances from the knowledge graph, instead of only giving one best result.
For every test relational triple (h, r, t), we remove the head or tail instance, replace it with every instance in the knowledge graph, and rank these instances in ascending order of the distances calculated by the loss function f_r. Following (Bordes et al., 2013), we use two evaluation metrics in this task: (1) the mean reciprocal rank of all correct instances (MRR) and (2) the proportion of correct instances ranking no lower than N (Hits@N). A good embedding model should achieve a high MRR and a high Hits@N. Note that a corrupted triple may also exist in the knowledge graph, in which case it should also be regarded as a correct prediction. Ignoring this issue may underestimate the results. Hence, we can filter out every corrupted triple that appears in our knowledge graph before producing the ranking list. The first evaluation setting (without filtering) is called "Raw" and the second is called "Filter." We report experimental results in both settings.
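The ranking protocol and the two metrics can be sketched as follows (a simplified illustration under our own naming, not the evaluation script; lower scores mean more plausible candidates):

```python
def rank_of_target(scores, target, known_correct=(), filtered=False):
    # scores: {candidate: score}, lower is better. In the "Filter"
    # setting, other known-correct candidates are removed before
    # ranking so they cannot push the target down the list.
    items = [(c, s) for c, s in scores.items()
             if not (filtered and c in known_correct and c != target)]
    items.sort(key=lambda cs: cs[1])
    return 1 + [c for c, _ in items].index(target)

def mrr_and_hits(ranks, n=10):
    # MRR: mean reciprocal rank; Hits@N: fraction of ranks <= N.
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    hits = sum(r <= n for r in ranks) / len(ranks)
    return mrr, hits
```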
Evaluation results on YAGO39K are shown in Table 2. From the table, we can conclude that: (1) TransC significantly outperforms other models in terms of Hits@N. This indicates that TransC uses the information in isA triples better than other models, which is helpful for instance representation learning. (2) TransC performs slightly worse than DistMult in some settings. The reason may be that we determine the best configurations only according to Hits@10, which may lead to a low MRR. (3) The "bern" sampling trick works well for TransC.

Triple Classification
Triple Classification aims to judge whether a given triple is correct or not, which is a binary classification task. This triple can be a relational triple, an instanceOf triple or a subClassOf triple.
Negative triples are needed for the evaluation of binary classification. Hence, we construct negative triples following the same setting as (Socher et al., 2013). There are as many negative triples as true triples in both the valid and test sets.
For triple classification, we set a threshold δ_r for every relation r. For a given test triple, if its loss function value is smaller than δ_r, it is classified as positive, otherwise negative. δ_r is obtained by maximizing the classification accuracy on the valid set.
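Selecting δ_r on the valid set can be sketched as (our own illustration, not the authors' code; scores are per-triple loss values for one relation, labels mark the true triples):

```python
def best_threshold(scores, labels):
    # Pick the relation-specific threshold delta_r that maximizes
    # accuracy on the valid set: predict positive iff score < delta_r.
    candidates = sorted(set(scores)) + [max(scores) + 1.0]
    def acc(d):
        return sum((s < d) == l for s, l in zip(scores, labels)) / len(scores)
    return max(candidates, key=acc)
```

Only the observed score values (plus one value above the maximum) need to be tried, since accuracy can change only at those points.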
In this task, we use the datasets YAGO39K and M-YAGO39K for evaluation. Parameters are selected in the same way as in link prediction. The best configurations are determined by accuracy on the valid set. The optimal configurations for YAGO39K are: γ_l = 1, γ_e = 0.1, γ_c = 0.1, λ = 0.001, n = 100 and taking L_2 as dissimilarity. The optimal configurations for M-YAGO39K are: γ_l = 1, γ_e = 0.1, γ_c = 0.3, λ = 0.001, n = 100 and taking L_2 as dissimilarity. For both datasets, we traverse all the training triples for 1000 rounds.
Our datasets have three kinds of triples, so we run experiments on each of them. Experimental results for relational triples, instanceOf triples, and subClassOf triples are shown in Table 2, Table 3, and Table 4 respectively. In Table 3 and Table 4, a rising arrow means that the model's performance improves from YAGO39K to M-YAGO39K, and a down arrow means a drop.
From Table 2, we can learn that: (1) TransC outperforms all previous work in relational triple classification.
From Table 3 and Table 4, we can conclude that: (1) On YAGO39K, some compared models perform better than TransC in instanceOf triple classification. This is because instanceOf accounts for most triples (53.5%) among all relations in YAGO39K. This relation is trained a disproportionate number of times and nearly achieves the best performance, which has an adverse effect on the other triples. TransC finds a balance between them, so all kinds of triples achieve good performance. (2) On YAGO39K, TransC outperforms other models in subClassOf triple classification. As shown in Table 1, subClassOf triples are much fewer than instanceOf triples. Hence, other models cannot achieve the best performance under the bad influence of instanceOf triples. (3) On M-YAGO39K, TransC outperforms previous work in both instanceOf and subClassOf triple classification, which indicates that TransC handles the transitivity of isA relations much better than other models. (4) Comparing the experimental results on YAGO39K and M-YAGO39K, we find that most previous work suffers a big drop in instanceOf triple classification and a small drop in subClassOf triple classification. This shows that previous work cannot deal with instanceOf-subClassOf transitivity well. (5) For TransC, nearly all performance numbers improve significantly from YAGO39K to M-YAGO39K. Both instanceOf-subClassOf and subClassOf-subClassOf transitivity are handled well by TransC.

Case Study
We have shown that TransC performs well for knowledge graph embedding and for dealing with the transitivity of isA relations. In this section, we present an example of finding new instanceOf and subClassOf triples using the results of TransC.
As shown in Figure 3, New York City is an instance and the others are concepts. The solid lines represent the triples from our datasets and the dotted lines represent the facts inferred by our model. TransC can find two new instanceOf triples (New York City, instanceOf, City) and (New York City, instanceOf, Municipality). It can also find a new subClassOf triple (Port Cities, subClassOf, City). Following the transitivity of isA relations, we know all three new triples are right. Unfortunately, most previous work regards these three triples as wrong, which means it cannot handle the transitivity of isA relations well.

Conclusion and Future Work
In this paper, we propose a new knowledge embedding model named TransC. TransC embeds instances, concepts, and relations in the same space to deal with the transitivity of the isA relations. We create a new dataset YAGO39K for evaluation. Experimental results show that TransC outperforms previous translation-based models in most cases. Besides, it can also handle the transitivity of isA relations much better than other models. In our future work, we will explore the following research directions: (1) A sphere is a simple way to represent a concept in semantic space, but it is limited by its simplicity. We will try to find a more expressive model instead of spheres to represent concepts.
(2) A concept may have different meanings in different triples. We will try to use several typical vectors of instances as a concept's centers to represent different meanings of a concept. Then a concept can have different embeddings in different triples.