Knowledge Graph Embeddings in Geometric Algebras

Knowledge graph (KG) embedding aims at embedding entities and relations in a KG into a low-dimensional latent representation space. Existing KG embedding approaches model entities and relations in a KG by utilizing real-valued, complex-valued, or hypercomplex-valued (quaternion or octonion) representations, all of which are subsumed by geometric algebras. In this work, we introduce a novel geometric algebra-based KG embedding framework, GeomE, which utilizes multivector representations and the geometric product to model entities and relations. Our framework subsumes several state-of-the-art KG embedding approaches and is advantageous in its ability to model various key relation patterns, including (anti-)symmetry, inversion and composition, its rich expressiveness with a higher degree of freedom, and its good generalization capacity. Experimental results on multiple benchmark knowledge graphs show that the proposed approach outperforms existing state-of-the-art models for link prediction.


Introduction
Knowledge graphs (KGs) are directed graphs where nodes represent entities and (labeled) edges represent the types of relationships among entities. A KG can be represented as a collection of triples (h, r, t), each representing a relation r between a "head entity" h and a "tail entity" t. Some real-world knowledge graphs include Freebase (Bollacker et al., 2008), WordNet (Miller, 1995), YAGO (Suchanek et al., 2007), and DBpedia (Auer et al., 2007).
However, most existing KGs are incomplete. The task of link prediction alleviates this drawback by inferring missing facts based on the known facts in a KG and thus has gained growing interest. Embedding KGs into a low-dimensional space and learning latent representations of entities and relations in KGs is an effective solution for this task. In general, most existing KG embedding models learn to embed KGs by optimizing a scoring function which assigns higher scores to true facts than invalid ones.
Recently, learning KG embeddings in complex or hypercomplex spaces has proven to be a highly effective inductive bias. ComplEx (Trouillon et al., 2016), RotatE, pRotatE (Sun et al., 2019), and QuatE (Zhang et al., 2019) achieved state-of-the-art results on link prediction due to their ability to capture various relation patterns (e.g., symmetry and antisymmetry). These models score relational triples with asymmetric products, the Hermitian product for complex-valued embeddings and the Hamilton product for quaternion-valued ones, where the components of entity/relation embeddings are complex numbers or quaternions.
Complex numbers and quaternions can be described by the various components of a Clifford multivector (Chappell et al., 2015). In other words, the geometric algebra of Clifford (1882) provides an elegant and efficient rotation representation in terms of multivectors, which is more general than Hamilton (1844)'s unit quaternions.
In this paper, we propose a novel KG embedding approach, GeomE, which is based on Clifford multivectors and the geometric product. Concretely, we utilize multivector embeddings in geometric algebras of N grades (N = 2, 3) to represent entities and relations. Each component of an entity/relation embedding is a multivector in a geometric algebra of N grades, G^N, comprising scalars, vectors and bivectors, as well as trivectors (for N = 3). For a triple (h, r, t), we multiply the embeddings of h, r and t with an asymmetric geometric product that involves the Clifford conjugation of the tail-entity embedding, and obtain the final score of the triple from the product embedding.
This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/.
The advantages of our framework include the following points:
• Our framework GeomE subsumes ComplEx, pRotatE and QuatE. A complex number can be regarded as a scalar plus a bivector in the geometric algebra G^2, and a quaternion is isomorphic to a scalar plus three bivectors in the geometric algebra G^3. Thus, GeomE inherits the excellent properties of pRotatE, ComplEx and QuatE and is able to model various relation patterns, e.g., (anti-)symmetry, inversion and composition.
• The geometric product unites the Grassmann (1844) and Hamilton (1844) algebras into a single structure. Compared to the Hamilton product used in QuatE, the geometric product provides a greater extent of expressiveness since it operates on vectors, trivectors and n-vectors in addition to scalars and bivectors.
• GeomE is not just a single KG embedding model: it can be generalized to geometric algebras of different grades and is hence more flexible in its expressiveness than pRotatE, ComplEx and QuatE. In this paper, we propose two new KG embedding models, GeomE2D and GeomE3D, based on multivectors from G^2 and G^3, and also test their combination model, GeomE+.

Related Work
Most KG embedding models can be classified as distance-based or semantic matching based, according to their scoring functions.
Distance-based scoring functions aim to learn embeddings by representing relations as translations from head entities to tail entities. Bordes et al. (2013) proposed TransE by assuming that the embedding of h translated by the embedding of r should be close to the embedding of t. Since then, many variants and extensions of TransE have been proposed. For example, TransH (Wang et al., 2014) projects entities and relations into a hyperplane; TransR (Lin et al., 2015) introduces separate projection spaces for entities and relations; and TransD (Ji et al., 2015) uses independent projection vectors for each entity and relation, reducing the amount of computation compared to TransR. TorusE (Ebisu and Ichise, 2018) defines embeddings and a distance function on a compact Lie group, the torus. The recent distance-based KG embedding models RotatE and pRotatE (Sun et al., 2019) propose rotation-based distance scoring functions with complex-valued embeddings. Likewise, TransComplEx (Nayyeri et al., 2019) also maps entities and relations into a complex-valued vector space.
On the other hand, semantic matching models include RESCAL (Nickel et al., 2011), DistMult (Yang et al., 2014), ComplEx (Trouillon et al., 2016), SimplE (Kazemi and Poole, 2018) and QuatE (Zhang et al., 2019). In RESCAL, each relation is represented by a square matrix, while DistMult replaces it with a diagonal matrix to reduce the complexity. SimplE is another simple yet effective bilinear approach for knowledge graph embedding. ComplEx embeds entities and relations in a complex space and utilizes an asymmetric Hermitian product to score triples, which is immensely helpful in modeling various relation patterns. QuatE extends ComplEx to a hypercomplex space and replaces the Hermitian product with the Hamilton product, which provides a greater extent of expressiveness. In addition, neural network-based KG embedding models have also been proposed, e.g., NTN (Socher et al., 2013), ConvE (Dettmers et al., 2018), ConvKB (Nguyen et al., 2019) and InteractE (Vashishth et al., 2020).
Our proposed approach, GeomE, subsumes ComplEx, pRotatE and QuatE in the geometric algebras. In addition to inheriting the attractive properties of these existing KG embedding models, our approach takes advantage of multivectors, e.g., their rich geometric meanings, their excellent representation ability and their generalization ability across geometric algebras of different grades. Owing to these merits, geometric algebras and multivectors have also been widely applied in computer vision and neurocomputing (Bayro-Corrochano, 2018).

Geometric Algebra and Multivectors
Leaning on the earlier concepts of Grassmann (1844)'s exterior algebra and Hamilton (1844)'s quaternions, Clifford (1882) intended his geometric algebra to describe the geometric properties of scalars, vectors and eventually higher-dimensional objects. In addition to the well-known scalar and vector elements, there are bivectors, trivectors, n-vectors and multivectors, which are higher-dimensional generalisations of vectors. An N-dimensional vector space R^N can be embedded in a geometric algebra of N grades, G^N. In this section, we take G^2 and G^3 as examples to introduce multivectors and some corresponding operators. A 2-grade multivector M ∈ G^2 is built from one scalar, two vectors and one bivector: M = a_0 + a_1e_1 + a_2e_2 + a_12e_1e_2. An arbitrary 3-grade multivector M ∈ G^3 can be written as M = a_0 + a_1e_1 + a_2e_2 + a_3e_3 + a_12e_1e_2 + a_23e_2e_3 + a_13e_1e_3 + a_123e_1e_2e_3, where a_0, a_1, a_2, a_3, a_12, a_23, a_13, a_123 are all real numbers. Each element of a multivector, e.g., a scalar, a vector, or an n-vector, is called a blade. The norm of a multivector is the square root of the sum of the squared real coefficients of all blades. Taking a 2-grade multivector as an example, its norm is defined as ||M|| = sqrt(a_0^2 + a_1^2 + a_2^2 + a_12^2).
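The norm above can be sketched in a few lines of Python (an illustrative sketch, not from the paper; the tuple layout (a0, a1, a2, a12) is our own convention):

```python
import math

# A 2-grade multivector M in G^2 is stored as its four blade coefficients
# (a0, a1, a2, a12) over the basis {1, e1, e2, e1e2}.
def norm_g2(m):
    """Norm of a multivector: square root of the sum of squared blade coefficients."""
    return math.sqrt(sum(c * c for c in m))

M = (1.0, 2.0, 2.0, 4.0)
print(norm_g2(M))  # sqrt(1 + 4 + 4 + 16) = 5.0
```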

Multivectors vs Quaternions
Quaternions are elements of the form Q = q_0 + q_1i + q_2j + q_3k, where q_0, q_1, q_2, q_3 are real numbers and i, j, k are three different square roots of −1, introduced as the new elements for the construction of quaternions. They satisfy the algebraic properties i^2 = j^2 = k^2 = ijk = −1. Bivectors from G^3 have similar algebraic properties to the basis elements of the quaternion space:
(e_ie_j)^2 = e_ie_je_ie_j = −e_ie_ie_je_j = −1, where i, j ∈ {1, 2, 3} and i ≠ j, and (e_1e_2)(e_2e_3)(e_1e_3) = e_1e_3e_1e_3 = −1. Thus we can embed a quaternion in the 3-grade geometric algebra G^3 as a scalar plus three bivectors. A complex number can likewise be regarded as a scalar plus one bivector from G^2.
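These identities can be checked mechanically by multiplying basis blades with sign bookkeeping. The sketch below (our own illustration; blades are represented as sorted tuples of generator indices) verifies that each bivector squares to −1 and that the product of the three bivectors is −1, mirroring ijk = −1:

```python
def blade_mul(b1, b2):
    """Multiply two basis blades of G^n, given as sorted tuples of generator
    indices (e.g. (1, 2) for e1e2). Returns (sign, blade)."""
    s = list(b1) + list(b2)
    sign = 1
    # Bubble-sort into canonical order; each swap of two distinct
    # generators (e_i e_j -> -e_j e_i) flips the sign.
    for i in range(len(s)):
        for j in range(len(s) - 1 - i):
            if s[j] > s[j + 1]:
                s[j], s[j + 1] = s[j + 1], s[j]
                sign = -sign
    # Contract equal neighbours using e_i e_i = +1.
    out = []
    for idx in s:
        if out and out[-1] == idx:
            out.pop()
        else:
            out.append(idx)
    return sign, tuple(out)

# Each bivector squares to -1, like the quaternion units i, j, k.
assert blade_mul((1, 2), (1, 2)) == (-1, ())
assert blade_mul((2, 3), (2, 3)) == (-1, ())
assert blade_mul((1, 3), (1, 3)) == (-1, ())

# (e1e2)(e2e3)(e1e3) = -1, mirroring ijk = -1.
sign, blade = blade_mul((1, 2), (2, 3))
sign2, blade2 = blade_mul(blade, (1, 3))
assert (sign * sign2, blade2) == (-1, ())
```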

Geometric Product and Clifford Conjugation
Geometric algebra also introduces a new product, the geometric product, as well as three multivector involutions: space inversion, reversion and Clifford conjugation.
The geometric product of two multivectors comprises multiplications between scalars, vectors, bivectors, trivectors and n-vectors. The product of two 2-grade multivectors M_a = a_0 + a_1e_1 + a_2e_2 + a_12e_1e_2 and M_b = b_0 + b_1e_1 + b_2e_2 + b_12e_1e_2 from G^2 is equal to
M_a ⊗ M_b = (a_0b_0 + a_1b_1 + a_2b_2 − a_12b_12) + (a_0b_1 + a_1b_0 − a_2b_12 + a_12b_2)e_1 + (a_0b_2 + a_2b_0 + a_1b_12 − a_12b_1)e_2 + (a_0b_12 + a_12b_0 + a_1b_2 − a_2b_1)e_1e_2.
The product of two 3-grade multivectors M_a = a_0 + a_1e_1 + a_2e_2 + a_3e_3 + a_12e_1e_2 + a_23e_2e_3 + a_13e_1e_3 + a_123e_1e_2e_3 and M_b = b_0 + b_1e_1 + b_2e_2 + b_3e_3 + b_12e_1e_2 + b_23e_2e_3 + b_13e_1e_3 + b_123e_1e_2e_3 from G^3 is given in Appendix B.
Clifford Conjugation: The Clifford conjugation of an n-grade multivector M is the composition of space inversion M* and reversion M†, i.e., M̄ = (M†)*, where space inversion M* is obtained by changing e_i to −e_i and reversion M† is obtained by reversing the order of all products, i.e., changing e_1e_2⋯e_n to e_ne_{n−1}⋯e_1. For example, the conjugation of M = a_0 + a_1e_1 + a_2e_2 + a_12e_1e_2 ∈ G^2 is M̄ = a_0 − a_1e_1 − a_2e_2 − a_12e_1e_2. Note that the product of a multivector M and its conjugation M̄ is always a scalar. For the 2-grade multivector above, we have M ⊗ M̄ = a_0^2 − a_1^2 − a_2^2 + a_12^2, producing a real number, though not necessarily a non-negative one.
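The closed-form G^2 product and the conjugation can be written out directly (an illustrative sketch using our (a0, a1, a2, a12) tuple convention); the final assertion checks that M ⊗ M̄ reduces to the scalar a_0^2 − a_1^2 − a_2^2 + a_12^2:

```python
def gp2(a, b):
    """Geometric product of two 2-grade multivectors (a0, a1, a2, a12)."""
    a0, a1, a2, a12 = a
    b0, b1, b2, b12 = b
    return (a0*b0 + a1*b1 + a2*b2 - a12*b12,   # scalar part
            a0*b1 + a1*b0 - a2*b12 + a12*b2,   # e1 part
            a0*b2 + a2*b0 + a1*b12 - a12*b1,   # e2 part
            a0*b12 + a12*b0 + a1*b2 - a2*b1)   # e1e2 part

def conj2(m):
    """Clifford conjugation in G^2: negate the vector and bivector parts."""
    a0, a1, a2, a12 = m
    return (a0, -a1, -a2, -a12)

M = (1.0, 2.0, 3.0, 4.0)
# M (x) conj(M) is a pure scalar: 1 - 4 - 9 + 16 = 4, and may be negative in general.
assert gp2(M, conj2(M)) == (4.0, 0.0, 0.0, 0.0)
```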

Knowledge Graph Embedding Model based on Geometric Algebras
Let E denote the set of all entities and R the set of all relations present in a knowledge graph. A triple is represented as (h, r, t), with h, t ∈ E denoting head and tail entities respectively and r ∈ R the relation between them. We use Ω = {(h, r, t)} ⊆ E × R × E to denote the set of observed triples. The key issue of KG embeddings is to represent entities and relations in a continuous low-dimensional space. Our approach GeomE uses the geometric product and multivectors for KG embedding. In this paper, we propose two models built with our approach, GeomE2D and GeomE3D, based on 2-grade multivectors and 3-grade multivectors respectively.
GeomE2D represents each entity/relation as a k-dimensional embedding M where each element is a 2-grade multivector, i.e., M = [M_1, . . . , M_k], M_i ∈ G^2, i = 1, . . . , k, where k is the dimensionality of embeddings. Given a triple (h, r, t), we denote the embeddings of h, r and t by M_h, M_r and M_t. The scoring function of GeomE is defined as the scalar part of the product of the embeddings of h, r and t under the geometric product and the Clifford conjugation:
φ(h, r, t) = Sc(M_h ⊗_n M_r ⊗_n M̄_t), (5)
where n = 2 for GeomE2D and n = 3 for GeomE3D, ⊗_n denotes the element-wise geometric product between two k-dimensional n-grade multivector embeddings, M̄_t denotes the element-wise Clifford conjugation of M_t, and Sc(·) extracts the scalar (grade-0) part.
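A minimal sketch of this scoring function for GeomE2D (our own illustration; embeddings are lists of (a0, a1, a2, a12) tuples, and gp2/conj2 implement the G^2 product and conjugation from Section 3):

```python
def gp2(a, b):
    """Geometric product of two 2-grade multivectors (a0, a1, a2, a12)."""
    a0, a1, a2, a12 = a
    b0, b1, b2, b12 = b
    return (a0*b0 + a1*b1 + a2*b2 - a12*b12,
            a0*b1 + a1*b0 - a2*b12 + a12*b2,
            a0*b2 + a2*b0 + a1*b12 - a12*b1,
            a0*b12 + a12*b0 + a1*b2 - a2*b1)

def conj2(m):
    """Clifford conjugation in G^2: negate the vector and bivector parts."""
    a0, a1, a2, a12 = m
    return (a0, -a1, -a2, -a12)

def score_geome2d(M_h, M_r, M_t):
    """phi(h, r, t) = Sc(M_h (x) M_r (x) conj(M_t)), summed over the k elements."""
    return sum(gp2(gp2(h_i, r_i), conj2(t_i))[0]
               for h_i, r_i, t_i in zip(M_h, M_r, M_t))

# With a purely scalar relation embedding the score is symmetric in h and t.
h = [(0.3, 0.1, -0.4, 0.2), (0.5, -0.2, 0.1, 0.7)]
r = [(0.9, 0.0, 0.0, 0.0), (1.1, 0.0, 0.0, 0.0)]
t = [(-0.6, 0.4, 0.2, -0.1), (0.8, 0.3, -0.5, 0.2)]
assert abs(score_geome2d(h, r, t) - score_geome2d(t, r, h)) < 1e-12
```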

Training
Most previous semantic matching models, e.g., ComplEx, are learned by minimizing a sampled binary logistic loss function (Trouillon et al., 2016). Motivated by the solid results in (Lacroix et al., 2018), we formulate the link prediction task as a multiclass classification problem using a full multiclass log-softmax loss function, and apply the N3 regularization and reciprocal approaches to our models. Given a training set Ω of triples (h, r, t), we create a reciprocal training set Ω* by adding a reverse triple (t, r^{−1}, h) for each (h, r, t), and the instantaneous multiclass log-softmax loss of a triple is defined as
ℓ(h, r, t) = −φ(h, r, t) + log( Σ_{t′∈E} exp(φ(h, r, t′)) ),
with the embeddings additionally penalized by a weighted N3 regularizer. N3 regularization and reciprocal learning have been proven helpful in boosting the performance of semantic matching models (Lacroix et al., 2018; Zhang et al., 2019). Unlike the sampled binary logistic loss function, which generates a fixed number of negative samples for each training triple by randomly corrupting the head or tail entity, the full multiclass log-softmax considers all possible negative samples and thus converges quickly. On FB15K, training a GeomE3D model with a high dimensionality of k = 1000 needs fewer than 100 epochs and costs about 4 minutes per epoch on a single GeForce RTX 2080 device.
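The per-triple loss and the N3 penalty can be sketched as follows (a toy, dependency-free illustration; real implementations batch this over all entities on GPU, and the regularization weight `lam` is a hyperparameter name of our own):

```python
import math

def log_softmax_loss(scores, true_idx):
    """-phi(h, r, t) + log sum_{t'} exp(phi(h, r, t')), over all candidate tails."""
    m = max(scores)  # subtract the max for numerical stability
    lse = m + math.log(sum(math.exp(s - m) for s in scores))
    return -scores[true_idx] + lse

def n3_reg(embeddings, lam):
    """Weighted N3 regularizer: lam * sum of cubed absolute blade coefficients."""
    return lam * sum(abs(c) ** 3 for m in embeddings for c in m)

# Two equally scored candidates give the maximum-entropy loss log(2).
assert abs(log_softmax_loss([0.0, 0.0], 0) - math.log(2)) < 1e-12
assert n3_reg([(1.0, -2.0, 0.0, 0.0)], 1.0) == 9.0
```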

Connection to QuatE, Complex and pRotatE
As mentioned in Section 3, a bivector unit in a geometric algebra has similar properties to an imaginary unit in a complex or hypercomplex space. Thus, a quaternion is isomorphic to a 3-grade multivector consisting of a scalar and three bivectors, and a complex number can be regarded as a 2-grade multivector consisting of a scalar and one bivector.
Subsumption of QuatE: By setting the coefficients of the vector and trivector parts of M_h, M_r and M_t in Equation 5 to zero, each element reduces to a quaternion Q = a_0 + a_12e_1e_2 + a_23e_2e_3 + a_13e_1e_3, and the Clifford conjugation reduces to the quaternion conjugation. The scoring function of GeomE3D then becomes
φ(h, r, t) = Sc(Q_h ⊗ Q_r ⊗ Q̄_t) = ⟨Q_h ⊗ Q_r, Q_t⟩, (7)
where ⊗ acts as the Hamilton product, ⟨·,·⟩ denotes the component-wise inner product, and the components are h_j = [h_j^1, . . . , h_j^k], j ∈ {0, 12, 23, 13}. Equation 7 recovers the form of the scoring function of QuatE, up to the normalization of the relational quaternion. Therefore, GeomE3D subsumes the QuatE model. Subsumption of ComplEx: By setting the coefficients of the vector parts of M_h, M_r and M_t to zero in Equation 5 for GeomE2D, we obtain
φ(h, r, t) = ⟨h_0 ∘ r_0 − h_12 ∘ r_12, t_0⟩ + ⟨h_0 ∘ r_12 + h_12 ∘ r_0, t_12⟩, (8)
where ∘ denotes the Hadamard product. Equation 8 recovers the form of the scoring function of ComplEx. Therefore, GeomE2D subsumes the ComplEx model. Additionally, comparing Equations 7 and 8, we conclude that GeomE3D also subsumes ComplEx.
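The ComplEx reduction can be verified numerically: embedding a complex number x + iy as the multivector (x, 0, 0, y) makes the G^2 score coincide with Re(h · r · conj(t)) (our own check, not from the paper):

```python
def gp2(a, b):
    """Geometric product of two 2-grade multivectors (a0, a1, a2, a12)."""
    a0, a1, a2, a12 = a
    b0, b1, b2, b12 = b
    return (a0*b0 + a1*b1 + a2*b2 - a12*b12,
            a0*b1 + a1*b0 - a2*b12 + a12*b2,
            a0*b2 + a2*b0 + a1*b12 - a12*b1,
            a0*b12 + a12*b0 + a1*b2 - a2*b1)

def conj2(m):
    a0, a1, a2, a12 = m
    return (a0, -a1, -a2, -a12)

def geome2d_score_elem(h, r, t):
    """Per-element GeomE2D score: scalar part of h (x) r (x) conj(t)."""
    return gp2(gp2(h, r), conj2(t))[0]

def as_mv(z):
    """Embed a complex number x + iy as the G^2 multivector (x, 0, 0, y)."""
    return (z.real, 0.0, 0.0, z.imag)

h, r, t = 2.0 + 3.0j, 1.0 + 0.5j, -1.0 + 2.0j
complex_score = (h * r * t.conjugate()).real   # ComplEx-style score
assert abs(geome2d_score_elem(as_mv(h), as_mv(r), as_mv(t)) - complex_score) < 1e-12
```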
Subsumption of pRotatE: Apart from ComplEx and QuatE, GeomE also subsumes pRotatE. We start from the scoring function of pRotatE and show that it is a special case of Equation 8. The scoring function of pRotatE is defined as
φ_pRotatE(h, r, t) = −2C ||sin((θ_h + θ_r − θ_t)/2)||,
where the modulus of each element of the relation vectors is |r_i| = 1, i = 1, . . . , k, and |h_i| = |t_i| = C ∈ R^+. After some derivation on the scores of pRotatE and GeomE2D (see details in Appendix D), we obtain φ_GeomE2D(h, r, t) = (2kC^2 − φ_pRotatE^2(h, r, t))/2. Note that 2kC^2 is constant, since k and C are fixed, and thus does not affect the overall ranking obtained by computing and sorting the scores of triples.
For a triple (h, r, t), there is a positive correlation between its GeomE2D score and its pRotatE score, since pRotatE scores are always non-positive. Therefore, GeomE2D, and consequently GeomE3D, subsumes the pRotatE model in terms of ranking.
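The per-dimension relationship underlying this reduction is the half-angle identity C²cos(Δθ) = C² − (2C sin(Δθ/2))²/2, with Δθ = θ_h + θ_r − θ_t. A quick numeric check (our own sketch, standing in for the full derivation in Appendix D):

```python
import math

C = 3.0  # common entity modulus |h_i| = |t_i| = C
for dtheta in [0.0, 0.4, 1.3, 2.0, math.pi]:
    geome_term = C * C * math.cos(dtheta)        # per-dimension GeomE2D score
    protate_term = 2 * C * math.sin(dtheta / 2)  # per-dimension pRotatE distance
    # cos(x) = 1 - 2 sin^2(x/2)  =>  C^2 cos(x) = C^2 - (2C sin(x/2))^2 / 2
    assert abs(geome_term - (C * C - protate_term ** 2 / 2)) < 1e-9
```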
Overall, our framework subsumes ComplEx, pRotatE and QuatE and provides more degrees of freedom by introducing vectors and trivectors. In addition, it can be generalized to geometric algebras with higher grades (G^n, n > 3) and is hence more flexible in its expressiveness.
Although our framework introduces more coefficients, our models have the same time complexity as pRotatE, ComplEx and QuatE, as shown in Table 1, and their memory sizes increase linearly with the dimensionality of the embeddings.

Ability of Modeling Various Relation Patterns
Our framework subsumes pRotatE, ComplEx and QuatE, and thus inherits their attractive properties. One of the merits of our framework is its ability to model various relation patterns, including symmetry/antisymmetry, inversion and composition. We give the formal definitions of these relation patterns.
GeomE can infer and model the relation patterns defined above by taking advantage of the flexibility and representational power of geometric algebras and the geometric product.
(Anti-)symmetry: By utilizing the conjugation of the tail-entity embeddings, our framework can model (anti-)symmetry patterns. The symmetry property of GeomE2D and GeomE3D can be proved by enforcing the coefficients of the vector and bivector parts of the relation embeddings to be zero; conversely, their scoring functions are asymmetric in the relation when those coefficients are nonzero. For GeomE2D, the difference between the scores of (h, r, t) and (t, r, h) is equal to
φ(h, r, t) − φ(t, r, h) = 2⟨r_1, h_1 ∘ t_0 − h_0 ∘ t_1 + h_12 ∘ t_2 − h_2 ∘ t_12⟩ + 2⟨r_2, h_2 ∘ t_0 − h_0 ∘ t_2 + h_1 ∘ t_12 − h_12 ∘ t_1⟩ + 2⟨r_12, h_0 ∘ t_12 − h_12 ∘ t_0 + h_2 ∘ t_1 − h_1 ∘ t_2⟩.
This difference is equal to zero when r_1, r_2, r_12 = 0. Embeddings of multiple symmetric relations can still express their different semantics, since their scalar parts may differ. Inversion: For a pair of inverse relations r and r′, the scores of (h, r, t) and (t, r′, h) are equal when M_r′ = M̄_r. Concretely, the difference between the scores of (h, r, t) and (t, r′, h) is equal to
φ(h, r, t) − φ(t, r′, h) = ⟨r_0 − r′_0, h_0 ∘ t_0 − h_1 ∘ t_1 − h_2 ∘ t_2 + h_12 ∘ t_12⟩ + ⟨r_1 + r′_1, h_1 ∘ t_0 − h_0 ∘ t_1 + h_12 ∘ t_2 − h_2 ∘ t_12⟩ + ⟨r_2 + r′_2, h_2 ∘ t_0 − h_0 ∘ t_2 + h_1 ∘ t_12 − h_12 ∘ t_1⟩ + ⟨r_12 + r′_12, h_0 ∘ t_12 − h_12 ∘ t_0 + h_2 ∘ t_1 − h_1 ∘ t_2⟩.
This difference is equal to zero when r_0 = r′_0, r_1 = −r′_1, r_2 = −r′_2, r_12 = −r′_12.
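These (anti-)symmetry conditions can be checked numerically: the sketch below evaluates the score difference φ(h, r, t) − φ(t, r, h) for GeomE2D directly and against a closed form we derived by expanding the geometric products (our own illustration, with arbitrary test values):

```python
def gp2(a, b):
    """Geometric product of two 2-grade multivectors (a0, a1, a2, a12)."""
    a0, a1, a2, a12 = a
    b0, b1, b2, b12 = b
    return (a0*b0 + a1*b1 + a2*b2 - a12*b12,
            a0*b1 + a1*b0 - a2*b12 + a12*b2,
            a0*b2 + a2*b0 + a1*b12 - a12*b1,
            a0*b12 + a12*b0 + a1*b2 - a2*b1)

def conj2(m):
    a0, a1, a2, a12 = m
    return (a0, -a1, -a2, -a12)

def score(h, r, t):
    """Scalar part of h (x) r (x) conj(t)."""
    return gp2(gp2(h, r), conj2(t))[0]

h = (0.2, -0.5, 0.7, 0.3)
r = (0.9, 0.4, -0.6, 0.1)
t = (-0.3, 0.8, 0.1, -0.2)
h0, h1, h2, h12 = h
r0, r1, r2, r12 = r
t0, t1, t2, t12 = t

diff = score(h, r, t) - score(t, r, h)
closed_form = (2 * r1 * (h1*t0 - h0*t1 + h12*t2 - h2*t12)
               + 2 * r2 * (h2*t0 - h0*t2 + h1*t12 - h12*t1)
               + 2 * r12 * (h0*t12 - h12*t0 + h2*t1 - h1*t2))
assert abs(diff - closed_form) < 1e-12
# ...and the difference vanishes for a purely scalar relation embedding.
r_sym = (r0, 0.0, 0.0, 0.0)
assert abs(score(h, r_sym, t) - score(t, r_sym, h)) < 1e-12
```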
Composition: GeomE can also model composition patterns by introducing some constraints on embeddings. The detailed proof can be found in Appendix E.

Experimental Setup
Datasets We use four widely used KG benchmarks to evaluate our proposed models: FB15K, WN18, FB15K-237 and WN18RR. FB15K and WN18 were introduced in (Bordes et al., 2013); the former is extracted from Freebase (Bollacker et al., 2008), and the latter is a subsample of WordNet (Miller, 1995). Toutanova et al. (2015) first observed that WN18 and FB15K suffer from test leakage through inverse relations, i.e., many test triples can be obtained simply by inverting triples in the training set. To address this issue, Toutanova et al. (2015) generated FB15K-237 by removing inverse relations from FB15K; likewise, Dettmers et al. (2018) generated WN18RR from WN18. The recent literature shows that FB15K-237 and WN18RR are harder to fit and thus more challenging for new KG embedding models. The dataset statistics are listed in Table 6 in Appendix F.
Evaluation Protocols Link prediction aims to complete a fact with a missing entity. Given a test triple (h, r, t), we corrupt it by replacing h or t with every possible entity, sort all corrupted triples by their scores and compute the rank of the test triple. Three evaluation metrics are used: Mean Rank (MR), Mean Reciprocal Rank (MRR) and Hits@k. We also apply the filtered setting proposed in (Bordes et al., 2013), which excludes corrupted triples that already appear in the dataset from the ranking.
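The filtered ranking protocol and the metrics can be sketched as follows (our own illustration with a toy scoring table; names like `filtered_rank` are ours):

```python
def filtered_rank(score, test_triple, entities, known):
    """Rank of the true tail among all corruptions, skipping other known triples."""
    h, r, t = test_triple
    target = score[(h, r, t)]
    rank = 1
    for e in entities:
        corrupted = (h, r, e)
        if e != t and corrupted not in known and score[corrupted] > target:
            rank += 1
    return rank

def mrr(ranks):
    """Mean Reciprocal Rank."""
    return sum(1.0 / rk for rk in ranks) / len(ranks)

def hits_at(ranks, k):
    """Fraction of test triples ranked within the top k."""
    return sum(rk <= k for rk in ranks) / len(ranks)

entities = ["a", "b", "c", "d"]
score = {("a", "likes", e): s for e, s in zip(entities, [0.1, 0.9, 0.5, 0.3])}
known = {("a", "likes", "b")}  # a training triple, filtered out during ranking
# True tail "c" scores 0.5; "b" (0.9) is filtered, so no unfiltered entity beats it.
assert filtered_rank(score, ("a", "likes", "c"), entities, known) == 1
assert mrr([1, 2]) == 0.75
assert hits_at([1, 2, 11], 10) == 2 / 3
```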

Experimental Results
Link prediction: Results on the four datasets are shown in Tables 2 and 3, where the best results are written in bold. GeomE3D and GeomE2D, as single models, surpass the other baselines on FB15K with regard to all metrics, and achieve state-of-the-art results on WN18 except for Hits@10 and MR. On FB15K-237 and WN18RR, where the local information is less salient, the results of GeomE3D and GeomE2D are close to those of QuatE.
The effect of the grade of the multivector space: Our approach can be generalized to geometric algebras G^N of different grades N. In this paper, we mainly focus on GeomE models embedded in G^2 and G^3. We do not use multivectors with higher grades N > 3, because doing so would increase the training time and memory consumption of GeomE models, while the results of GeomE3D and GeomE2D on the four benchmarks are already close. On the other hand, we also test GeomE1D, where each multivector consists of a scalar plus a vector, and find that its results drop, since 1-grade multivectors lose some algebraic properties once the bivectors, whose squares are −1, are removed.
The effect of the embedding dimensionality: Figure 2 shows the link prediction results of GeomE2D models with different embedding dimensionalities k ∈ {20, 50, 100, 200, 500, 1000} on FB15K-237 and WN18RR with regard to MRR and Hits@10. The performance of GeomE2D improves as the embedding dimensionality increases. We follow previous work (Zhang et al., 2019; Sun et al., 2019) in setting the maximum dimensionality to 1000 in order to avoid excessive memory and time consumption. It would still be interesting to explore GeomE models with higher-dimensional embeddings; e.g., Ebisu and Ichise (2018) use 10000-dimensional embeddings for TorusE.
Figure 3: Visualization of the embeddings of symmetric and inverse relations. 100-dimensional embeddings are reshaped into 10 × 10 matrices for better presentation.
Modeling symmetry and inversion: In FB15K, sibling relationship is a typical symmetric relation. By constraining φ(h, sibling relationship, t) ≈ φ(t, sibling relationship, h) during training, we find that the vector and bivector parts of its embedding learned by a 100-dimensional GeomE2D are close to zero, as shown in Figure 3. For the pair of inverse relations film/film format and film format/film in FB15K, their embeddings become mutually conjugate under the constraint φ(h, film/film format, t) ≈ φ(t, film format/film, h). These results support our arguments in Section 4.4 and empirically demonstrate GeomE's ability to model symmetric and inverse relations.

Conclusion
We propose a new geometric algebra-based approach for KG embedding, GeomE, which utilizes multivector representations and the geometric product to model entities and relations in a KG. Our approach subsumes several state-of-the-art KG embedding models, and takes advantage of the flexibility and representational power of geometric algebras to enhance its generalization capacity, enrich its expressiveness with a higher degree of freedom, and enable it to model various relation patterns. Experimental results show that our approach achieves state-of-the-art results on four well-known benchmarks.

Appendix B The Geometric Product of 3-grade Multivectors
The product of two 3-grade multivectors M a = a 0 +a 1 e 1 +a 2 e 2 +a 3 e 3 +a 12 e 1 e 2 +a 23 e 2 e 3 +a 13 e 1 e 3 + a 123 e 1 e 2 e 3 and M b = b 0 + b 1 e 1 + b 2 e 2 + b 3 e 3 + b 12 e 1 e 2 + b 23 e 2 e 3 + b 13 e 1 e 3 + b 123 e 1 e 2 e 3 from G 3 is represented as follows.
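The full 64-term expansion is lengthy, but it can be generated mechanically from basis-blade multiplication. The sketch below (our own illustration, not from the paper) builds the geometric product for arbitrary G^n multivectors from a blade-level multiply and cross-checks it against the closed-form G^2 product given in Section 3:

```python
def blade_mul(b1, b2):
    """Multiply two basis blades, given as sorted tuples of generator indices."""
    s = list(b1) + list(b2)
    sign = 1
    # Bubble-sort into canonical order; each swap of distinct generators flips the sign.
    for i in range(len(s)):
        for j in range(len(s) - 1 - i):
            if s[j] > s[j + 1]:
                s[j], s[j + 1] = s[j + 1], s[j]
                sign = -sign
    # Contract equal neighbours using e_i e_i = +1.
    out = []
    for idx in s:
        if out and out[-1] == idx:
            out.pop()
        else:
            out.append(idx)
    return sign, tuple(out)

def gp(a, b):
    """Geometric product of two multivectors given as {blade: coefficient} dicts."""
    prod = {}
    for b1, c1 in a.items():
        for b2, c2 in b.items():
            sign, blade = blade_mul(b1, b2)
            prod[blade] = prod.get(blade, 0.0) + sign * c1 * c2
    return prod

# Cross-check against the closed-form G^2 product from Section 3.
a0, a1, a2, a12 = 1.0, 2.0, 3.0, 4.0
b0, b1, b2, b12 = 0.5, -1.0, 2.0, 1.5
Ma = {(): a0, (1,): a1, (2,): a2, (1, 2): a12}
Mb = {(): b0, (1,): b1, (2,): b2, (1, 2): b12}
p = gp(Ma, Mb)
assert abs(p[()] - (a0*b0 + a1*b1 + a2*b2 - a12*b12)) < 1e-12
assert abs(p[(1,)] - (a0*b1 + a1*b0 - a2*b12 + a12*b2)) < 1e-12
assert abs(p[(2,)] - (a0*b2 + a2*b0 + a1*b12 - a12*b1)) < 1e-12
assert abs(p[(1, 2)] - (a0*b12 + a12*b0 + a1*b2 - a2*b1)) < 1e-12
```

The same `gp` function produces the G^3 product by simply passing dicts whose blades use three generators.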
Appendix E Modeling Various Relation Patterns
(Anti-)Symmetry: By utilizing the conjugation of the tail-entity embeddings, our framework can model both patterns. Concretely, considering Equations 18 and 19, we show that GeomE models the symmetric pattern by φ(h, r, t) − φ(t, r, h) = 0. Assume that the matrix Mat(M_r_i) is a Householder matrix. It therefore has the two eigenvalues {−1, 1}, and the eigenvectors corresponding to −1 are orthogonal to those corresponding to 1. There are two conditions:
• Both vec(M_t_i) and vec(M_h_i) are (orthogonal) eigenvectors corresponding to different eigenvalues.
• For either vec(M_h_i) or vec(M_t_i), the elements (the coefficients of the vector parts in this case) corresponding to the eigenvalue −1 are zero, in which case the difference equals zero.
The above conditions hold for each multivector element M_i of the k-dimensional embeddings M, i.e., i = 1, . . . , k. Therefore, there are 2^k possible options (the capacity of the model) to obtain φ(h, r, t) − φ(t, r, h) = 0 (modeling the symmetric pattern). Inversion: Given two relations r_1, r_2 which form an inverse pattern, i.e., r_1 = r_2^{−1} (e.g., r_1 = SonOf, r_2 = FatherOf), we show that GeomE models the inverse pattern by φ(h, r_1, t) − φ(t, r_2, h) = 0. Assume that the matrices Mat(M_r_1i) and Mat(M_r_2i) have the same eigenvalues, λ_1i = λ_2i. Since an n-grade multivector has 2^n coefficients, the corresponding matrices Mat(M_r_1i) and Mat(M_r_2i) are 2^n × 2^n dimensional, and a 2^n × 2^n matrix has at most 2^n distinct