Are Girls Neko or Shōjo? Cross-Lingual Alignment of Non-Isomorphic Embeddings with Iterative Normalization

Cross-lingual word embeddings (CLWE) underlie many multilingual natural language processing systems, often through orthogonal transformations of pre-trained monolingual embeddings. However, orthogonal mapping only works on language pairs whose embeddings are naturally isomorphic. For non-isomorphic pairs, our method (Iterative Normalization) transforms monolingual embeddings to make orthogonal alignment easier by simultaneously enforcing that (1) individual word vectors are unit length, and (2) each language’s average vector is zero. Iterative Normalization consistently improves word translation accuracy of three CLWE methods, with the largest improvement observed on English-Japanese (from 2% to 44% test accuracy).


Orthogonal Cross-Lingual Mappings
Cross-lingual word embedding (CLWE) models map words from multiple languages to a shared vector space, where words with similar meanings are close regardless of language. CLWE is widely used in multilingual natural language processing (Klementiev et al., 2012; Guo et al., 2015; Zhang et al., 2016). Recent CLWE methods (Ruder et al., 2017; Glavas et al., 2019) independently train two monolingual embeddings on large monolingual corpora and then align them with a linear transformation. Previous work argues that these transformations should be orthogonal (Xing et al., 2015; Smith et al., 2017; Artetxe et al., 2016): for any two words, the dot product of their representations is unchanged by the transformation. This preserves the similarities and substructure of the original monolingual word embeddings while enriching the embeddings with multilingual connections between languages.
While recent work challenges the orthogonality assumption (Doval et al., 2018; Joulin et al., 2018; Jawanpuria et al., 2019), we focus on whether simple preprocessing techniques can improve the suitability of orthogonal models. Our iterative method normalizes monolingual embeddings to make their structures more similar (Figure 1), which improves subsequent alignment.
Our method is motivated by two desired properties of monolingual embeddings that support orthogonal alignment:
1. Every word vector has the same length.
2. Each language's mean vector has the same length.
Standard preprocessing such as dimension-wise mean centering and length normalization (Artetxe et al., 2016) does not meet the two requirements at the same time. Our analysis leads to Iterative Normalization, an alternating projection algorithm that normalizes any word embedding to provably satisfy both conditions (we prove convergence in Appendix A). After normalizing the monolingual embeddings, we then apply mapping-based CLWE algorithms to the transformed embeddings.

Figure 1: The most similar Japanese words for 少女 (shōjo "girl") and English words for "girl", measured by cosine similarity on Wikipedia fastText vectors, before (left) and after (right) Iterative Normalization. In the original embedding spaces, "boy" is the nearest neighbor in both languages but with very different cosine similarities, and "cat" in English is not close to "girl": both violate the isomorphism assumed by an orthogonal transformation for cross-lingual representations. Iterative Normalization replaces 猫 (neko "cat") with the more relevant 美少女 (bishōjo "pretty girl") and brings the cosine similarities closer.
We empirically validate our theory by combining Iterative Normalization with three mapping-based CLWE methods. Iterative Normalization improves word translation accuracy on a dictionary induction benchmark across thirty-nine language pairs.

Learning Orthogonal Mappings
This section reviews learning an orthogonal cross-lingual mapping between word embeddings and, along the way, introduces our notation.
We start with pre-trained word embeddings in a source language and a target language. We assume all embeddings are d-dimensional and that the two languages have the same vocabulary size n (word translation benchmarks use the same assumptions). Let X ∈ ℝ^{d×n} be the word embedding matrix for the source language, where each column x_i ∈ ℝ^d is the representation of the i-th word of the source language, and let Z ∈ ℝ^{d×n} be the word embedding matrix for the target language. Our goal is
to learn a transformation matrix W ∈ ℝ^{d×d} that maps the source language vectors to the target language space. While our experiments focus on the supervised case with a seed dictionary D of translation pairs (i, j), the analysis also applies to unsupervised projection.
One straightforward way to learn W is by minimizing the Euclidean distance between translation pairs (Mikolov et al., 2013a). Formally, we solve

min_W Σ_{(i,j)∈D} ‖Wx_i − z_j‖₂².  (1)

Xing et al. (2015) further restrict W to orthogonal transformations, i.e., W^⊤W = I. The orthogonal constraint significantly improves word translation accuracy (Artetxe et al., 2016). However, this method still fails on some language pairs because word embeddings are not isomorphic across languages. To improve orthogonal alignment between non-isomorphic embedding spaces, we aim to transform the monolingual embeddings in a way that helps orthogonal transformation.
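Under the orthogonal constraint, Equation (1) has a closed-form solution via the singular value decomposition (Schönemann, 1966). A minimal NumPy sketch, assuming the columns of X and Z have already been paired up by the seed dictionary:

```python
import numpy as np

def orthogonal_procrustes(X, Z):
    """Orthogonal W minimizing sum_i ||W x_i - z_i||_2^2 (Equation 1),
    where column i of X and column i of Z are a translation pair."""
    U, _, Vt = np.linalg.svd(Z @ X.T)  # SVD of the cross-covariance matrix
    return U @ Vt                      # orthogonal by construction
```

Because W is orthogonal, it preserves dot products and vector lengths, which is exactly why the invariance conditions discussed next matter.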
When are two embedding spaces easily aligned? A good orthogonal mapping is more likely if word vectors have two properties: length-invariance and center-invariance.
Length-Invariance. First, all word vectors should have the same, constant length. Length-invariance resolves inconsistencies between the monolingual word embedding and cross-lingual mapping objectives (Xing et al., 2015). During training, popular word embedding algorithms (Mikolov et al., 2013b; Pennington et al., 2014; Bojanowski et al., 2017) maximize dot products between similar words but are evaluated on cosine similarity. To make things worse, the transformation matrix minimizes a third metric, Euclidean distance (Equation 1). This inconsistency is naturally resolved when the lengths of word vectors are fixed. Suppose u ∈ ℝ^d and v ∈ ℝ^d both have length c. Then

‖u − v‖₂² = ‖u‖₂² + ‖v‖₂² − 2u^⊤v = 2c² − 2c² cos(u, v),

so minimizing Euclidean distance is equivalent to maximizing both the dot product and the cosine similarity when word vector lengths are constant, making the objectives consistent. Length-invariance also satisfies a prerequisite for bilingual orthogonal alignment: the embeddings of translation pairs should have the same length. If a source word vector x_i can be aligned to its target language translation z_j = Wx_i with an orthogonal matrix W, then

‖z_j‖₂ = ‖Wx_i‖₂ = ‖x_i‖₂,  (2)

where the second equality follows from the orthogonality of W. Equation (2) is trivially satisfied if all vectors have the same length. In summary, length-invariance not only promotes consistency between the monolingual word embedding and cross-lingual mapping objectives but also simplifies translation pair alignment.
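The equivalence above is easy to check numerically; for unit-length vectors, squared Euclidean distance is an affine function of both the dot product and the cosine similarity:

```python
import numpy as np

rng = np.random.default_rng(1)
u, v = rng.normal(size=5), rng.normal(size=5)
u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)  # enforce length-invariance

# ||u - v||^2 = 2 - 2 u.v = 2 - 2 cos(u, v) when ||u|| = ||v|| = 1
assert np.isclose(np.linalg.norm(u - v) ** 2, 2 - 2 * u @ v)
```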
Center-Invariance. Our second condition is that the mean vectors of different languages should have the same length, which we prove is a prerequisite for orthogonal alignment. Suppose two embedding matrices X and Z can be aligned with an orthogonal matrix W such that Z = WX. Let x̄ = (1/n) Σᵢ x_i and z̄ = (1/n) Σᵢ z_i be the mean vectors. Then z̄ = Wx̄. Since W is orthogonal,

‖z̄‖₂ = ‖Wx̄‖₂ = ‖x̄‖₂.

In other words, an orthogonal mapping can only align embedding spaces whose centers have equal magnitude.
A stronger version of center-invariance is zero-mean, where the mean vector of each language is zero. Artetxe et al. (2016) find that centering improves dictionary induction; our analysis provides an explanation.

Iterative Normalization
We now develop Iterative Normalization, which transforms monolingual word embeddings to satisfy both length-invariance and center-invariance. Specifically, we normalize word embeddings to simultaneously have unit length and zero mean. Formally, we produce an embedding matrix X such that

‖x_i‖₂ = 1 for all i,  (3)

and

(1/n) Σᵢ₌₁ⁿ x_i = 0.  (4)

Iterative Normalization transforms the embeddings to satisfy both constraints at the same time. Let x_i^(0) be the initial embedding for word i. We assume that all word embeddings are non-zero. For every word i, we iteratively transform each word vector x_i by first making the vectors unit length,

y_i^(k) = x_i^(k−1) / ‖x_i^(k−1)‖₂,  (5)

and then making them mean zero,

x_i^(k) = y_i^(k) − (1/n) Σⱼ₌₁ⁿ y_j^(k).  (6)

Equations (5) and (6) project the embedding matrix X onto the sets of embeddings that satisfy Equations (3) and (4), respectively. Therefore, our method is a form of alternating projection (Bauschke and Borwein, 1996), an algorithm that finds a point in the intersection of two closed sets by alternately projecting onto one of the two sets. Alternating projection converges to a point in the intersection of two convex sets at a linear rate (Gubin et al., 1967; Bauschke and Borwein, 1993). Unfortunately, the unit-length constraint is non-convex, ruling out the classic convergence proof. Nonetheless, we use recent results on alternating non-convex projections (Zhu and Li, 2018) to prove convergence (details in Appendix A): if x_i^(k) ≠ 0 for all i and k, then the sequence X^(k) produced by Iterative Normalization is convergent.
All embeddings in our experiments satisfy the non-zero assumption; it is violated only when all words have the same embedding. In degenerate cases, the algorithm might converge to a solution that does not meet the two requirements, but empirically, our method always satisfies both constraints.
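The updates in Equations (5) and (6) amount to a few lines of matrix code. A sketch in NumPy, where column i of X is the vector for word i:

```python
import numpy as np

def iterative_normalization(X, rounds=5):
    """Alternately project the columns of X onto the unit-length set
    (Equation 5) and the zero-mean set (Equation 6)."""
    X = np.array(X, dtype=np.float64)
    for _ in range(rounds):
        X /= np.linalg.norm(X, axis=0, keepdims=True)  # unit length
        X -= X.mean(axis=1, keepdims=True)             # zero mean
    return X
```

After a handful of rounds the columns are exactly zero-mean (the last step centers) and unit-length up to a small residual that shrinks as the iterates converge.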
Previous approach and differences. Artetxe et al. (2016) also study the unit-length and zero-mean constraints, but our work differs in two aspects. First, they motivate the zero-mean condition with the heuristic argument that two randomly selected word types should not be semantically similar (or dissimilar) in expectation. While this statement is attractive at first blush, some word types have more synonyms than others, so we argue that word types might not be evenly distributed in the semantic space. We instead show that zero-mean is helpful because it satisfies center-invariance, a necessary condition for orthogonal mappings. Second, Artetxe et al. (2016) attempt to enforce the two constraints by a single round of dimension-wise mean centering and length normalization. Unfortunately, this often fails to meet both constraints at the same time: length normalization can change the mean, and mean centering can change vector lengths. In contrast, Iterative Normalization simultaneously meets both constraints and is empirically better on dictionary induction (Table 1).

Dictionary Induction Experiments
On a dictionary induction benchmark, we combine Iterative Normalization with three CLWE methods and show improvement in word translation accuracy across languages.

Dataset and Methods
We train and evaluate CLWE on MUSE dictionaries (Conneau et al., 2018) with the default split. We align English embeddings to thirty-nine target language embeddings, pre-trained on Wikipedia with fastText (Bojanowski et al., 2017). The alignment matrices are trained from dictionaries of 5,000 source words. We report top-1 word translation accuracy for 1,500 source words, using cross-domain similarity local scaling (CSLS; Conneau et al., 2018). We experiment with the following CLWE methods.

Procrustes Analysis. Our first algorithm uses Procrustes analysis (Schönemann, 1966) to find the orthogonal transformation that minimizes Equation (1), the total distance between translation pairs.

Post-hoc Refinement. Orthogonal mappings can be improved with refinement steps (Artetxe et al., 2017; Conneau et al., 2018). After learning an initial mapping W_0 from the seed dictionary D, we build a synthetic dictionary D_1 by translating each word with W_0. We then use the new dictionary D_1 to learn a new mapping W_1 and repeat the process.
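The refinement loop can be sketched as follows. In this toy version, plain nearest-neighbor translation under cosine similarity stands in for the more careful dictionary induction used in practice, and an orthogonal Procrustes re-fit replaces the full training step:

```python
import numpy as np

def procrustes_fit(X, Z):
    """Orthogonal map minimizing Equation (1) on paired columns."""
    U, _, Vt = np.linalg.svd(Z @ X.T)
    return U @ Vt

def refine(W, X, Z, steps=5):
    """Post-hoc refinement sketch: translate each source word with the
    current map, then re-fit an orthogonal map on the induced pairs."""
    for _ in range(steps):
        idx = np.argmax((W @ X).T @ Z, axis=1)  # synthetic dictionary D_k
        W = procrustes_fit(X, Z[:, idx])        # new mapping W_k
    return W
```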
Relaxed CSLS Loss (RCSLS). Joulin et al. (2018) optimize CSLS scores between translation pairs instead of Equation (1). RCSLS has state-of-the-art supervised word translation accuracy on MUSE (Glavas et al., 2019). For ease of optimization, RCSLS does not enforce the orthogonal constraint. Nevertheless, Iterative Normalization also improves its accuracy (Table 1), showing that it can help linear non-orthogonal mappings too.

Training Details
We use the implementation from MUSE for Procrustes analysis and refinement (Conneau et al., 2018), with five refinement steps. For RCSLS, we use the same hyperparameter selection strategy as Joulin et al. (2018): we choose the learning rate from {1, 10, 25, 50} and the number of epochs from {10, 20} by validation. As recommended by Joulin et al. (2018), we turn off the spectral constraint. We use ten nearest neighbors when computing CSLS.
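For reference, CSLS rescales cosine similarity by each word's average similarity to its k nearest neighbors in the other language, which penalizes hub words. A sketch, assuming the rows of S (mapped source) and T (target) are unit-length word vectors:

```python
import numpy as np

def csls(S, T, k=10):
    """CSLS(x, z) = 2 cos(x, z) - r_T(x) - r_S(z), where r is the mean
    cosine similarity to the k nearest neighbors in the other language."""
    sims = S @ T.T                                       # cosine matrix
    r_src = np.sort(sims, axis=1)[:, -k:].mean(axis=1)   # r_T(x), per source row
    r_tgt = np.sort(sims, axis=0)[-k:, :].mean(axis=0)   # r_S(z), per target column
    return 2 * sims - r_src[:, None] - r_tgt[None, :]
```

Translations are then retrieved as the argmax over CSLS scores rather than raw cosine similarities.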

Translation Accuracy
For each method, we compare three normalization strategies: (1) no normalization, (2) dimension-wise mean centering followed by length normalization (Artetxe et al., 2016), and (3) five rounds of Iterative Normalization. Table 1 shows word translation accuracies on seven selected target languages. Results on other languages are in Appendix B.
As our theory predicts, Iterative Normalization increases translation accuracy for Procrustes analysis (with and without refinement) across languages. While centering plus length normalization also helps, the improvement is smaller, confirming that one round of normalization is insufficient. The largest margin is on English-Japanese, where Iterative Normalization increases test accuracy by more than 40%. Figure 1 shows an example of how Iterative Normalization makes the substructure of an English-Japanese translation pair more similar.
Surprisingly, normalization is even more important for RCSLS, a CLWE method without the orthogonal constraint. RCSLS combined with Iterative Normalization has state-of-the-art accuracy, but RCSLS is much worse than Procrustes analysis on unnormalized embeddings, suggesting that length-invariance and center-invariance also help when learning linear non-orthogonal mappings.

Monolingual Word Similarity
Many trivial solutions satisfy both length-invariance and center-invariance; e.g., we can map half of the words to e and the rest to −e, where e is any unit-length vector. A meaningful transformation should also preserve useful structure in the original embeddings. We confirm that Iterative Normalization does not hurt scores on English word similarity benchmarks (Table 2), showing that it produces meaningful representations.

Conclusion
We identify two conditions that make cross-lingual orthogonal mapping easier, length-invariance and center-invariance, and provide a simple algorithm that transforms monolingual embeddings to satisfy both conditions. Our method improves the word translation accuracy of different mapping-based CLWE algorithms across languages. In the future, we will investigate whether our method helps other downstream tasks.

A Proof for Theorem 1
Our convergence analysis is based on a recent result on alternating non-convex projections. Theorem 1 of Zhu and Li (2018) states that alternating projection converges even if the constraint sets are non-convex, as long as the two constraint sets satisfy the following assumption:

Assumption 1. Let X and Y be two closed semi-algebraic sets, and let {(x_k, y_k)} be the sequence of iterates generated by the alternating projection method (e.g., Iterative Normalization). Assume the sequence {(x_k, y_k)} is bounded and the sets X and Y obey the following properties:

(i) three-point property of Y: there exists a non-negative function δ_α : Y × Y → ℝ with α > 0 such that for any k ≥ 1 and any y ∈ Y, we have

δ_α(y, y_k) ≥ α‖y − y_k‖₂²  and  δ_α(y, y_k) + ‖y_k − x_k‖₂² ≤ ‖y − x_k‖₂²;

(ii) local contraction property of X: there exist ε > 0 and β > 0 such that when ‖y_k − y_{k−1}‖₂ ≤ ε, we have

‖P_X(y_k) − P_X(y_{k−1})‖₂ ≤ β‖y_k − y_{k−1}‖₂,

where P_X is the projection onto X.

Zhu and Li (2018) only consider sets of vectors, but our constraints are sets of matrices. For ease of exposition, we treat every embedding matrix X ∈ ℝ^{d×n} as a vector by concatenating its column vectors. The ℓ₂-norm ‖X‖₂ of the concatenated vector is equivalent to the Frobenius norm ‖X‖_F of the original matrix.
The two operations in Iterative Normalization, Equations (5) and (6), are projections onto two constraint sets: the unit-length set Y = {X ∈ ℝ^{d×n} : ‖x_i‖₂ = 1 for all i} and the zero-mean set X = {X ∈ ℝ^{d×n} : Σᵢ₌₁ⁿ x_i = 0}. To prove convergence of Iterative Normalization, we show that Y satisfies the three-point property and X satisfies the local contraction property.
Three-point property of Y. For any Y ∈ Y and X ∈ ℝ^{d×n}, let Y' be the projection of X onto the constraint set Y with Equation (5). The columns of Y and Y' have the same (unit) length, so we have

‖Y − X‖₂² − ‖Y' − X‖₂² = Σᵢ₌₁ⁿ (2y'ᵢ^⊤xᵢ − 2yᵢ^⊤xᵢ).  (7)

Since Y' is the projection of X onto the unit-length set with Equation (5), i.e., y'ᵢ = xᵢ/‖xᵢ‖₂, we can rewrite Equation (7) as

Σᵢ₌₁ⁿ ‖xᵢ‖₂ (2y'ᵢ^⊤y'ᵢ − 2yᵢ^⊤y'ᵢ).  (8)

All columns of Y and Y' are unit-length, so we can further rewrite Equation (8) as

Σᵢ₌₁ⁿ ‖xᵢ‖₂ ‖yᵢ − y'ᵢ‖₂².  (9)
Let l = minᵢ ‖xᵢ‖₂ be the minimum length of the columns in X. We have the inequality

‖Y − X‖₂² − ‖Y' − X‖₂² ≥ l ‖Y − Y'‖₂².

From our non-zero assumption, the minimum column length l is always positive. Let l_k be the minimum column length of the embedding matrix X^(k) after the k-th iteration. It follows that Y satisfies the three-point property with α = min_k l_k and δ_α(Y, Y') = α‖Y − Y'‖₂².

Local contraction property of X. The zero-mean constraint set X is convex and closed: if two matrices X and Y both have zero mean, their linear interpolation λX + (1 − λ)Y must also have zero mean for any 0 < λ < 1. Projections onto closed convex sets in a Hilbert space are contractive (Browder, 1967), and therefore X satisfies the local contraction property with any positive ε and β = 1.
In summary, the two constraint sets that Iterative Normalization projects onto satisfy Assumption 1. Therefore, Iterative Normalization converges, following the analysis of Zhu and Li (2018).

B Results on All Languages
Table 3 shows word translation accuracies on all target languages. Iterative Normalization improves accuracy on all languages.

Table 1: Word translation accuracy when aligning English embeddings to seven languages. We combine three normalizations, no normalization (None), mean centering plus length normalization (C+L), and five rounds of Iterative Normalization (IN), with three CLWE methods: Procrustes, Procrustes with refinement (Conneau et al., 2018), and RCSLS (Joulin et al., 2018). Procrustes with C+L is equivalent to Artetxe et al. (2016). The best result for each CLWE method in each column is in bold. Iterative Normalization has the best accuracy of the three normalization techniques.

Table 2: Correlations before and after applying Iterative Normalization on four English word similarity benchmarks: WS-353 (Finkelstein et al., 2002), MC (Miller and Charles, 1991), RG (Rubenstein and Goodenough, 1965), and YP-130 (Yang and Powers, 2006). The scores are similar, which shows that Iterative Normalization retains useful structures from the original embeddings.