Federated Learning of N-Gram Language Models

We propose algorithms to train production-quality n-gram language models using federated learning. Federated learning is a distributed computation platform that can be used to train global models for portable devices such as smart phones. Federated learning is especially relevant for applications handling privacy-sensitive data, such as virtual keyboards, because training is performed without the users’ data ever leaving their devices. While the principles of federated learning are fairly generic, its methodology assumes that the underlying models are neural networks. However, virtual keyboards are typically powered by n-gram language models for latency reasons. We propose to train a recurrent neural network language model using the decentralized FederatedAveraging algorithm and to approximate this federated model server-side with an n-gram model that can be deployed to devices for fast inference. Our technical contributions include ways of handling large vocabularies, algorithms to correct capitalization errors in user data, and efficient finite state transducer algorithms to convert word language models to word-piece language models and vice versa. The n-gram language models trained with federated learning are compared to n-grams trained with traditional server-based algorithms using A/B tests on tens of millions of users of a virtual keyboard. Results are presented for two languages, American English and Brazilian Portuguese. This work demonstrates that high-quality n-gram language models can be trained directly on client mobile devices without sensitive training data ever leaving the devices.

Introduction

Virtual keyboard applications
Virtual keyboards for mobile devices provide a host of functionalities, from decoding noisy spatial signals from tap and glide typing inputs to providing auto-corrections, word completions, and next-word predictions. These features must fit within tight RAM and CPU budgets and operate under strict latency constraints: a key press should result in visible feedback within about 20 milliseconds (Alsharif et al., 2015). Weighted finite-state transducers have been used successfully to decode keyboard spatial signals using a combination of spatial and language models (Hellsten et al., 2017). Figure 1 shows the glide trails of two spatially similar words. Because of the similarity of the two trails, the decoder must rely on the language model to discriminate between viable candidates. For memory and latency reasons, especially on low-end devices, the language models are typically based on n-grams and do not exceed ten megabytes.

Figure 2: An illustration of the federated learning process from McMahan et al. (2017): (A) client devices compute SGD updates on locally-stored data, (B) a server aggregates the client updates to build a new global model, (C) the new model is sent back to clients, and the process is repeated.

A language model (LM) is a probabilistic model over words. Given previous words x_1, x_2, ..., x_{m−1}, an LM assigns a probability to the next word, i.e. p(x_m | x_{m−1}, ..., x_1). An n-gram LM is a Markovian distribution of order n − 1:

p(x_m | x_{m−1}, ..., x_1) = p(x_m | x_{m−1}, ..., x_{m−n+1}),
where n is the order of the n-gram. For computation and memory efficiency, keyboard LMs typically have higher-order n-grams over a subset of the vocabulary, e.g. the most frequent 64K words, and the rest of the vocabulary only has unigrams. We consider n-gram LMs that do not exceed 1.5M n-grams and include fewer than 200K unigrams.
N-gram models are traditionally trained by applying a smoothing method to n-gram counts from a training corpus (Chen and Goodman, 1999). The highest quality n-gram models are trained over data that are well-matched to the desired output (Moore and Lewis, 2010). For virtual keyboards, training over users' typed text would lead to the best results. Of course, such data are very personal and need to be handled with care.
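To make the traditional pipeline concrete, here is a minimal sketch of training a smoothed bigram LM from a corpus. Add-k smoothing is used purely for brevity; production models would use a method such as Katz or Kneser-Ney smoothing (Chen and Goodman, 1999), and the toy corpus below is invented for illustration.

```python
from collections import Counter

def train_bigram_lm(corpus, k=0.5):
    """Train an add-k smoothed bigram LM from tokenized sentences."""
    unigrams, bigrams = Counter(), Counter()
    vocab = set()
    for sent in corpus:
        tokens = ["<s>"] + sent + ["</s>"]
        vocab.update(tokens)
        unigrams.update(tokens[:-1])               # context counts
        bigrams.update(zip(tokens[:-1], tokens[1:]))
    V = len(vocab)
    def prob(word, context):
        # Add-k smoothing: (c(context, word) + k) / (c(context) + k * |V|)
        return (bigrams[(context, word)] + k) / (unigrams[context] + k * V)
    return prob

corpus = [["i", "love", "new", "york"], ["i", "love", "pizza"]]
p = train_bigram_lm(corpus)
# "love" follows "i" in both sentences, so it gets most of the mass.
assert p("love", "i") > p("pizza", "i")
```

Well-matched training data shifts these counts toward what users actually type, which is exactly what the federated approach below provides without centralizing the data.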

Federated learning
We propose to leverage Federated Learning (FL) (Konečný et al., 2016a,b), a technique where machine learning models are trained in a decentralized manner on end-users' devices, so that raw data never leaves these devices. Only targeted and ephemeral parameter updates are aggregated on a centralized server. Figure 2 provides an illustration of the process. Federated learning for keyboard input was previously explored in Hard et al. (2018), in which a federated recurrent neural network (RNN) was trained for next-word prediction. However, latency constraints prevent the direct use of an RNN for decoding. To overcome this problem, we propose to derive an n-gram LM from a federated RNN LM and use that n-gram LM for decoding. Specifically, the approximation algorithm is based on SampleApprox, which was recently proposed in Suresh et al. (2019a,b). The proposed approach has several advantages:

Improved model quality: Since the RNN LM is trained directly on domain-matched user data, its predictions are more likely to match actual user behavior. In addition, as shown in Suresh et al. (2019a), an n-gram LM approximated from such an RNN LM is of higher quality than an n-gram LM trained on user data directly.

Minimum information transmission:
In FL, only the minimal information necessary for model training (the model parameter deltas) is transmitted to centralized servers. The model updates contain much less information than the complete training data.
Additional privacy-preserving techniques: FL can be further combined with privacy-preserving techniques such as secure multi-party computation (Bonawitz et al., 2017) and differential privacy (Agarwal et al., 2018; Abadi et al., 2016). By the post-processing theorem, if we train a single differentially private recurrent model and use it to approximate n-gram models, all the distilled models will also be differentially private with the same parameters (Dwork et al., 2014).
For the above reasons, we have not proposed to learn n-gram models directly using FederatedAveraging of n-gram counts for all orders.

Outline
The paper is organized along the lines of challenges associated with converting RNN LMs to n-gram LMs for virtual keyboards: the feasibility of training neural models with a large vocabulary, inconsistent capitalization in the training data, and data sparsity in morphologically rich languages. We elaborate on each of these challenges below.
Large vocabulary: Keyboard n-gram models are typically based on a carefully hand-curated vocabulary to eliminate misspellings, erroneous capitalizations, and other artifacts. The vocabulary size often numbers in the hundreds of thousands. However, training a neural model directly over the vocabulary is memory intensive, as the embedding and softmax layers require space |V| × N_e, where |V| is the vocabulary size and N_e is the embedding dimension. We propose a way to handle large vocabularies for federated models in Section 3.
Incorrect capitalization: In virtual keyboards, users often type with incorrect casing (e.g. "She lives in new york" instead of "She lives in New York"). It would be desirable to decode with the correct capitalization even though the user-typed data may be incorrect. Before the discussion of capitalization, the SampleApprox algorithm is reviewed in Section 4. We then modify SampleApprox to infer capitalization in Section 5.
Language morphology: Many words are composed of root words and various morpheme components, e.g. "crazy", "crazily", and "craziness". These linguistic features are prominent in morphologically rich languages such as Russian. The presence of a large number of morphological variants increases the vocabulary size and data sparsity ultimately making it more difficult to train neural models. Algorithms to convert between word and word-piece models are discussed in Section 6.
Finally, we compare the performance of word and word-piece models and present the results of A/B experiments on real users of a virtual keyboard in Section 7.

Unigram distributions
Among the 200K words in the vocabulary, our virtual keyboard models only use the top 64K words in the higher-order n-grams. We train the neural models only on these most frequent words and train a separate unigram model over the entire vocabulary. We interpolate the two resulting models to obtain the final model for decoding.
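The interpolation step can be sketched as a simple linear mixture of the two models. This is an illustrative sketch only: the mixture weight `alpha` and the exact combination scheme are assumptions, not the production configuration.

```python
def interpolate(p_ngram, p_unigram, alpha=0.9):
    """Linearly interpolate a higher-order n-gram model (defined only over
    the top-64K words) with a full-vocabulary unigram model.
    `alpha` is a hypothetical mixture weight."""
    def prob(word, context):
        p_hi = p_ngram(word, context)  # 0.0 if word is outside the top-64K set
        return alpha * p_hi + (1 - alpha) * p_unigram(word)
    return prob

# Toy stand-ins for the two component models.
p_ngram = lambda w, c: 0.2 if w == "york" else 0.0
p_unigram = lambda w: 0.1
p = interpolate(p_ngram, p_unigram, alpha=0.9)
# In-vocabulary word: 0.9 * 0.2 + 0.1 * 0.1 = 0.19
assert abs(p("york", "new") - 0.19) < 1e-12
```

Words outside the top-64K set still receive mass from the unigram component, which is what keeps the full 200K-word vocabulary decodable.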

Collection
Unigrams are collected via a modified version of the FederatedAveraging algorithm, in which no models are sent to client devices. Instead of returning gradients to the server, each device compiles and returns counting statistics. In our experiments, we aggregate over groups of approximately 500 devices per training round. We collect a unigram distribution U over a whitelist vocabulary as U = Σ_i w_i · C_i, where i is the index over devices, C_i are the raw unigram counts collected from device i, and w_i is a weight applied to device i.
Figure 3: Unigram distribution convergence. Note that by 3000 rounds, the unigram distribution is stable, but the model is still learning new tail unigrams.

To prevent users with large amounts of data
from dominating the unigram distribution, we apply a form of L1-clipping:

w_i = min(λ, ‖C_i‖₁) / ‖C_i‖₁,

where λ is a threshold that caps each device's contribution and ‖C_i‖₁ is the total count reported by device i. When λ = 1, L1-clipping is equivalent to equal weighting. The limit λ → ∞ is equivalent to collecting the true counts, since w_i → 1.
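The server-side aggregation with L1-clipping can be sketched as follows. The toy device counts are invented for illustration; the real system aggregates over roughly 500 devices per round.

```python
from collections import Counter

def aggregate_unigrams(device_counts, lam):
    """Aggregate per-device unigram counts with L1-clipping.

    Each device i contributes w_i * C_i, where
    w_i = min(lam, ||C_i||_1) / ||C_i||_1, so no device adds more than
    `lam` total mass. lam = 1 reduces to equal weighting; lam -> infinity
    recovers the raw counts.
    """
    total = Counter()
    for counts in device_counts:
        norm = sum(counts.values())
        if norm == 0:
            continue
        w = min(lam, norm) / norm
        for word, c in counts.items():
            total[word] += w * c
    return total

devices = [Counter({"the": 90, "cat": 10}),   # heavy typist
           Counter({"the": 1})]               # light typist
agg = aggregate_unigrams(devices, lam=1.0)
# With lam = 1, each device contributes exactly 1 unit of mass.
assert abs(sum(agg.values()) - 2.0) < 1e-9
```

Raising `lam` lets prolific devices contribute more of their raw counts, trading robustness for fidelity to the true distribution.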

Convergence
Convergence of the unigram distribution is measured using the unbiased chi-squared statistic (for simplicity, referred to as the Z-statistic) defined in Bhattacharya and Valiant (2015), the number of unique unigrams seen, and a moving average of the number of rounds needed to observe new unigrams. Figure 3(a) shows the overall distributional convergence based on the Z-statistic. At round k, unigram counts after k/2 and k rounds are compared. Figure 3(b) plots the number of whitelist vocabulary words seen and a moving average of the number of rounds containing new unigrams. New unigrams are determined by comparing a round k with all rounds through k−1 and noting if any new words are seen. The shaded bands range from the LM's unigram capacity to the size of the whitelist vocabulary.

Experiments
Since the whitelist vocabulary is uncased, capitalization normalization is applied using an approach similar to that described in Section 5. We then replace the unigram part of an n-gram model with this distribution to produce the final LM.
In A/B experiments, unigram models with different L1-clipping thresholds are compared against a baseline unigram model gathered from centralized log data. Results are presented in Table 1. Accuracy is unchanged and OOV rate is improved at λ = 1 and λ = 1K.
Review of SampleApprox

Before we discuss methods to address inconsistent capitalization and data sparsity in morphologically rich languages, we review SampleApprox. SampleApprox, proposed in Suresh et al. (2019a,b), can be used to approximate an RNN as a weighted finite automaton such as an n-gram model. A weighted finite automaton (WFA) A = (Σ, Q, E, i, F) over ℝ₊ (probabilities) is given by a finite alphabet Σ (vocabulary words), a finite set of states Q (n-gram contexts), an initial state i ∈ Q (sentence start state), a set of final states F ⊆ Q (sentence end states), and a set of labeled transitions E with associated weights that represent the conditional probability of labels (from Σ) given the state (the list of n-grams and their probabilities). WFA models allow a special backoff label ϕ for succinct representation, as follows. Let L[q] be the set of labels on transitions from state q. For x ∈ L[q], let w_q[x] be the weight of the transition of x at state q and d_q[x] be the destination state. For a label x and a state q,

p(x | q) = w_q[x] if x ∈ L[q], and p(x | q) = w_q[ϕ] · p(x | d_q[ϕ]) otherwise.

In other words, ϕ is followed if x ∉ L[q]. The definition above is consistent with that of backoff n-gram models (Chen and Goodman, 1999). Let B(q) denote the set of states from which q can be reached by a path of backoff labels and let q[x] be the first state at which label x can be read by following a backoff path from q.
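The backoff semantics above can be sketched as a recursive lookup. This toy dictionary-based WFA is an illustration only, not the paper's FST implementation, and the states and weights are invented.

```python
PHI = "<phi>"

def wfa_prob(wfa, state, label):
    """Probability of `label` from `state` in a backoff WFA.

    wfa[state] maps labels to (weight, destination); the special PHI
    label points to the backoff state, whose weight multiplies in.
    """
    arcs = wfa[state]
    if label in arcs:
        weight, _ = arcs[label]
        return weight
    # Label not present at this state: follow the backoff arc and recurse.
    backoff_weight, backoff_state = arcs[PHI]
    return backoff_weight * wfa_prob(wfa, backoff_state, label)

# Toy bigram WFA: state "new" knows "york"; everything else backs off
# to the unigram state.
wfa = {
    "new": {"york": (0.8, "york"), PHI: (0.2, "<uni>")},
    "<uni>": {"york": (0.1, "york"), "jersey": (0.05, "jersey")},
}
assert wfa_prob(wfa, "new", "york") == 0.8
assert abs(wfa_prob(wfa, "new", "jersey") - 0.2 * 0.05) < 1e-12
```

The chain of backoff states followed for an unseen label corresponds exactly to the set B(q) defined above.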
Given an unweighted finite automaton A and a neural model, SampleApprox finds the probability model on A that minimizes the Kullback-Leibler (KL) divergence between the neural model and the WFA. The algorithm has two steps: a counting step and a KL minimization step. For the counting step, let x̄(1), x̄(2), ..., x̄(k) be k independent samples from the neural model. For a sequence x̄, let x_i denote the i-th label and x̄_i = x_1, x_2, ..., x_i denote the first i labels. For every q ∈ Q and x ∈ Σ, the algorithm computes a count C(x, q) that accumulates the neural model's probability of emitting x, over all sampled positions whose context corresponds to state q.

We illustrate this counting with an example. Suppose we are interested in the count of the bi-gram New York. Given a bi-gram LM, SampleApprox generates k sentences and computes

C(York, New) = Σ_j Σ_{i : x_i(j) = New} p(York | x̄_i(j)).

In other words, it finds all sentences that contain the word New, observes how frequently York appears subsequently, and computes the conditional probability. After counting, it uses a difference of convex (DC) programming based algorithm to find the KL-minimal solution. If ℓ is the average number of words per sentence, the computational complexity of counting is Õ(k · ℓ · |Σ|), and the computational complexity of the KL minimization is O(|E| + |Q|) per iteration of DC programming.
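The counting step for the New York example can be sketched as follows. The `neural_prob` callable is a hypothetical stand-in for querying p_nn(target | prefix), and the sample sentences and stub probabilities are invented for illustration.

```python
def count_bigram(samples, neural_prob, context_word, target_word):
    """Counting step of SampleApprox for a single bigram (a sketch).

    For every sampled sentence and every position whose last word is
    `context_word`, accumulate the neural model's probability of
    `target_word` continuing at that position.
    """
    total = 0.0
    for sent in samples:
        for i, word in enumerate(sent):
            if word == context_word:
                total += neural_prob(sent[: i + 1], target_word)
    return total

samples = [["i", "love", "new", "york"], ["new", "shoes"]]
# A stub neural model that always assigns 0.3 to "york".
stub = lambda prefix, target: 0.3 if target == "york" else 0.0
# "new" occurs twice across the samples, so the count is 2 * 0.3 = 0.6.
assert abs(count_bigram(samples, stub, "new", "york") - 0.6) < 1e-12
```

In the full algorithm this accumulation runs over every state q and label x simultaneously, which is where the Õ(k · ℓ · |Σ|) counting cost comes from.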

Capitalization
As mentioned in Section 2, users often type with incorrect capitalization. One way of handling incorrect capitalization is to store an on-device capitalization normalizer (Beaufays and Strope, 2013) to correctly capitalize sentences before using them to train the neural model. However, capitalization normalizers have large memory footprints and are not suitable for on-device applications. To overcome this, the neural model is first trained on uncased user data. SampleApprox is then modified to approximate cased n-gram models from uncased neural models.
As before, let x̄(1), x̄(2), ..., x̄(k) be k independent (uncased) samples from the neural model. We capitalize them correctly at the server using Beaufays and Strope (2013). Let ȳ(1), ȳ(2), ..., ȳ(k) denote the corresponding k correctly capitalized samples. Let p_cap be another probability model, trained on non-user data, that approximates the ratio of uncased to cased probabilities given a context. Given a label y, let u(y) be the uncased symbol; for example, if y is York, then u(y) is york. With the above definitions, we modify the counting step of SampleApprox to accumulate p̃(y | ȳ_i(j)), where p̃ combines the neural model's probability of the uncased symbol u(y) with p_cap's estimate of the cased form y given the capitalized context.

We refer to this modified algorithm as CapSampleApprox.
We note that word-piece to word approximation incurs an additional computation cost of Õ((|E| + |Q| + |∆|) · ℓ), where ∆ is the vocabulary of words, E and Q are the sets of arcs and states in the word n-gram model, and ℓ is the maximum number of word-pieces per word.

Morphologically rich languages
To train neural models on morphologically rich languages, subword segments such as byte-pair encodings or word-pieces (Shibata et al., 1999; Schuster and Nakajima, 2012; Kudo, 2018) are typically used. This approach assigns conditional probabilities to subword segments, conditioned on prior subword segments. It has proved successful in the context of speech recognition (Chiu et al., 2018) and machine translation (Wu et al., 2016). Following these successes, we propose to train RNN LMs with word-pieces for morphologically rich languages.
We apply the word-piece approach of Kudo (2018), which computes a word-piece unigram LM over a word-piece inventory V_P. Each word-piece x_i ∈ V_P is associated with a unigram probability p(x_i). For a given word y and its possible segmentation candidates, the word is encoded with the segmentation that assigns the highest probability.
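Choosing the highest-probability segmentation under a unigram word-piece LM can be done with dynamic programming over character positions (Viterbi). The toy inventory and log-probabilities below are invented for illustration; they are not the trained model's values.

```python
import math

def best_segmentation(word, piece_logprob):
    """Segment `word` into word-pieces maximizing the product of unigram
    probabilities, via Viterbi over character positions."""
    n = len(word)
    best = [(-math.inf, [])] * (n + 1)
    best[0] = (0.0, [])
    for end in range(1, n + 1):
        for start in range(end):
            piece = word[start:end]
            if piece in piece_logprob and best[start][0] > -math.inf:
                score = best[start][0] + piece_logprob[piece]
                if score > best[end][0]:
                    best[end] = (score, best[start][1] + [piece])
    return best[n][1]

# Hypothetical word-piece inventory with unigram log-probabilities.
lp = {"crazi": math.log(0.1), "ness": math.log(0.2),
      "c": math.log(0.01), "razi": math.log(0.01),
      "craziness": math.log(0.001)}
assert best_segmentation("craziness", lp) == ["crazi", "ness"]
```

Here the two-piece split beats both the single rare whole-word piece and the three-piece split, mirroring how morpheme-like pieces ("crazi" + "ness") win when they are frequent.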
Throughout this paper we use 4K, 16K, and 30K as word-piece inventory sizes. These values lie within a range that provides a good trade-off between the LSTM embedding size and the richness of the language morphology. We use 100% character coverage to include all symbols that appear in the unigram distribution (Section 3), including the common English letters, accented letters (e.g. é, ô), and digits. Accented letters are important for languages like Portuguese. For fast decoding, the n-gram models still need to be at the word level, since word-piece n-gram models increase the depth of the beam search during decoding. We convert the word n-gram topology to an equivalent word-piece WFA topology and use SampleApprox to approximate the neural word-piece model on the word-piece WFA topology. We then convert the resulting word-piece WFA LM to the equivalent word n-gram LM. The remainder of this section outlines efficient algorithms for converting between word and word-piece WFA models.
A natural way to represent the transduction from word-piece sequences to word sequences is with a finite-state transducer. Given the properties of our word-piece representation, that transducer can be made sequential (i.e., input deterministic).
A sequential weighted finite-state transducer (WFST) is a deterministic WFA where each transition has an output label in addition to its (input) label and weight. We denote by o_q[x] ∈ ∆ ∪ {ε} the output label of the transition at state q with input label x, where ∆ denotes the output alphabet of the transducer and ε the empty string/sequence.
Let M be the minimal sequential (unweighted) finite-state transducer (FST) lexicon from word-piece sequences in Σ* to word sequences in ∆*, where Σ denotes our word-piece inventory, ∆ denotes our vocabulary, and * is Kleene closure.

Algorithm 1: Approximating a Neural Model as an N-Gram with a Supplemental Topology.
  Train R^u_W, R^u_P with FederatedAveraging
  Train A_W from the supplemental corpus C
  A_We, A_Wi, A_Wm, A_Wr ← Gen(R^u_W, A_W, ∅, NN2WFA_W)
  A_Pe, A_Pi, A_Pm, A_Pr ← Gen(R^u_P, A_W, A_Wi, NN2WFA_P)
(T denotes an unweighted topology and A denotes a weighted n-gram model. The superscript u represents uncased models.)

A word-piece topology B equivalent to the word topology A can be obtained by composing the word-piece-to-word transducer M with A: B = M ∘ A. Since A has backoff transitions, the generic composition algorithm of Allauzen et al. (2011) is used with a custom composition filter that ensures the result, B, is deterministic with a well-formed backoff structure, and hence is suitable for the counting step of SampleApprox. We give an explicit description of the construction of B, from which readers familiar with Allauzen et al. (2011) can infer the form of the custom composition filter.
The states in B are pairs (q_1, q_2), with q_1 ∈ Q_M and q_2 ∈ Q_A, initial state i_B = (i_M, i_A), and final state f_B = (f_M, f_A). Given a state (q_1, q_2) ∈ Q_B, the outgoing transitions and their destination states are defined as follows. If x ∈ L[q_1], then an x-labeled transition out of (q_1, q_2) is created when one of two conditions holds, depending on whether the output o_{q_1}[x] is a word readable from q_2 or the empty string ε; here R[q] denotes the set of output non-ε labels that can be emitted after following an output-ε path from q. Finally, if ϕ ∈ L[q_1], a backoff transition is created. The counting step of SampleApprox is applied to B, and the computed counts are transferred from B to A by relying on the following key property of M: for every word y in ∆, there exists a unique state q_y ∈ Q_M and a unique word-piece x_y ∈ Σ such that o_{q_y}[x_y] = y. The KL minimization step of SampleApprox is subsequently applied to A.
As an alternative, the unweighted word automaton A could be used to perform the counting step directly. Each sample x̄(j) could be mapped to a corresponding word sequence ȳ(j), mapping out-of-vocabulary word-piece sequences to an unknown token. However, the counting step would become much more computationally expensive, since p_nn(y | ȳ_i(j)) would have to be evaluated for all i, j and for all words y in the vocabulary, where p_nn is now a word-piece RNN.

Experiments

Neural language model

LSTM models (Hochreiter and Schmidhuber, 1997) have been successfully used in a variety of sequence processing tasks. LSTM models usually have a large number of parameters and are not suitable for on-device learning. In this work, we use various techniques to reduce the memory footprint and to improve model performance.
We use a variant of the LSTM with a Coupled Input and Forget Gate (CIFG) (Greff et al., 2017), which couples the forget and input decisions together and reduces the number of LSTM parameters by 25%.

Table 2: Parameters for neural language models. W and P refer to word and word-piece models, respectively. N_l, N_h, N_e, S_e, and S_total refer to the number of LSTM layers, the number of hidden states in the LSTM, the embedding dimension size, and the number of parameters in the embedding layer and in total, respectively. The suffixes "S" and "L" indicate small and large models. "G" represents GLSTM. The suffixes 4K, 16K, and 30K represent the vocabulary sizes.
We also use a group LSTM (GLSTM) (Kuchaiev and Ginsburg, 2017) to reduce the number of trainable variables of an LSTM matrix by the number of feature groups k; we set k = 5 in experiments. Table 2 lists the parameter settings of the word (W) and word-piece (P) models used in this study. Due to the memory limitations of on-device training, all models use fewer than 3.5M parameters. For each vocabulary size, we start with a base architecture consisting of one LSTM layer, a 96-dimensional embedding, and 670 hidden state units. We then increase the representational power of the LSTM cell by increasing the number of hidden units and using multi-layer LSTM cells (Sutskever et al., 2014). Residual LSTM connections (Kim et al., 2017) and layer normalization (Lei Ba et al., 2016) are used throughout the experiments, as these techniques were observed to improve convergence. To avoid the restriction that N_h = N_e at the output, we apply a projection step at the output gate of the LSTM (Sak et al., 2014), which reduces the dimension of the LSTM hidden state from N_h to N_e. We also share the embedding matrix between the input embedding and the output softmax layer, which reduces the memory requirement by |V| × N_e. We note that other recurrent neural models such as gated recurrent units (Chung et al., 2014) could be used instead of CIFG LSTMs.
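The 25% saving from CIFG coupling follows directly from counting gate blocks: a standard LSTM has four (input, forget, cell candidate, output), while coupling the forget gate to the input gate (f = 1 − i) leaves three. A rough parameter count, using the base architecture's dimensions from the text (this counts only the recurrent cell's gate weights, not embeddings or projections):

```python
def lstm_params(n_input, n_hidden, coupled=False):
    """Number of weight parameters in an LSTM cell's gate matrices.

    A standard LSTM has 4 gate blocks (input, forget, cell, output);
    CIFG couples input and forget (f = 1 - i), leaving 3 blocks and
    cutting gate parameters by 25%.
    """
    gates = 3 if coupled else 4
    # Each gate block: weights over [input; hidden] plus a bias vector.
    return gates * ((n_input + n_hidden) * n_hidden + n_hidden)

full = lstm_params(96, 670)                 # base architecture in the text
cifg = lstm_params(96, 670, coupled=True)
assert cifg / full == 0.75                  # exactly a 25% reduction
```

The ratio is exactly 0.75 regardless of layer sizes, since every gate block has the same shape.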
The federated RNN LMs are trained on two language settings of the virtual keyboard: American English (en_US) and Brazilian Portuguese (pt_BR). Following prior federated training work, 500 reporting clients are used to compute the gradient updates for each round. A server-side learning rate of 1.0, a client-side learning rate of 0.5, and Nesterov momentum of 0.9 are used. Both the word and word-piece models are trained over the same time range and with the same hyperparameters. Prior to federated training of the RNN LM, the word-piece inventory is constructed from the unigram distribution collected via the federated approach introduced in Section 3.
A common evaluation metric for both word and word-piece models is desirable during federated training. Such a metric can be used to monitor training and to select models for the CapSampleApprox algorithm. Neither cross-entropy nor accuracy serves this need, due to the mismatch in the vocabularies used. Word-level accuracy is hard to compute for the word-piece model, since it requires hundreds of inference calls to traverse all combinations of a word from the word-piece vocabulary. In this study, we use sentence log likelihood (SLL) for evaluation. Given a sentence x̄ = x_1, x_2, ..., x_m composed of m units (either words or word-pieces), SLL is evaluated as

SLL = Σ_{i=1}^{m} log p_nn(x_i | x̄_{i−1}).

One issue that arises is the handling of out-of-vocabulary (OOV) words. The OOV probability of the word model is about 8%. The comparable probability of an OOV word (according to V) for word-piece models is the product of the corresponding word-piece conditional probabilities, which is much smaller than 8%. To mitigate this issue, we define SLL excluding OOV (SLL_e) as the same sum restricted to in-vocabulary units, where OOV here includes word-pieces that are components of OOV words. In the following, SLL_e is used as the model selection metric.
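Given per-token log-probabilities from the model, SLL and SLL_e reduce to two sums. This is a minimal sketch; the `oov_flags` interface (marking OOV words, and word-pieces belonging to OOV words) is an assumption about how OOV membership would be supplied, and the probabilities are invented.

```python
import math

def sentence_log_likelihood(token_logprobs, oov_flags=None):
    """SLL = sum of per-token log p_nn; SLL_e additionally skips tokens
    flagged as OOV (or as word-pieces of OOV words)."""
    if oov_flags is None:
        oov_flags = [False] * len(token_logprobs)
    sll = sum(token_logprobs)
    sll_e = sum(lp for lp, oov in zip(token_logprobs, oov_flags) if not oov)
    return sll, sll_e

logps = [math.log(0.5), math.log(0.25), math.log(0.1)]
sll, sll_e = sentence_log_likelihood(logps, oov_flags=[False, False, True])
assert abs(sll - math.log(0.5 * 0.25 * 0.1)) < 1e-12
assert abs(sll_e - math.log(0.5 * 0.25)) < 1e-12
```

Because the word and word-piece models tokenize the same sentence differently, comparing raw SLL already works across vocabularies; SLL_e additionally removes the OOV mismatch described above.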

Approximated n-gram model
Algorithm 1 illustrates the workflow we use to generate different n-gram models for evaluation. Recall that CapSampleApprox takes an RNN LM, an n-gram topology, and a reweighting FST for capitalization normalization. The n-gram topology is empty under self-inference mode. Suresh et al. (2019a) showed that inferring the topology from the RNN LM does not perform as well as using the true n-gram topology obtained from the training corpus. Hence, we supplement the neural-inferred topology with the topology obtained from a large external corpus, denoted by A_W. We apply CapSampleApprox to four topologies and compare the resulting models: an n-gram model obtained from the external corpus's topology (A_e), an n-gram model obtained from a neural-inferred topology (A_i), an n-gram model obtained by interpolating (merging) the two models above (A_m), and an n-gram model obtained by approximating on the interpolated topology (A_r). We repeat this experiment for both word and word-piece RNN LMs, denoted by subscripts W and P, respectively. We evaluate all eight resulting n-gram models directly on the traffic of a production virtual keyboard, where prediction accuracy is evaluated over user-typed words.

Figure 5 shows the SLL_e metric for all the experiments listed in Table 2. Given comparable total numbers of parameters, the 4K word-piece models are superior to the 16K and 30K ones. Among 4K word-piece models, GLSTM is in general on par with its P 4K-L counterpart. The word model is better than all the word-piece models in both languages in SLL_e. We were surprised by this result, and hypothesize that it is due to the SLL_e metric discounting word-piece models' ability to model the semantics of OOV words. The solid lines are the best models we picked for A/B evaluation on the virtual keyboard (P 4K-L and W 30K). Table 3 shows the A/B evaluation results on both the en_US and pt_BR populations. The baseline model is an n-gram model trained directly from centralized logs.
All of the federated-trained models perform better than the baseline model. We repeated the A/B evaluation with word-piece models on en_US; the results are shown in Table 4. The performance of word-piece models is similar to that of word models. Among the federated models for en_US, A_Pr has the best result. This meets our expectation that the supplemental corpus helps improve the performance of the topology inferred from the RNN LM.

Conclusion
We have proposed methods to train production-quality n-gram language models using federated learning, which allows models to be trained without user-typed text ever leaving devices. The proposed methods are shown to perform better than traditional server-based algorithms in A/B experiments on real users of a virtual keyboard.