Unsupervised Evaluation Metrics and Learning Criteria for Non-Parallel Textual Transfer

We consider the problem of automatically generating textual paraphrases with modified attributes or properties, focusing on the setting without parallel data (Hu et al., 2017; Shen et al., 2017). This setting poses challenges for evaluation. We show that the metric of post-transfer classification accuracy is insufficient on its own, and propose additional metrics based on semantic preservation and fluency as well as a way to combine them into a single overall score. We contribute new loss functions and training strategies to address the different metrics. Semantic preservation is addressed by adding a cyclic consistency loss and a loss based on paraphrase pairs, while fluency is improved by integrating losses based on style-specific language models. We experiment with a Yelp sentiment dataset and a new literature dataset that we propose, using multiple models that extend prior work (Shen et al., 2017). We demonstrate that our metrics correlate well with human judgments, at both the sentence-level and system-level. Automatic and manual evaluation also show large improvements over the baseline method of Shen et al. (2017). We hope that our proposed metrics can speed up system development for new textual transfer tasks while also encouraging the community to address our three complementary aspects of transfer quality.


Introduction
We consider textual transfer, which we define as the capability of generating textual paraphrases with modified attributes or stylistic properties, such as politeness (Sennrich et al., 2016a), sentiment (Hu et al., 2017; Shen et al., 2017), and formality (Rao and Tetreault, 2018). An effective transfer system could benefit a range of user-facing text generation applications such as dialogue (Ritter et al., 2011) and writing assistance (Heidorn, 2000). It can also improve NLP systems via data augmentation and domain adaptation.
However, one factor that makes textual transfer difficult is the lack of parallel corpora. Advances have been made in developing transfer methods that do not require parallel corpora (see Section 2), but issues remain with automatic evaluation metrics. Prior work used crowdsourcing to obtain manually-written references and then used BLEU (Papineni et al., 2002) to evaluate sentiment transfer. However, this approach is costly and difficult to scale to arbitrary textual transfer tasks.
Researchers have thus turned to unsupervised evaluation metrics that do not require references. The most widely-used unsupervised evaluation relies on a pretrained style classifier and computes the fraction of transferred sentences that the classifier assigns to the target style (Shen et al., 2017). However, relying solely on this metric leads to models that completely distort the semantic content of the input sentence. Table 1 illustrates this tendency.
We address this deficiency by identifying two competing goals: preserving semantic content and producing fluent output. We contribute two corresponding metrics. Since the metrics are unsupervised, they can be used directly for tuning and model selection, even on test data. The three metric categories are complementary and help us avoid degenerate behavior in model selection. For particular applications, practitioners can choose the appropriate combination of our metrics to achieve the desired balance among transfer, semantic preservation, and fluency. It is often useful to summarize the three metrics into one number, which we discuss in Section 3.3.
We also add learning criteria to the framework of Shen et al. (2017) to accord with our new metrics. We encourage semantic preservation by adding a "cyclic consistency" loss (to ensure that transfer is reversible) and a loss based on paraphrase pairs (to show the model examples of content-preserving transformations). To encourage fluent outputs, we add losses based on pretrained corpus-specific language models. We also experiment with multiple, complementary discriminators and find that they improve the trade-off between post-transfer accuracy and semantic preservation.
To demonstrate the effectiveness of our metrics, we experiment with the textual transfer models discussed above, using both the Yelp polarity dataset of Shen et al. (2017) and a new literature dataset that we propose. Across model variants, our metrics correlate well with human judgments, at both the sentence level and the system level.

Related Work
Textual Transfer Evaluation Recent work has included human evaluation of the three categories (post-transfer style accuracy, semantic preservation, fluency), but does not propose automatic evaluation metrics for all three (Prabhumoye et al., 2018; Chen et al., 2018). There have been recent proposals for supervised evaluation metrics, but these require annotation and are therefore unavailable for new textual transfer tasks. There is a great deal of recent work in textual transfer (Yang et al., 2018b; Santos et al., 2018; Logeswaran et al., 2018; Nikolov and Hahnloser, 2018), but all of it either lacks certain categories of unsupervised metrics or lacks human validation of them, which we contribute. Moreover, the textual transfer community lacks discussion of early stopping criteria and methods of holistic model comparison. We propose a one-number summary of transfer quality, which can be used to select and compare models.
In contemporaneous work, Mir et al. (2019) similarly proposed three types of metrics for style transfer tasks. There are two main differences compared to our work: (1) They use a style-keyword masking procedure before evaluating semantic similarity, which works on the Yelp dataset (the only dataset Mir et al. (2019) test on) but does not work on our Literature dataset or similarly complicated tasks, because the masking procedure works against preserving content-specific, non-style-related words. (2) They do not provide a way of aggregating the three metrics for the purpose of model selection and overall comparison. We address these two problems, and we also propose metrics that are simple in addition to being effective, which is beneficial for ease of use and widespread adoption.
Textual Transfer Models In terms of generating the transferred sentences, to address the lack of parallel data, Hu et al. (2017) used variational autoencoders to generate content representations devoid of style, which can then be converted into sentences with a specific style. Ficler and Goldberg (2017) used conditional language models in which the desired content and style are conditioning contexts. Other work used a feature-based approach that deletes characteristic words from the original sentence, retrieves similar sentences in the target corpus, and generates output based on the original sentence and the characteristic words from the retrieved sentences; reinforcement learning has also been integrated into the textual transfer problem. Another way to address the lack of parallel data is to use learning frameworks based on adversarial objectives (Goodfellow et al., 2014); several have done so for textual transfer (Yu et al., 2017; Li et al., 2017; Yang et al., 2018a; Shen et al., 2017; Fu et al., 2018). Recent work uses target-domain language models as discriminators to provide more stable feedback during learning (Yang et al., 2018b).
To preserve semantics more explicitly, Fu et al. (2018) use a multi-decoder model to learn content representations that do not reflect styles. Shetty et al. (2017) use a cycle constraint that penalizes the L1 distance between the input and its round-trip transfer reconstruction. Our cyclic consistency loss is inspired by Shetty et al. (2017), together with the idea of back-translation in unsupervised neural machine translation (Artetxe et al., 2017; Lample et al., 2017) and the idea of cycle constraints in image generation (Zhu et al., 2017).

Issues with Most Existing Methods
Prior work on automatic evaluation of textual transfer has focused on post-transfer classification accuracy ("Acc"), computed by using a pretrained classifier to measure the classification accuracy of transferred texts (Hu et al., 2017; Shen et al., 2017). However, relying solely on this metric is problematic. Table 1 shows examples of transferred sentences at several points in training the model of Shen et al. (2017). Acc is highest very early in training and decreases over time as the outputs become a stronger semantic match to the input, a trend we show in more detail in Section 6. Post-transfer accuracy is thus inversely related to semantic similarity to the input sentence: the two metrics are complementary and difficult to optimize simultaneously. We also identify a third category of metric, namely fluency of the transferred sentence, and similarly find it to be complementary to the first two. These three metrics can be used to evaluate textual transfer systems and to perform hyperparameter tuning and early stopping. In our experiments, we found that training typically converges to a point that gives poor Acc, while intermediate results are much better under a combination of all three unsupervised metrics. Stopping criteria are rarely discussed in prior work on textual transfer.

Unsupervised Evaluation Metrics
We now describe our proposals. We validate the metrics with human judgments in Section 6.3.

Post-transfer classification accuracy ("Acc"):
This metric was mentioned above. We use a CNN (Kim, 2014) trained to classify a sentence as being from X_0 or X_1 (two corpora corresponding to different styles or attributes). Acc is then the percentage of transferred sentences that are classified as belonging to the target class.
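To make the computation concrete, a minimal sketch is given below; the `classify` callable is a hypothetical stand-in for the pretrained CNN classifier, and the names are illustrative rather than part of our released code.

```python
# Minimal sketch of post-transfer classification accuracy (Acc).
# `classify` stands in for a pretrained style classifier (e.g., a Kim 2014 CNN):
# it maps a sentence string to a predicted style id (0 or 1).
from typing import Callable, List

def post_transfer_accuracy(transferred: List[str],
                           target_styles: List[int],
                           classify: Callable[[str], int]) -> float:
    """Fraction of transferred sentences classified as their target style."""
    assert len(transferred) == len(target_styles)
    correct = sum(classify(s) == t for s, t in zip(transferred, target_styles))
    return correct / len(transferred)
```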
Semantic Similarity ("Sim"): We compute semantic similarity between the input and transferred sentences. We embed sentences by averaging their word embeddings weighted by idf scores, where idf(q) = log(|C| / |{s ∈ C : q ∈ s}|) for a word q, with s ranging over sentences and C = X_0 ∪ X_1. We use 300-dimensional GloVe word embeddings (Pennington et al., 2014). Sim is then the average of the cosine similarities over all original/transferred sentence pairs. Though this metric is quite simple, we show empirically that it is effective in capturing semantic similarity; even such a simple metric substantially improves the quality of transfer evaluation, and simplicity in evaluation metrics aids computational efficiency and widespread adoption. We also experimented with METEOR (Denkowski and Lavie, 2014), but found it to be strongly correlated with Sim (shown in the supplemental materials), so we adopt Sim for its computational efficiency and simplicity.
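A minimal sketch of the Sim computation follows, assuming the GloVe vectors have already been loaded into a dictionary `glove` mapping words to 300-dimensional arrays; the function names are illustrative.

```python
import math
from collections import Counter
from typing import Dict, List

import numpy as np

def idf_weights(corpus: List[List[str]]) -> Dict[str, float]:
    """idf(q) = log(|C| / |{s in C : q in s}|), with C = X_0 union X_1."""
    doc_freq = Counter(w for sent in corpus for w in set(sent))
    n = len(corpus)
    return {w: math.log(n / df) for w, df in doc_freq.items()}

def embed(sent: List[str], glove: Dict[str, np.ndarray],
          idf: Dict[str, float]) -> np.ndarray:
    """idf-weighted average of GloVe vectors (zero vector if nothing matches)."""
    vecs = [idf.get(w, 0.0) * glove[w] for w in sent if w in glove]
    return np.mean(vecs, axis=0) if vecs else np.zeros(300)

def sim_metric(originals, transferred, glove, idf) -> float:
    """Average cosine similarity over original/transferred sentence pairs."""
    sims = []
    for x, y in zip(originals, transferred):
        u, v = embed(x, glove, idf), embed(y, glove, idf)
        denom = np.linalg.norm(u) * np.linalg.norm(v)
        sims.append(float(u @ v / denom) if denom > 0 else 0.0)
    return float(np.mean(sims))
```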
Different textual transfer tasks may require different degrees of semantic preservation. Our summary metric, described in Section 3.3, can be tailored by practitioners for various datasets and tasks which may require more or less weight on semantic preservation.
Fluency ("PP"): Transferred sentences can exhibit high Acc and Sim while still being ungrammatical, so we add a third unsupervised metric to target fluency. We compute the perplexity ("PP") of the transferred corpus using a language model pretrained on the concatenation of X_0 and X_1. We note that perplexity is distinct from fluency; however, certain measures based on perplexity have been shown to correlate with sentence-level human fluency judgments (Gamon et al., 2005; Kann et al., 2018). Furthermore, as discussed in Section 3.3, we penalize abnormally small perplexities, as transferred texts with such perplexities typically consist entirely of words and phrases that do not result in meaningful sentences. Our summary metric, described in Section 3.3, can be tailored by practitioners for datasets and tasks that require more or less weight on fluency.
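The corpus-level perplexity can be sketched as follows, assuming the pretrained language model is exposed through a hypothetical `token_log_probs` callable that returns per-token natural-log probabilities.

```python
import math
from typing import Callable, List

def corpus_perplexity(sentences: List[List[str]],
                      token_log_probs: Callable[[List[str]], List[float]]) -> float:
    """Perplexity of the transferred corpus under a fixed pretrained LM.

    `token_log_probs` is a stand-in for the pretrained language model: it
    returns the log-probability of each token in the sentence (including an
    end-of-sentence token if the LM uses one).
    """
    total_nll, total_tokens = 0.0, 0
    for sent in sentences:
        lps = token_log_probs(sent)
        total_nll -= sum(lps)
        total_tokens += len(lps)
    return math.exp(total_nll / total_tokens)
```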

Summarizing Metrics into One Score
It is often useful to summarize multiple metrics into one number, for ease of tuning and model selection. To do so, we propose an adjusted geometric mean (GM) for a generated sentence q, computed from thresholded versions of the three metrics: each factor applies [·]_+ = max(·, 0) to a metric shifted by a threshold, with thresholds t = (t_i), i ∈ [4], and GM is the geometric mean of the resulting factors. Note that, as discussed above, we penalize abnormally small perplexities through the choice of t_4.
When choosing models, different practitioners may prefer different trade-offs among Acc, Sim, and PP. As one example, we provide a set of parameters based on our experiments: t = (63, 71, 97, −37). We sampled 300 pairs of transferred sentences from a range of models on our two tasks (Yelp and Literature) and asked annotators which of the two sentences in each pair is better. We denote a pair of sentences by (y+, y−), where y+ is preferred, and we train the parameters t with a pairwise loss over these preferences that encourages GM(y+) to exceed GM(y−). In future work, a richer function f(Acc, Sim, PP) could be learned from additional annotated data, and more diverse textual transfer tasks could be integrated into the parameter training.
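The sketch below shows one way to compute such a summary score. The particular four-factor form is our reading of the thresholded description above (with Acc and Sim on a 0-100 scale), so it should be treated as an assumption rather than the exact equation.

```python
def gm_score(acc: float, sim: float, pp: float,
             t=(63.0, 71.0, 97.0, -37.0)) -> float:
    """Adjusted geometric mean of Acc, Sim (0-100 scale), and perplexity PP.

    One plausible reading of the thresholded form described in the text:
    Acc and Sim must exceed t1/t2, PP must stay below t3, and the (PP - t4)
    factor keeps abnormally small perplexities from being rewarded.
    """
    relu = lambda v: max(v, 0.0)
    factors = [relu(acc - t[0]), relu(sim - t[1]),
               relu(t[2] - pp), relu(pp - t[3])]
    prod = 1.0
    for f in factors:
        prod *= f
    return prod ** 0.25
```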

Textual Transfer Models
The textual transfer systems introduced below are designed to target the metrics. These system variants are also used for metric evaluation. Note that each variant of the textual transfer system uses different components described below.
Our model is based on Shen et al. (2017). We define y ∈ R^200 and z ∈ R^500 to be latent style and content variables, respectively. X_0 and X_1 are two corpora containing sentences x_0^(i) and x_1^(i), respectively, and word embeddings are in R^100. We transfer using an encoder-decoder framework. The encoder E : X × Y → Z (where X, Y, and Z are the sentence domain, style space, and content space, respectively) is an RNN with gated recurrent unit (GRU; Chung et al., 2014) cells. The decoder/generator G : Y × Z → X is also a GRU RNN.
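A rough PyTorch sketch of this encoder/decoder setup is given below. How the style vector seeds each GRU and the exact decoder state size are illustrative assumptions, not details specified in the text.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """E : (sentence, style) -> content vector z in R^500 (GRU over embeddings)."""
    def __init__(self, vocab_size, emb_dim=100, style_dim=200, content_dim=500):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, content_dim, batch_first=True)
        self.init_proj = nn.Linear(style_dim, content_dim)  # seed the GRU with the style

    def forward(self, tokens, y):
        h0 = self.init_proj(y).unsqueeze(0)            # (1, batch, 500)
        _, h_last = self.rnn(self.emb(tokens), h0)
        return h_last.squeeze(0)                        # z: (batch, 500)

class Generator(nn.Module):
    """G : (style, content) -> sentence; GRU decoder initialized from (y, z)."""
    def __init__(self, vocab_size, emb_dim=100, style_dim=200, content_dim=500):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        hidden = style_dim + content_dim                # 700-dim decoder state (assumed)
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, tokens, y, z):
        h0 = torch.cat([y, z], dim=-1).unsqueeze(0)     # (1, batch, 700)
        states, _ = self.rnn(self.emb(tokens), h0)
        return self.out(states), states                 # vocabulary logits and hidden states
```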
We use x'_t to denote the style-transferred version of x_t; we want x'_t to preserve the content of x_t while carrying the other style. Shen et al. (2017) used two families of losses for training: reconstruction and adversarial losses.

Reconstruction and Adversarial Losses
The reconstruction loss helps the encoder and decoder encode and generate natural language without any attempt at transfer: it seeks to ensure that when a sentence x_t is encoded to its content vector and then decoded, the generated sentence matches x_t. For their adversarial loss, Shen et al. (2017) used a pair of discriminators: D_0 tries to distinguish between real sentences x_0 and sentences transferred into style 0, and D_1 between x_1 and sentences transferred into style 1. In particular, the discriminators operate on the decoder G's hidden states rather than on output words.
Each discriminator is trained over mini-batches of size k, and D_t outputs the probability that its input comes from style t. The discriminators are convolutional neural networks following Kim (2014), with filter n-gram sizes of 3, 4, and 5 and 128 filters each. We obtain hidden states h by unfolding G from the initial state (y_t, z_t^(i)) and feeding in x_t^(i); we obtain hidden states h' by unfolding G from (y_{1−t}, z_t^(i)) and feeding in the previous output probability distributions.
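A sketch of such a discriminator is shown below: a Kim (2014)-style CNN applied to a sequence of decoder hidden states. The 700-dimensional state size simply matches the encoder/decoder sketch above and is otherwise an assumption.

```python
import torch
import torch.nn as nn

class HiddenStateCNN(nn.Module):
    """Discriminator D_t: CNN (Kim, 2014) over decoder hidden states.

    Outputs the probability that a sequence of hidden states comes from
    decoding a real sentence of style t (vs. a transferred one).
    """
    def __init__(self, state_dim=700, filter_sizes=(3, 4, 5), n_filters=128):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(state_dim, n_filters, kernel_size=k) for k in filter_sizes)
        self.proj = nn.Linear(n_filters * len(filter_sizes), 1)

    def forward(self, hidden_states):                  # (batch, seq_len, state_dim)
        x = hidden_states.transpose(1, 2)              # Conv1d expects (batch, dim, len)
        pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return torch.sigmoid(self.proj(torch.cat(pooled, dim=1))).squeeze(-1)
```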

Cyclic Consistency Loss
We use a "cyclic consistency" loss (Zhu et al., 2017) to encourage already-transferred sentences to be recoverable by transferring back again. This loss is similar to L_rec except that we now transfer style twice. Recall that we seek to transfer x_t to x'_t. After successful transfer, we expect x'_t to have style y_{1−t}, and x''_t (transferred back from x'_t) to have style y_t; we want x''_t to be very close to the original untransferred x_t. To compute this loss, the first step is to transfer sentences x_t from style t to 1−t to obtain x'_t. The second step is to transfer x'_t of style 1−t back to style t, and the loss scores the words of x_t under the probability distributions computed by the decoder. Backpropagation through the embedding, encoder, and decoder parameters is based only on the second step, because the first step involves argmax operations that prevent backpropagation. Still, we find that the cyclic loss greatly improves semantic preservation during transfer.
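A hedged sketch of the two-step cyclic loss follows; `encode`, `decode_greedy`, and `decode_logits` are hypothetical interfaces standing in for the transfer model's components, and the cross entropy mirrors the reconstruction loss.

```python
import torch
import torch.nn.functional as F

def cyclic_consistency_loss(x_t, y_t, y_other, encode, decode_greedy,
                            decode_logits, pad_id=0):
    """Two-step cyclic loss: transfer x_t to the other style, transfer back,
    and score how well the round trip reproduces x_t.

    encode(tokens, y)            -> content vector z
    decode_greedy(y, z)          -> argmax-decoded token ids (no gradient)
    decode_logits(y, z, targets) -> teacher-forced per-position vocabulary logits
    """
    # Step 1: transfer x_t into the other style; argmax decoding blocks gradients.
    with torch.no_grad():
        z = encode(x_t, y_t)
        x_transferred = decode_greedy(y_other, z)
    # Step 2: transfer back and compute token-level cross entropy against x_t.
    z_back = encode(x_transferred, y_other)
    logits = decode_logits(y_t, z_back, x_t)            # (batch, len, vocab)
    return F.cross_entropy(logits.flatten(0, 1), x_t.flatten(),
                           ignore_index=pad_id)
```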

Paraphrase Loss
While L_rec provides the model with one way to preserve style (i.e., simply reproduce the input), the model does not see any examples of style-preserving paraphrases. To address this, we add a paraphrase loss very similar to losses used in neural machine translation. We define the loss on a sentential paraphrase pair (u, v) and assume that u and v have the same style and content. The loss is the sum of token-level log losses for generating each word in v conditioned on the encoding of u. For paraphrase pairs, we use the ParaNMT-50M dataset (Wieting and Gimpel, 2018); we first filter out sentence pairs in which one sentence is a substring of the other, and then randomly select 90K pairs.
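The paraphrase loss then has the same shape as a standard teacher-forced sequence-to-sequence loss; the sketch below reuses the hypothetical `encode`/`decode_logits` interfaces from the cyclic-loss sketch above.

```python
import torch.nn.functional as F

def paraphrase_loss(u, v, y, encode, decode_logits, pad_id=0):
    """Token-level log loss for generating paraphrase v from the encoding of u.

    u and v are token-id tensors for a sentential paraphrase pair assumed to
    share style y and content; encode/decode_logits are stand-in interfaces
    for the encoder and (teacher-forced) decoder of the transfer model.
    """
    z = encode(u, y)
    logits = decode_logits(y, z, v)                     # (batch, len, vocab)
    return F.cross_entropy(logits.flatten(0, 1), v.flatten(),
                           ignore_index=pad_id)
```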

Language Modeling Loss
We attempt to improve fluency (our third metric) and assist transfer with a loss based on matching a pretrained language model for the target style. The loss is the cross entropy (CE) between the probability distribution from this language model and the distribution from the decoder, summed over positions in the output. When transferring from style t to 1−t, l_{t,i} denotes the distribution over the vocabulary (built from the corpora) at position i under the language model p^{LM}_{1−t} pretrained on sentences from style 1−t, and g_{t,i} denotes the corresponding distribution under the decoder G; both are conditioned on the i−1 words already predicted by the decoder. The two style-specific language models are pretrained on the corpora corresponding to the two styles. They are GRU RNNs with a dropout probability of 0.5, and they are kept fixed during the training of the transfer network.
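One way to implement this cross entropy between the fixed LM distribution and the decoder distribution is sketched below; the tensor shapes are illustrative assumptions.

```python
def language_modeling_loss(lm_probs, dec_log_probs, mask=None):
    """Cross entropy between the style-specific LM distribution (l) and the
    decoder's distribution (g) at each output position.

    lm_probs:      (batch, len, vocab) probabilities from the pretrained LM
                   for the target style (kept fixed, no gradient).
    dec_log_probs: (batch, len, vocab) log-probabilities from the decoder G.
    mask:          optional (batch, len) 0/1 tensor marking real tokens.
    """
    ce = -(lm_probs.detach() * dec_log_probs).sum(dim=-1)   # (batch, len)
    if mask is not None:
        return (ce * mask).sum() / mask.sum()
    return ce.mean()
```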

Multiple Discriminators
Note that each of the textual transfer system variants uses different losses or components described in this section. To create more variants, we add a second pair of discriminators, D'_0 and D'_1, to the adversarial loss to address the possible mode collapse problem (Nguyen et al., 2017). In particular, we use CNNs with n-gram filter sizes of 3, 4, and 5 for D_0 and D_1, and CNNs with n-gram sizes of 1, 2, and 3 for D'_0 and D'_1. Also, for D'_0 and D'_1, we use the Wasserstein GAN (WGAN) framework (Arjovsky et al., 2017). The adversarial loss follows the WGAN formulation, with the exception that we use the hidden states of the decoder instead of word distributions as inputs to the discriminators, similar to Eq. (3).
We choose WGAN in the hope that its differentiability properties can help avoid vanishing gradient and mode collapse problems. We expect the generator to receive helpful gradients even if the discriminators perform well. This approach leads to much better outputs, as shown below.
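For reference, a generic WGAN objective with weight clipping (Arjovsky et al., 2017) is sketched below, applied to decoder hidden states. This is an illustrative sketch of the standard formulation rather than the exact training objective and hyperparameters used here.

```python
import torch

def wgan_critic_loss(critic, real_states, fake_states):
    """WGAN critic objective on decoder hidden states: maximize the score gap
    between real-style and transferred sequences (written as a loss to minimize)."""
    return critic(fake_states).mean() - critic(real_states).mean()

def wgan_generator_loss(critic, fake_states):
    """Generator side: push transferred hidden states to score like real ones."""
    return -critic(fake_states).mean()

def clip_critic_weights(critic, clip=0.01):
    """Weight clipping after each critic update, enforcing the Lipschitz
    constraint used by the original WGAN."""
    with torch.no_grad():
        for p in critic.parameters():
            p.clamp_(-clip, clip)
```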

Datasets
Yelp sentiment. We use the same Yelp dataset as Shen et al. (2017), which uses corpora of positive and negative Yelp reviews. The goal of the transfer task is to generate rewritten sentences with similar content but inverted sentiment. We use the same train/development/test split as Shen et al. (2017). The dataset has 268K, 38K, 76K positive training, development, and test sentences, respectively, and 179K/25K/51K negative sentences. Like Shen et al. (2017), we only use sentences with 15 or fewer words.
Literature. We consider two corpora of literature. The first contains works of Charles Dickens collected from Project Gutenberg. The second is comprised of modern literature from the Toronto Books Corpus (Zhu et al., 2015). Unlike the Yelp dataset, the two corpora have very different vocabularies. This dataset poses challenges for the textual transfer task, and it provides diverse data for assessing the quality of our evaluation system. Given the different and sizable vocabularies, we preprocess by using the named entity recognizer in Stanford CoreNLP to replace names and locations with -PERSON- and -LOCATION- tags, respectively. We also use byte-pair encoding (BPE), commonly used in generation tasks (Sennrich et al., 2016b), and we only use sentences with lengths between 6 and 25 words. The resulting dataset has 156K, 5K, and 5K Dickens training, development, and test sentences, respectively, and 165K/5K/5K modern literature sentences.

Pretrained Evaluation Models
For the pretrained classifiers, the accuracies on the Yelp and Literature development sets are 0.974 and 0.933, respectively. For language models, the perplexities on the Yelp and Literature development sets are 27.4 and 40.8, respectively.
Results and Analysis

Analyzing Metric Relationships

Table 4: Manual evaluation results (%) using models from Table 2 (i.e., with roughly fixed Acc). > means "better than". ∆Sim = Sim(A) − Sim(B), and ∆PP = PP(A) − PP(B) (note that lower PP generally means better fluency). Each row uses at least 120 sentence pairs. A cell is bold if it represents a model win of at least 10%.

Figure 1 illustrates the relationships among the metrics and reveals several trends. The figure shows trajectories of statistics on corpora transferred/generated from the development set during learning. Each two consecutive markers are separated by half an epoch of training, and lower-left markers generally precede upper-right ones.
In Figure 1(a), the plots of Sim against error rate (1 − Acc) exhibit positive slopes, meaning that error rate is positively correlated with Sim; curves closer to the upper-left corner represent a better trade-off between error rate and Sim. In the plots of PP against Sim in Figure 1(b), the M0 curve exhibits a large positive slope while the curves for the other models do not, which indicates that M0 sacrifices PP for Sim; the other models maintain consistent PP as Sim increases during training.

System-Level Validation
Annotators were shown the untransferred sentence, as well as sentences produced by two models (which we refer to as A and B). They were asked to judge which output better reflects the target style (A, B, or tie), which better preserves the semantics of the original (A, B, or tie), and which is more fluent (A, B, or tie). Results are shown in Table 4. Overall, the results show the same trends as our automatic metrics. For example, on Yelp, the largest differences in human judgments of semantic preservation (M2>M0, M7>M0, M7>M2) correspond to the largest differences in Sim, while M6 and M7 have very similar human judgments and very similar Sim scores.

Sentence-Level Validation of Metrics
We describe a human sentence-level validation of our metrics in Table 5.
To validate Acc, human annotators were asked to judge the style of 100 transferred sentences (sampled equally from M0, M2, M6, M7). This is a binary choice (style 0 or style 1, with no "tie" option), so annotators had to commit to one style. We then compute the percentage of machine and human judgments that match. We validate Sim and PP by computing sentence-level Spearman's ρ between each metric and human judgments (an integer score from 1 to 4) on 150 generated sentences (sampled equally from M0, M2, M6, M7). We presented pairs of original and transferred sentences to human annotators and asked them to rate the level of semantic similarity (and, separately, fluency), where 1 means "extremely bad", 2 means "bad/ok/needs improvement", 3 means "good", and 4 means "very good." They were also given 5 examples for each rating (i.e., a total of 20 across the four levels) before annotating. From Table 5, all validations show strong correlations on the Yelp dataset and reasonable correlations on Literature.
We validate GM by obtaining human pairwise preferences (without the "tie" option) of overall transfer quality and measuring the fraction of pairs in which the GM score agrees with the human preference. Out of 300 pairs (150 from each dataset), 258 (86%) match.
The transferred sentences used in the evaluation are sampled from the development-set outputs of models M0, M2, M6, and M7, at the accuracy levels used in Table 2. In preparing the data for manual annotation, models and transfer directions were sufficiently randomized.

Comparing Losses
Cyclic Consistency Loss. We compare the trajectories of the baseline model (M0) and the +cyc model (M2). Table 2 and Figure 1 show that under similar Acc, M2 has much better semantic similarity for both Yelp and Literature. In fact, cyclic consistency loss proves to be the strongest driver of semantic preservation across all of our model configurations. The other losses do not constrain the semantic relationship across style transfer, so we include the cyclic loss in M3 to M7.
Paraphrase Loss. Table 2 shows that the model with paraphrase loss (M1) slightly improves Sim over M0 on both datasets under similar Acc. For Yelp, M1 has better Acc and PP than M0 at comparable semantic similarity. So, when used alone, the paraphrase loss helps. However, when combined with other losses (e.g., compare M2 to M4), its benefits are mixed. For Yelp, M4 is slightly better in preserving semantics and producing fluent output, but for Literature, M4 is slightly worse. A challenge in introducing an additional paraphrase dataset is that its notions of similarity may clash with those of content preservation in the transfer task. For Yelp, both corpora share a great deal of semantic content, but Literature shows systematic semantic differences even after preprocessing.
Language Modeling Loss. When comparing between M2 and M3, between M4 and M5, and between M6 and M7, we find that the addition of the language modeling loss reduces PP, sometimes at a slight cost of semantic preservation.

Results based on Supervised Evaluation
If we want to compare models using a single number, GM is our unsupervised option. We can also compute BLEU scores between our generated outputs and human-written gold-standard outputs, using the 1000 Yelp references collected in prior work. For BLEU scores of prior methods, we use the values reported by Yang et al. (2018b), and we use the same BLEU implementation as Yang et al. (2018b), i.e., multi-bleu.perl. We compare three models selected during training from each of our M6 and M7 settings. We also report post-transfer accuracies reported by prior work, as well as our own computed Acc scores for M0, M6, M7, and the untransferred sentences. Though the classifiers differ across models, their accuracy tends to be very high (> 0.97), making rough comparisons of Acc across them possible.

Table 6: Results on Yelp sentiment transfer, where BLEU is computed between 1000 transferred sentences and human references, and Acc is restricted to the same 1000 sentences. Our best models (right table) achieve higher BLEU than prior work at similar levels of Acc, but untransferred sentences achieve the highest BLEU. Acc*: the definition of Acc varies by row because different classifiers are in use. Other prior results are not included as they are worse.
BLEU scores and post-transfer accuracies are shown in Table 6. The most striking result is that untransferred sentences have the highest BLEU score by a large margin, suggesting that prior work for this task has not yet eclipsed the trivial baseline of returning the input sentence. However, at similar levels of Acc, our models have higher BLEU scores than prior work. We additionally find that supervised BLEU shows a trade-off with Acc: for a single model type, higher Acc generally corresponds to lower BLEU.

Conclusion
We proposed three kinds of metrics for non-parallel textual transfer, studied their relationships, and developed learning criteria to address them. We emphasize that all three metrics are needed to make meaningful comparisons among models. We expect our components to be applicable to a broad range of generation tasks.