Generating Long and Informative Reviews with Aspect-Aware Coarse-to-Fine Decoding

Generating long and informative review text is a challenging natural language generation task. Previous work focuses on word-level generation, neglecting the importance of the topical and syntactic characteristics of natural language. In this paper, we propose a novel review generation model built around an elaborately designed aspect-aware coarse-to-fine generation process. First, we model aspect transitions to capture the overall content flow. Then, to generate a sentence, an aspect-aware sketch is predicted by an aspect-aware decoder. Finally, another decoder fills in the semantic slots by generating the corresponding words. Our approach is able to jointly utilize aspect semantics, syntactic sketches, and context information. Extensive experimental results demonstrate the effectiveness of the proposed model.


Introduction
In the past decades, online review services (e.g., AMAZON and YELP) have become an important kind of information platform where users post their feedback or comments about products (Kim et al., 2016). Usually, writing an informative and well-structured review requires considerable effort from users. To assist the writing process, the task of review generation has been proposed to automatically generate review text for a user given a product and her/his rating on it (Tang et al., 2016; Zhou et al., 2017).
In the literature, various methods have been developed for review generation (Tang et al., 2016; Zhou et al., 2017; Ni et al., 2017; Catherine and Cohen, 2018). Most of these methods adopt Recurrent Neural Network (RNN) based models, especially the improved variants of Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) and the Gated Recurrent Unit (GRU) (Cho et al., 2014). They fulfill the review generation task by performing the decoding conditioned on useful context information. Usually, an informative review consists of multiple sentences containing substantive comments from users. Hence, a major problem of existing RNN-based methods is their limited capacity for producing long and informative text. More recently, Generative Adversarial Net (GAN) based methods (Zang and Wan, 2017; Yu et al., 2017; Xu et al., 2018a) have been proposed to enhance the generation of long, diverse and novel text. However, they still focus on word-level generation, and neglect the importance of the topical and syntactic characteristics of natural language.
As found in the literature on linguistics (Pullum, 2010) and writing (Bateman and Zock, 2003), the writing process itself involves multiple stages focusing on different levels of goals. We argue that an ideal review generation approach should follow the writing procedure of a real user and capture rich characteristics of natural language. With this motivation, we design an elaborate coarse-to-fine generation process that considers both aspect semantics and syntactic characteristics. Figure 1 presents an illustrative example of our review generation process. First, we conceive the content flow, which is characterized as an aspect sequence. An aspect describes some property or attribute of a product (Zhao et al., 2010), such as sound and service in this example. To generate a sentence, we further create a sentence skeleton containing semantic slots given the aspect semantics. The semantic slots denote placeholders for useful syntactic information (e.g., Part-of-Speech tags). Finally, the semantic slots are filled with the generated words. The process is repeated until all sentences are generated.

Figure 1. An illustrative example of our generation process. We select a sample review on AMAZON. The aspect labels and sketches are manually created to explain our idea; they will be learned by our model.
Based on such a generation process, in this paper, we propose a novel aspect-aware coarse-to-fine decoder for generating product reviews. We first utilize unsupervised topic models to extract aspects and tag review sentences with aspect labels. We develop an attention-based RNN decoder to generate the aspect sequence conditioned on the context, including the user, item and rating. By modeling the transitions of aspect semantics across sentences, we are able to capture the content flow of the whole review. Then, for each sentence, we generate a semantic template called a sketch, which represents the sentence skeleton, using an aspect-aware decoder. Finally, we generate the word content with an informed decoder that considers aspect labels, sketch symbols and previously decoded words. Extensive experiments on three real-world review datasets have demonstrated the effectiveness of the proposed model.
To our knowledge, this is the first review generation model that jointly utilizes aspect semantics, syntactic sketches, and context information. We decompose the entire generation process into three stages. In this way, the generation of long review text becomes more controllable, since we consider a simpler sequence generation task at each stage. Furthermore, we incorporate language characteristics (e.g., Part-of-Speech tags and n-grams) into the aspect-aware decoder to instruct the generation of well-structured text.

Related Work
In recent years, researchers have made great progress in natural language generation (NLG) (Zhou et al., 2018). As a special NLG task, automatic review generation has been proposed to assist users in writing online reviews. RNN-based methods have been proposed to generate the review content conditioned on useful context information (Tang et al., 2016; Zhou et al., 2017). In particular, the task of review generation is closely related to studies in recommender systems that aim to predict the preference of a user over products. Hence, several studies propose to couple the solutions of the two lines of research and utilize user-product interactions to improve review generation (Ni et al., 2017; Catherine and Cohen, 2018; Ni and McAuley, 2018). Although Ni and McAuley (2018) have explored aspect information to some extent, they characterize the generation process in a single stage and do not perform coarse-to-fine decoding. Besides, the aspect transition patterns have not been modeled.
It has been found that RNN models tend to generate short, repetitive, and dull texts (Luo et al., 2018). To address this issue, Generative Adversarial Net (GAN) based approaches have recently been proposed to generate long, diverse and novel text (Zang and Wan, 2017; Yu et al., 2017; Xu et al., 2018a). These methods usually utilize reinforcement learning techniques to deal with the generation of discrete symbols. However, they seldom consider the linguistic information of natural language, and thus cannot fully address the difficulties of our task.
Our work is inspired by work on using sketches as intermediate representations (Dong and Lapata, 2018; Wiseman et al., 2018; Xu et al., 2018b). These works usually focus on sentence- or utterance-level generation tasks, in which global aspect semantics and transitions are not considered. Our work is also related to review data mining, especially studies on topic or aspect extraction from review data (Qiu et al., 2017; Zhao et al., 2010).

Problem Formulation
A review is a natural language text written by a user u on a product (or item) i with a rating score of r. Let V denote the vocabulary and y_{1:m} = {y_{j,1}, ..., y_{j,t}, ..., y_{j,n_j}}_{j=1}^{m} denote a review text consisting of m sentences, where y_{j,t} ∈ V denotes the t-th word of the j-th review sentence and n_j is the length of the j-th sentence.
We assume that the review generation process is decomposed into three stages. First, a user conceives an aspect sequence representing the major content flow of the review. Then, to generate a sentence, we predict an aspect-aware sketch conditioned on the corresponding aspect label. Finally, based on the aspect label and the sketch, we generate the word content of the sentence. The process is repeated until all the sentences are generated.
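The three-stage decomposition can be sketched as a toy pipeline. Everything below (the fixed aspect sequence, the sketch table, the slot fillers) is an illustrative stand-in for what the model actually learns with its three decoders:

```python
# Toy sketch of the three-stage decomposition. All tables below are
# hypothetical stand-ins; the real model predicts each step with a
# learned decoder conditioned on (user, item, rating).

# Stage 1: an aspect sequence capturing the review's content flow.
def generate_aspects(context):
    # fixed here for illustration
    return ["sound", "price"]

# Stage 2: one aspect-aware sketch (sentence skeleton) per aspect.
SKETCHES = {
    "sound": ["the", "NN", "are", "pretty_well"],
    "price": ["the", "NN", "was", "very", "JJ"],
}

# Stage 3: fill semantic slots (POS tags) with words; copy literal tokens.
SLOT_FILLERS = {("sound", "NN"): "vocals", ("price", "NN"): "price",
                ("price", "JJ"): "reasonable"}

def realize(aspect, sketch):
    words = []
    for sym in sketch:
        if (aspect, sym) in SLOT_FILLERS:        # semantic slot -> word
            words.append(SLOT_FILLERS[(aspect, sym)])
        else:                                    # literal token or n-gram
            words.extend(sym.split("_"))
    return " ".join(words)

def generate_review(context):
    return [realize(a, SKETCHES[a]) for a in generate_aspects(context)]
```

With these toy tables, `generate_review(None)` yields "the vocals are pretty well" followed by "the price was very reasonable", mirroring the running example.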
Let A denote a set of A aspects in our collection. Following (Zhao et al., 2010), we assume each review sentence is associated with an aspect label describing some property or attribute of a product or an item. We derive an aspect sequence for a review text, denoted by a_{1:m} = (a_1, ..., a_j, ..., a_m), where a_j ∈ A is the aspect label (or ID) of the j-th sentence. For each sentence, we assume that it is written according to some semantic sketch, which is also denoted as a symbol sequence. Let s_{1:m} = {s_{j,1}, ..., s_{j,t}, ..., s_{j,n_j}}_{j=1}^{m}, where n_j is the length of the j-th sketch, and s_{j,t} is the t-th token of the j-th sketch, denoting a word, a Part-of-Speech tag, a bi-gram, etc.
Based on the above notations, we are ready to define our task. Given user u, item i and rating score r, we aim to automatically generate a review that maximizes the joint probability of the aspects, sketches and words:

Pr(a_{1:m}, s_{1:m}, y_{1:m} | c) = ∏_{j=1}^{m} Pr(a_j | a_{<j}, c) · Pr(s_{j,1:n_j} | a_j, c) · Pr(y_{j,1:n_j} | s_{j,1:n_j}, a_j, c),    (1)

where c = {u, i, r} denotes the set of available context information. Note that, in training, the aspects and sketches are available, and we learn the model parameters by optimizing the joint probability in Eq. 1 over all observed reviews. At test time, the aspects and sketches are unknown: we first infer an aspect sequence, then predict the corresponding sketch for each sentence, and finally generate the review content based on the predicted aspect and sketch information.

The Proposed Approach
Figure 2. The overview of the proposed review generation model with the example of "the vocals are pretty well". The predicted aspect label is sound, and the generated sketch is "the NN are pretty_well".

Unlike previous works that generate the review in a single stage, we decompose the generation process into three stages, namely aspect sequence generation, aspect-aware sketch generation, and sketch-based sentence generation. We present an overview of the proposed model in Fig. 2. Next, we describe each part in detail.

Aspect Sequence Generation
To learn the model for generating aspect sequences, we need to derive the aspect sequence for training, and then decode the aspect sequence based on the context encoder.
Aspect Extraction. Aspects provide an informative summary of the feature or attribute information of a product or an item. For example, the aspects of a restaurant may include food, staff, price, etc. Since it is time-consuming and laborious to manually discover aspects from text, we use an automatic unsupervised topic modeling approach to learn the aspects from the review content. Based on the Twitter-LDA model (Zhao et al., 2011), we treat a review as a document consisting of multiple sentences. Each document is associated with a distribution over the aspects. When generating a sentence, an aspect label (or ID) is first sampled according to the document's distribution over aspects. Then, the entire sentence is generated according to the word distribution conditioned on the aspect label. To purify the aspect words, we further incorporate a background language model to absorb background words. Once the topic model has been learned, we can derive a set of A aspect-specific word distributions, denoted by {θ^a}, where θ^a_w denotes the probability of a word w from the vocabulary V under aspect a.
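The sentence-tagging step used later in the pipeline can be illustrated with a small script: a sentence is assigned the aspect with the maximum (log-)posterior under the learned aspect-word distributions θ^a_w, assuming a uniform aspect prior. The distributions and the smoothing floor below are toy assumptions:

```python
import math

# Tag a sentence with the aspect maximizing the posterior under the
# aspect-word distributions theta[a][w] (toy values here; the paper
# learns them with Twitter-LDA). A uniform aspect prior is assumed,
# and unseen words are smoothed with a small probability floor.
def tag_sentence(words, theta, floor=1e-6):
    def log_post(a):
        return sum(math.log(theta[a].get(w, floor)) for w in words)
    return max(theta, key=log_post)

theta = {
    "sound": {"vocals": 0.3, "bass": 0.2, "clear": 0.1},
    "price": {"cheap": 0.3, "price": 0.3, "worth": 0.1},
}
```

For instance, `tag_sentence(["the", "vocals", "are", "clear"], theta)` returns "sound", since two of the words have high probability under that aspect.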
Context Encoder. Our aspect generation module adopts an encoder-decoder architecture. We first develop the context encoder based on the information of user u, item i and rating score r. A look-up layer transforms the three kinds of information into low-dimensional vectors: let v_u, v_i and v_r denote the embeddings for u, i and r, respectively. Then, we feed the concatenated vector into a Multi-Layer Perceptron (MLP) and produce a single vectorized representation v_c ∈ R^{d_C}:

v_c = MLP([v_u ⊕ v_i ⊕ v_r]).    (2)

The embedding v_c summarizes the necessary information from the three kinds of context data. It is flexible to incorporate more kinds of useful information using a similar approach.

Aspect Decoder. We implement the aspect decoder with GRU-based RNNs. The hidden vector at the j-th step is computed via

h^A_j = GRU(h^A_{j-1}, v_{a_{j-1}}),    (3)

where v_{a_{j-1}} ∈ R^{d_A} is the embedding of the previous aspect label a_{j-1}. The hidden vector of the first time step is initialized by the encoding vector h^A_0 = v_c in Eq. 2. Then, the RNN recurrently computes hidden vectors and predicts the next aspect label (or ID) a_j. Additionally, we use an attention mechanism (Luong et al., 2015) to enhance the effect of context information. We compute the attention score of context c_k for the current time step of the decoder via

α_{j,k} = softmax(h^A_j{}ᵀ W_1 v_{c_k}),    (4)

where W_1 is a parameter matrix to learn, and the attention vector c̃_j is obtained by

c̃_j = Σ_k α_{j,k} v_{c_k}.    (5)

Finally, we compute the probability of the j-th aspect label Pr(a_j | a_{<j}, c) via

Pr(a_j | a_{<j}, c) = softmax(W_2 h^A_j + W_3 c̃_j + W_4 v_{a_{j-1}} + b_1),    (6)

where W_2, W_3 and W_4 are learnable parameter matrices and b_1 is a learnable vector.
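The attention step described above can be written in a few lines of dependency-free Python: the decoder state attends over the context embeddings with a bilinear score, the scores are normalized, and a weighted sum gives the attention vector. The bilinear form and all numerical values are illustrative assumptions, not the paper's exact parameterization:

```python
import math

def softmax(xs):
    m = max(xs)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(W, v):
    return [dot(row, v) for row in W]

# Attention over the context embeddings {v_c_k} (user, item, rating)
# for the current decoder state h, using a bilinear score h^T W1 v_c_k.
def attend(h, contexts, W1):
    scores = [dot(h, matvec(W1, v)) for v in contexts]
    alphas = softmax(scores)
    d = len(contexts[0])
    # attention vector: weighted sum of the context embeddings
    return [sum(a * v[i] for a, v in zip(alphas, contexts)) for i in range(d)]
```

With W1 set to the identity, a state aligned with the first context embedding receives most of the attention weight, and the output is a convex combination of the context vectors.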

Aspect-Aware Sketch Generation
A sketch is a symbol sequence describing the skeleton of a sentence, where each symbol denotes a semantic unit such as a POS tag or a bi-gram. Similar to the aspect decoder, we use GRU-based RNNs to implement the sketch decoder. As shown in Fig. 1, the sketches w.r.t. varying aspects are likely to be different. Hence, we need to consider the effect of aspect information when generating a sketch. Let h^S_{j,t} ∈ R^{d_{H_S}} denote the d_{H_S}-dimensional hidden vector at time step t for the j-th sketch, which is computed via

h^S_{j,t} = GRU(h^S_{j,t-1}, x^S_{j,t}),    (7)

where the input x^S_{j,t} is further defined as

x^S_{j,t} = v_{s_{j,t-1}} ⊙ v_{a_j},    (8)

where v_{s_{j,t-1}} ∈ R^{d_S} denotes the embedding of the previous sketch symbol s_{j,t-1}, v_{a_j} denotes the embedding of the current aspect, and "⊙" denotes the element-wise product. In this way, the aspect information can be utilized at each time step when generating an entire sketch. We set the initial hidden vector for the j-th sketch as the last embedding of the previous sketch:

h^S_{j,0} = v_{s_{j-1,n_{j-1}}}.    (9)

Specifically, we have h^S_{1,0} = v_c for initialization. Similar to Eq. 4 and 5, we can further use an attention mechanism for incorporating context information, and produce a context-enhanced sketch representation h̃^S_{j,t} for time step t. Finally, we compute Pr(s_{j,t} | s_{j,<t}, a_j, c) via

Pr(s_{j,t} | s_{j,<t}, a_j, c) = softmax(W_5 [h̃^S_{j,t} ⊕ v_{a_j}] + b_2),    (10)

where we incorporate the embedding v_{a_j} of the aspect a_j to enhance the aspect semantics.

Sketch-based Review Generation
When the aspect sequence and the sketches are learned, we can generate the word content of a review. Here, we focus on the generation process of a single sentence.
Sketch Encoder. To encode the sketch information, we employ a bi-directional GRU encoder (Schuster and Paliwal, 1997; Cho et al., 2014) to encode the sketch sequence s_{j,1:n_j} into a list of hidden vectors {←→h^S_{j,t}}_{t=1}^{n_j}, where ←→h^S_{j,t} denotes the encoder hidden vector for the t-th position of the j-th sketch. Different from the unidirectional sketch decoder (Eq. 7 and 8), we use a bi-directional encoder, since the complete sketch is available at this stage, to capture global information from the entire sketch.
Sentence Decoder. Consider the word generation at time step t. Let v_{y_{j,t-1}} ∈ R^{d_Y} denote the embedding of the previous word y_{j,t-1}. As input, we concatenate the current sketch representation and the embedding of the previous word:

x^Y_{j,t} = ←→h^S_{j,t} ⊕ v_{y_{j,t-1}},    (11)

where "⊕" denotes vector concatenation. Then, we compute the hidden vector h^Y_{j,t} ∈ R^{d_{H_Y}} for the j-th sentence via

h^Y_{j,t} = GRU(h^Y_{j,t-1}, x^Y_{j,t}).    (12)

Similar to Eq. 4 and 5, we further leverage the context to obtain an enhanced state representation h̃^Y_{j,t} using the attention mechanism. Then we transform it into an intermediate vector with the dimensionality of the vocabulary size:

z_{j,t} = W_6 [h̃^Y_{j,t} ⊕ v_{s_{j,t}}] + b_3,    (13)

where v_{s_{j,t}} is the embedding of the sketch symbol s_{j,t}. By incorporating the aspect-specific word distributions, we can apply the softmax function to derive the generative probability of the t-th word:

Pr(y_{j,t} | y_{j,<t}, s_{j,1:n_j}, a_j, c) = softmax(z_{j,t} + θ^{a_j})_{y_{j,t}},    (14)

where θ^{a_j} is the aspect-specific word distribution of aspect a_j over the vocabulary. In this way, we boost the importance of words that have large probabilities in the corresponding topic model. In this process, the generation of words is required to match the generation of sketch symbols slot by slot. For ease of understanding, we align words and sketch symbols by using the same indices for each slot. However, the length of the sketch is not necessarily equal to that of the generated sentence, since a sketch symbol can correspond to a multi-term phrase. When a sketch token is a term or a phrase (e.g., a bi-gram), we directly copy the original terms or phrases to the output slot(s).
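The aspect-boosting idea, adding the aspect-specific word probability to the decoder's output scores before the softmax, can be shown on a toy vocabulary. The logits and topic probabilities below are made-up values, not learned quantities:

```python
import math

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [x / s for x in e]

# Aspect-boosted output distribution: add the aspect-specific word
# probability theta_a[w] to the decoder score z[w] before the softmax,
# so that top words of the current aspect are favored.
vocab = ["vocals", "price", "the"]

def word_distribution(z, theta_a):
    return softmax([z[i] + theta_a.get(w, 0.0) for i, w in enumerate(vocab)])

z = [1.0, 1.0, 1.0]                  # flat decoder scores (illustrative)
theta_sound = {"vocals": 0.5}        # "vocals" is a top word of aspect "sound"
probs = word_distribution(z, theta_sound)
```

With flat decoder scores, the boost alone makes "vocals" the most probable word under the sound aspect, while the distribution still sums to one.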

Training and Inference
Integrating Eq. 6, 10 and 14 into Eq. 1, we derive the joint model for review generation. We take the log-likelihood of Eq. 1 over all training reviews as the objective function. The joint objective is difficult to optimize directly. Hence, we incrementally train the three parts, and fine-tune the shared or dependent parameters of the different modules with the joint objective. For training, we directly use the real aspects and sketches to learn the model parameters. For inference, we apply our model in a pipelined way: we first infer the aspects, then predict the sketches, and finally generate the words using the inferred aspects and sketches. During inference, for sequence generation, we apply beam search with beam size 4. In the three sequence generation modules of our model, we incorporate two special symbols, START and END, to indicate the start and end of a sequence. Once the END symbol is generated, the generation process stops. Besides, we set the maximum generation lengths for the aspect sequence and the sketch sequence to 5 and 50, respectively. In the training procedure, we adopt the Adam optimizer (Kingma and Ba, 2014). To avoid overfitting, we adopt dropout with a rate of 0.2. More implementation details can be found in Section 5.1 (see Table 2).
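The inference-time decoding can be illustrated with a minimal beam search (beam size 4, START/END symbols as described above). The per-step distribution below is a toy stand-in for the trained decoder:

```python
import math

# Minimal beam search sketch (beam size 4, as used at inference time).
# `step_probs(prefix)` returns {token: prob} for the next token.
def beam_search(step_probs, beam_size=4, max_len=5, end="END"):
    beams = [(["START"], 0.0)]           # (sequence, log-probability)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq[-1] == end:           # finished hypotheses are kept as-is
                candidates.append((seq, score))
                continue
            for tok, p in step_probs(seq).items():
                candidates.append((seq + [tok], score + math.log(p)))
        beams = sorted(candidates, key=lambda x: x[1], reverse=True)[:beam_size]
        if all(seq[-1] == end for seq, _ in beams):
            break
    return beams[0][0]

def toy_step(prefix):
    # hypothetical per-step distribution over {"good", "END"}
    return {"good": 0.6, "END": 0.4} if prefix[-1] != "good" else {"END": 1.0}
```

Here `beam_search(toy_step)` returns the highest-scoring finished hypothesis, ["START", "good", "END"]; the real decoders score aspect labels, sketch symbols, or words the same way.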

Experiments
In this section, we first set up the experiments, and then report the results and analysis.

Experimental Setup
Datasets. We evaluate our model on three real-world review datasets: the AMAZON Electronics dataset (He and McAuley, 2016), the YELP Restaurant dataset, and the RATEBEER dataset (McAuley et al., 2012). We convert all text into lowercase and perform tokenization using NLTK. We keep the words occurring at least ten times as vocabulary words. We discard reviews with more than 100 tokens, and remove users and products (or items) occurring fewer than five times. The reviews of each dataset are randomly split into training, validation and test sets (80%/10%/10%). The detailed statistics of the three datasets are summarized in Table 1.

Aspect and Sketch Extraction. After preprocessing, we use the Twitter-LDA model (Zhao et al., 2011) to automatically learn the aspects and aspect keywords. The numbers of aspects are set to 10, 5, and 5 for the three datasets, respectively, selected using the perplexity score on the validation set. By inspecting the top aspect words, we find the learned aspects are very coherent and meaningful. For convenience, we ask a human labeler to annotate each aspect learned by the topic model with an aspect label. Note that these aspect labels are only for ease of presentation and are not used in our model. With the topic model, we further tag each sentence with the aspect label that gives the maximum posterior probability conditioned on its words. To derive the sketches, we first extract the 200 most popular bi-grams and tri-grams by frequency and replace their occurrences with n-gram IDs. Furthermore, we keep the words ranked in the top 50 positions of an aspect, and replace the occurrences of the remaining words with their Part-of-Speech tags. We also keep the top 50 frequent words in the entire text collection, such as the background words "I" and "am". In this way, for each review, we obtain a sequence of aspect labels; for each sentence in the review, we obtain a sequence of sketch symbols.
Aspect sequences and sketch sequences are only available during the training process.
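The sketch-derivation rules above can be condensed into a short function. The n-gram table, word lists, and POS lookup below are toy stand-ins (the paper uses corpus frequency statistics and NLTK's POS tagger):

```python
# Toy sketch extraction: replace frequent n-grams with a single symbol,
# keep top aspect words and globally frequent words, and back off to a
# POS tag for everything else. All tables are illustrative stand-ins.
TOP_BIGRAMS = {("pretty", "well"): "pretty_well"}
TOP_ASPECT_WORDS = {"sound": {"bass", "treble"}}
TOP_GLOBAL_WORDS = {"the", "are", "i", "am"}
POS = {"vocals": "NN", "nice": "JJ", "speaker": "NN"}

def to_sketch(tokens, aspect):
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) in TOP_BIGRAMS:
            out.append(TOP_BIGRAMS[(tokens[i], tokens[i + 1])])
            i += 2                                 # the bi-gram covers two slots
        elif tokens[i] in TOP_ASPECT_WORDS.get(aspect, set()) \
                or tokens[i] in TOP_GLOBAL_WORDS:
            out.append(tokens[i])                  # keep frequent / aspect word
            i += 1
        else:
            out.append(POS.get(tokens[i], "NN"))   # back off to a POS tag
            i += 1
    return out
```

Under these toy tables, the Figure 2 example `to_sketch(["the", "vocals", "are", "pretty", "well"], "sound")` yields ["the", "NN", "are", "pretty_well"].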
Baseline Models. We compare our model against a number of baseline models:

• gC2S (Tang et al., 2016): It adopts an encoder-decoder architecture to generate review texts conditioned on context information through a gating mechanism.
• Attr2Seq (Zhou et al., 2017): It adopts an attention-enhanced attribute-to-sequence architecture to generate reviews from input attributes.
• TransNets (Catherine and Cohen, 2018): It applies a student-teacher style architecture for review generation, encoding the reviews of a user and an item into a text-related representation that is regularized at training time to be similar to the latent representation of the actual review.
• ExpansionNet (Ni and McAuley, 2018): It uses an encoder-decoder framework to generate personalized reviews by incorporating short phrases (e.g., review summaries, product titles) provided as input and introducing aspect-level information (e.g., aspect words).
• SeqGAN (Yu et al., 2017): It regards the generative model as a stochastic parameterized policy and uses Monte Carlo search to approximate the state-action value. The discriminator is a binary classifier to evaluate the sequence and guide the learning of the generative model.
• LeakGAN (Guo et al., 2018): The generator is built upon a hierarchical reinforcement learning architecture consisting of a high-level module and a low-level module, and the discriminator is a CNN-based feature extractor. The advantage is that this model can generate high-quality long text by introducing a leakage mechanism that passes discriminator features to the generator.
Among these baselines, gC2S, Attr2Seq and TransNets are context-aware generation models with different implementations; ExpansionNet introduces external information such as aspect words; and SeqGAN and LeakGAN are GAN-based text generation models. The original SeqGAN and LeakGAN are designed for general sequence generation without considering context information (e.g., user, item, rating). The learned aspect keywords are provided as input for both ExpansionNet and our model. All the methods have several parameters to tune, and we employ the validation set to optimize the parameters of each method. To reproduce the results of our model, we report the parameter settings used throughout the experiments in Table 2. Our code is available at https://github.com/turboLJY/Coarse-to-Fine-Review-Generation.
Evaluation Metrics. To evaluate the performance of different methods on automatic review generation, we adopt six evaluation metrics: Perplexity, BLEU-1/BLEU-4, and ROUGE-1/ROUGE-2/ROUGE-L. Perplexity is the standard measure for evaluating language models;
BLEU (Papineni et al., 2002) measures the ratios of the co-occurrences of n-grams between the generated and real reviews; ROUGE (Lin, 2004) measures the review quality by counting the overlapping n-grams between the generated and real reviews.
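At the unigram level, the two n-gram metrics reduce to precision and recall over the overlapping tokens, which a few lines make concrete (full BLEU/ROUGE implementations add higher n-gram orders, multi-reference clipping, and a brevity penalty):

```python
from collections import Counter

# Illustrative unigram overlap metrics: BLEU-1 is the precision of the
# generated unigrams against the reference; ROUGE-1 is the recall.
def unigram_overlap(generated, reference):
    g, r = Counter(generated), Counter(reference)
    overlap = sum((g & r).values())          # clipped co-occurrence count
    bleu1 = overlap / max(len(generated), 1)
    rouge1 = overlap / max(len(reference), 1)
    return bleu1, rouge1
```

For example, comparing "the sound is great" against the reference "the sound was great" gives 3 overlapping unigrams out of 4 on both sides, i.e. BLEU-1 = ROUGE-1 = 0.75.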

Results and Analysis
In this subsection, we conduct a series of experiments to examine the effectiveness of the proposed model on the review generation task.
Main Results. Table 3 presents the performance of different methods on automatic review generation. We can make the following observations. First, among the three context-based baselines, gC2S and Attr2Seq perform better than TransNets. The two models have similar network architectures, which are simpler than TransNets, and we find they more easily obtain stable performance on large datasets. Second, GAN-based methods work better than the above baselines, especially LeakGAN. LeakGAN is specially designed for generating long text, and we adapt it to our task by incorporating context information. Third, ExpansionNet performs best among all the baseline models. A major reason is that it incorporates external knowledge such as review summaries, product titles and aspect keywords. Finally, our model outperforms all the baselines by a large margin. These baseline methods perform the generation in a single stage. As a comparison, we use a multi-stage process to gradually generate long and informative reviews in a coarse-to-fine way. Our model is able to better utilize aspect semantics and syntactic sketches, which is the key to the performance improvement over the baselines. Overall, the three datasets show similar findings. In what follows, we report the results on the AMAZON dataset due to space limits, and select the two best baselines, ExpansionNet and LeakGAN, as reference methods.

Table 4. Ablation analysis on the AMAZON dataset.
Ablation Analysis. The major novelty of our model is that it incorporates two specific modules to generate aspects and sketches, respectively. To examine the contribution of the two modules, we compare our model with two variants, each removing one of the modules. We present the BLEU-1 and ROUGE-1 results of our model and its two variants in Table 4. As we can see, both components are useful for improving the final performance, and the sketch generation module seems more important to our task. In our model, the aspect generation module is used to cover aspect semantics and generate informative reviews; the sketch generation module is able to utilize syntactic templates to improve generation fluency, especially for long sentences. The above experiments evaluate the usefulness of the two modules based on the overall generation quality. Next, we verify their functions with two specific experiments, namely aspect coverage and fluency evaluation.
Aspect Coverage Evaluation. A generated review is informative if it effectively captures the semantic information of the real review. Following (Ni and McAuley, 2018), we examine the aspect coverage of different models. Recall that we have used topic models to tag each sentence with an aspect label (or ID). We analyze the average number of aspects in real and generated reviews, and compute on average how many aspects of the real reviews are covered by the generated reviews. We consider a review as covering an aspect if any of the top 50 words of that aspect exists in the review. In Table 5, we first see an interesting observation: LeakGAN is able to generate more aspects but covers fewer real aspects. As a comparison, ExpansionNet and our model perform better than LeakGAN by covering more real aspects, since the two models use the aspect information to instruct the review generation. Our model is better than ExpansionNet by characterizing the aspect transition sequences. These results indicate the usefulness of the aspect generation module in capturing more semantic information related to a review.
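The coverage statistic can be computed as follows; the aspect word lists are toy stand-ins for the top-50 words per aspect extracted by the topic model:

```python
# Aspect coverage check used in the evaluation: a review "covers" an
# aspect if any of that aspect's top words appears in it. The word
# lists below are illustrative stand-ins.
def covered_aspects(review_tokens, top_words_by_aspect):
    tokens = set(review_tokens)
    return {a for a, words in top_words_by_aspect.items() if tokens & words}

def coverage_ratio(real_tokens, gen_tokens, top_words_by_aspect):
    real = covered_aspects(real_tokens, top_words_by_aspect)
    gen = covered_aspects(gen_tokens, top_words_by_aspect)
    # fraction of the real review's aspects covered by the generated one
    return len(real & gen) / max(len(real), 1)
```

For instance, if the real review mentions both sound and price words while the generated review only mentions a sound word, the coverage ratio is 0.5.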
Fluency Evaluation. We continue to evaluate the usefulness of the sketch generation module in improving the fluency of the generated text. Following (Xu et al., 2018a), we conduct a fluency evaluation to examine how likely the generated text is to have been produced by a human. We randomly choose 200 samples from the test set. A sample contains the input contexts (i.e., user, item, rating) and the texts generated by different models. Since it is difficult to develop automatic methods for accurate fluency evaluation, we invite two human annotators (excluding the authors of this paper) with good knowledge of the electronics review domain to assign scores to the generated reviews. They are required to assign a score to a generated (or real) review according to a 5-point Likert scale on fluency, where 5 means "very satisfying" and 1 means "very terrible". We average the two annotators' scores over the 200 inputs. The results are shown in Table 6. We can see that our model achieves the highest fluency score among the automatic methods. By using sketches, our model is able to leverage the syntactic patterns learned from available reviews. The Cohen's kappa coefficients are above 0.7, indicating high agreement between the two human annotators.

Qualitative Analysis
In this part, we perform a qualitative analysis of the generated reviews. We present three sample reviews generated by our model in Table 7. As we can see, our model covers most of the major aspects (with many overlapping aspect keywords) of the real reviews. Although some generated sentences do not follow the exact syntactic structures of the real reviews, they are very readable. Our model is able to generate aspect-aware sketches, which are very helpful for instructing the generation of the word content.
With the aspect and sketch generation modules, our model is able to produce informative reviews consisting of multiple well-structured sentences. Another interesting observation is that the polarities of the generated texts also correspond to their real rating scores, since the rating score has been modeled in the context encoder.

Table 7. Samples of the generated reviews by our model. The three reviews, with rating scores of 5 (positive), 3 (neutral), and 1 (negative), are from the AMAZON, YELP and RATEBEER datasets, respectively. For privacy, we omit the UIDs and PIDs. For ease of reading, the colored aspect labels are manually created corresponding to the aspect IDs predicted by our model. Important overlapping terms between real and generated reviews are underlined.

Conclusion

This paper presented a novel review generation model using an aspect-aware coarse-to-fine generation process. Unlike previous methods, our model decomposed the generation process into three stages focusing on different goals. We conducted extensive experiments on three real-world review datasets. The results have demonstrated the effectiveness of our model in terms of overall generation quality, aspect coverage, and fluency. As future work, we will consider integrating more kinds of syntactic features from linguistic analysis, such as dependency parsing.