Examining the Ordering of Rhetorical Strategies in Persuasive Requests

Interpreting how persuasive language influences audiences has implications across many domains, such as advertising, argumentation, and propaganda. Persuasion relies on more than a message's content: the arrangement of the message itself (i.e., the ordering of specific rhetorical strategies) also plays an important role. To examine how strategy orderings contribute to persuasiveness, we first utilize a Variational Autoencoder model to disentangle content and rhetorical strategies in textual requests from a large-scale loan request corpus. We then visualize the interplay between content and strategy through an attentional LSTM that predicts the success of textual requests. We find that specific (orderings of) strategies interact uniquely with a request's content to impact success rate, and thus the persuasiveness of a request.


Introduction
Persuasion has been shown to be a powerful tool for catalyzing beneficial social and political changes (Hovland et al., 1953) or enforcing propaganda as a tool of warfare (Finch, 2000). Modeling the persuasiveness of text has received much recent attention in the language community (Althoff et al., 2014; Tan et al., 2016; Habernal and Gurevych, 2017; Srinivasan et al., 2019). Numerous qualitative studies have been conducted to understand persuasion, from explorations of rhetoric in presidential campaigns (Bartels, 2006; Popkin and Popkin, 1994) to the impact of a communicator's likability on persuasiveness (Chaiken, 1980). Studies of persuasion and argumentation that analyze textual-level features (e.g., n-grams, independent rhetorical strategies) to gauge efficacy have also garnered recent attention (Althoff et al., 2014; Habernal and Gurevych, 2016a,b, 2017; Yang and Kraut, 2017). Of particular interest is Morio et al. (2019), which identified sentence placements for individual rhetorical strategies in a request. Other research analyzed how different persuasive strategies are more effective on specific stances and personal backgrounds (Durmus and Cardie, 2018, 2019).

Table 1: Persuasion strategies, their frequency in our annotated data, definitions, and example phrases.

  Concreteness (39%): Use concrete details in request - "I need $250 to purchase fishing rods"
  Reciprocity (18%): Assure user will repay giver - "I will pay 5% interest to you"
  Impact (12%): Highlight the impact of a request - "This loan will help teach students"
  Credibility (8%): Use credentials to establish trust - "I have repaid all of my prior loans"
  Politeness (16%): Use polite language - "Highly appreciated."
  Other (7%): None of the above
However, prior work has mainly focused on identifying overall persuasiveness of textual content or analyzing components of persuasion affecting a request. These works largely ignore ordering of specific strategies, a key canon of rhetoric that has a large impact on persuasion effectiveness (Borchers and Hundley, 2018;Cicero, 1862). In the context of online communities, identifying where/how effective orderings occur may highlight qualities of persuasive requests and help users improve their rhetorical appeal. Furthermore, highlighting ineffective orderings may help users avoid pitfalls when framing their posts.
To fill this gap, we propose to investigate particular orderings of persuasive strategies that affect a request's persuasiveness and to identify situations where these orderings are optimal. Specifically, we take a closer look at strategies (Table 1) and their orderings in requests from the subreddit/online lending community r/Borrow (https://www.reddit.com/r/borrow/), and utilize them to examine research questions like: When should requesters follow strategy orderings (e.g., ending loan requests with politeness) that rely on social norms? Should requesters worry less about orderings and more about content? Altogether, this work examines orderings, an overlooked rhetorical canon, and how they interact with a request's persuasiveness in an online lending domain.

Figure 1: Model overview. Step 1 deconstructs sentences into latent content and strategy vectors, using a semi-supervised VAE; Step 2 combines content and strategy vectors at the sentence level, using sentence-level attention; Step 3 uses an LSTM to model the sentences in a request, then combines sentences using request-level attention. Finally, Step 4 predicts our binary persuasiveness label using a multilayer perceptron.

Our contributions include:
1. Identifying specific strategy orderings that correlate with requests' persuasiveness.
2. Highlighting the interplay between content and strategy with respect to the persuasiveness of a request.
3. Perturbing underperforming strategy orderings to help improve persuasiveness of requests via a set of introduced edit operations.
Code for our analyses can be found at https://github.com/GT-SALT/Persuasive-Orderings.

Dataset
Our Borrow dataset consists of 49,855 different loan requests in English, scraped from the r/Borrow subreddit. r/Borrow is a community that financially assists users with loans ranging from small short-term ones to larger long-term ones. Every request has a binary label indicating whether the loan was successful. The average request success rate is 48.5%. We randomly sampled a subset (5%) of the whole corpus and annotated sentence-level labels indicating persuasive strategies; the label set is shown in Table 1. We recruited four research assistants to label persuasion strategies for each sentence.
Definitions and examples of the different persuasion strategies were provided, together with a training session where we asked annotators to annotate a number of example sentences and walked them through any disagreed annotations. To assess the reliability of the annotated labels, we then asked them to annotate a small subset of 100 requests from our corpus, reaching a Cohen's Kappa of .623, indicating moderate annotation agreement (McHugh, 2012). Annotators then annotated the rest of the corpus independently. In total, we gathered 900 requests with sentence-level labels and 48,155 requests without sentence-level labels as our training set, 400 requests with sentence-level labels as the validation set, and 400 requests with sentence-level labels as the test set.
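For reference, the agreement statistic reduces to a few lines. Below is a minimal pure-Python sketch of Cohen's kappa for two annotators; the label values are illustrative, not taken from our annotation files:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa between two annotators' label sequences:
    kappa = (p_o - p_e) / (1 - p_e)."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators label the same.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement from each annotator's marginal label distribution.
    ca, cb = Counter(labels_a), Counter(labels_b)
    p_e = sum(ca[l] * cb[l] for l in set(labels_a) | set(labels_b)) / (n * n)
    return 1.0 if p_e == 1.0 else (p_o - p_e) / (1 - p_e)
```

A kappa of .623 falls in the .60-.79 band that McHugh (2012) characterizes as moderate agreement.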

Modeling
Persuasive sentences are combinations of content (what to include in persuasive text) and strategy (how to be persuasive). To explore the interplay between content and strategy orderings inside requests, we followed Kingma and Welling (2014), utilizing a semi-supervised Variational Autoencoder (VAE) trained on both labeled and unlabeled sentences to disentangle sentences into strategy and content representations. Specifically, for every input sentence x, we assumed the graphical model p(x, z, l) = p(x|z, l)p(z)p(l), where z is a latent "content" variable and l is the persuasive strategy label. The semi-supervised VAE fits an inference network q(z|x, l) to infer the latent variable z, a generative network p(x|l, z) to reconstruct the input sentence x, and a discriminative network q(l|x) to predict the persuasive strategy l, while optimizing an evidence lower bound (ELBO) similar to that of a standard VAE. We report a Macro F-1 score of 0.75 on the test set for sentence-level classification, suggesting reasonable performance compared to an LSTM baseline with a Macro F-1 of 0.74. Then, for each request M = {x_0, x_1, ..., x_L} consisting of L sentences that a user posted to receive a loan, we utilized our trained semi-supervised VAE to represent each sentence x_i in M with content and strategy variables (z_i, l_i).

With the intent of interpreting the relative importance of strategy orderings and content, we built an attentional LSTM trained to predict the success of a request. For each disentangled sentence (z_i, l_i) in a request, we first applied attention over z_i and l_i at the sentence level, dynamically combining them into a sentence representation γ_i = α_{z,i} z_i + α_{l,i} l_i, where the attention weights are computed from randomly initialized context vectors u that were jointly learned with weights W. We computed the request representation v through an LSTM that encoded the sentence representations γ_i for each request, and a request-level attention that aggregated information across sentences.
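The sentence-level fusion can be sketched as follows. This is a simplification, assuming z_i and l_i have already been projected to a common dimension and letting two scalar scores stand in for the learned u/W scoring functions:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse(z_i, l_i, score_z, score_l):
    """Combine a content vector z_i and a strategy vector l_i into one
    sentence representation gamma_i via two-way attention weights."""
    a_z, a_l = softmax([score_z, score_l])
    return [a_z * z + a_l * l for z, l in zip(z_i, l_i)]
```

The two weights are complementary, which is what later lets us read attention on the strategy side as attention taken away from content.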
Overall persuasiveness is predicted from the request representation v as ŷ = MLP(v), trained with a regular cross-entropy loss. Macro-averaged performances for request-level classification on several baseline classifiers are shown in Table 2. Our attentional model (VAE + LSTM) achieves comparable performance to BERT, while providing the additional benefit of disentangling content and strategy. This helps yield relative measures of importance for content and strategies.

Interplay of Ordering and Content
To examine how different strategy orderings contribute to the overall persuasiveness of requests, we identified relationships between strategy orderings and success rate by analyzing the learned attention weights over strategy orderings and content in our model. Motivated by the "Rule of Three" prevalent in persuasive writing (Clark, 2016), we utilized triplets as our unit of strategy analysis. The most important strategy triplet in each request was considered to be its "persuasion strategy triplet." Pinpointing strategy triplets involved finding the most important consecutive three sentences ((z_{m-1}, l_{m-1}), (z_m, l_m), (z_{m+1}, l_{m+1})) in a request, based on the highest request-level attention weight (α_d) associated with a sentence. The strategies (l_{m-1}, l_m, l_{m+1}) associated with these sentences were defined as the aforementioned strategy triplets. We noted that the cumulative request-level attention (α_d) placed on strategy triplets had µ = .98 and σ = .07, indicating that a single triplet carried most of the responsibility for the persuasiveness of a request. For our analysis, we also defined the success rate of a strategy triplet as the average success rate of the requests it belongs to, irrespective of how important it is to a request (ignoring α_d). To control for infrequent triplets, we defined rare triplets as those constituting less than 0.5% of our dataset. We filtered out these rare triplets, along with triplets containing the undefined "Other" strategy. Finally, we averaged the sentence-level attention weights α_s on each strategy representation in a triplet to represent the importance of an ordering pattern relative to content. Figure 2 plots the sentence-level attention weights for each strategy triplet and its corresponding success rate.
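The triplet-pinpointing step can be sketched in plain Python. The strategy abbreviations and attention weights below are illustrative; SOS/EOS markers pad the request boundaries so that a triplet centered on the first or last sentence is still well defined:

```python
def strategy_triplet(strategies, request_attn):
    """Return (l_{m-1}, l_m, l_{m+1}) centered on the sentence with the
    highest request-level attention weight; positions falling outside
    the request are padded with SOS/EOS markers."""
    m = max(range(len(request_attn)), key=request_attn.__getitem__)
    padded = ["SOS"] + list(strategies) + ["EOS"]
    return tuple(padded[m:m + 3])
```

For example, a request whose final politeness sentence dominates the request-level attention yields the (Po, Po, EOS) pattern discussed below.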
We made three discoveries. (1) Success rate and triplet attention were strongly negatively correlated (R = −.90, p < .0001). Therefore, the model paying larger attention to strategy triplets may communicate a request's lack of persuasiveness.
(2) Attention from around the (SOS, Im, Re) triplet onward decreased substantially, suggesting that content (complementary to strategy attention) played an increasingly larger role in determining persuasiveness for strategy triplets above the average success rate. (3) Under-performing strategy orderings actively decreased request persuasiveness, sabotaging success; an under-performing strategy ordering paired with any content often resulted in reduced persuasiveness. On the contrary, simply having an over-performing strategy with respect to the average success rate does not appear to affect a request, due to reduced attention. We also manually examined around 300 examples, with representative ones shown in Table 3. Generally, over-performing triplets had little effect on success rates due to reduced strategy attention, whereas under-performing triplets were relatively highly attended to. Below, we explain two general situations that highlight an over-performing and an under-performing strategy pattern from a social science perspective.

Common Persuasive Patterns

"Please sir, I want some more."

A common pattern among the top 5 strategy triplets is the use of politeness. Oftentimes, the politeness triplet appears at the end of the request and is usually paired with some form of reciprocity. From Figure 2, we observed that the best strategy-(Po, Po, EOS)-is a triplet with a higher success rate than the average. From a social science perspective, ending a request politely engenders a sense of liking and creates connections between the audience and the requester, consistent with prior work showing that politeness is a social norm associated with ending a conversation (Schegloff and Sacks, 1973). An example is shown in the first row of Table 3. However, this strategy alone does not make a request persuasive, as its associated strategy attention is relatively low. Users who end requests politely may be likely to put effort into content, aligning with our success rate observations.
Adding to Althoff et al. (2014), we observed that users who exercise social "strategy" norms by closing conversations politely are shifting importance of a request from strategy to content. Thus, content must still be optimal for a request to be persuasive.

"It's My Money & I Need It Now."
On the contrary, if a triplet consists mostly of concreteness, it performs far below average. For instance, triplets like (Co, Co, Co) often came up in examples that read as demanding, as shown in Table 3. From a social science perspective, emotional appeal in arguments is key to framing aspects of a request and helps soften the attention placed on facts (Walton, 1992; Macagno and Walton, 2014; Oraby et al., 2015). In the context of our dataset-a lending platform where concreteness consists mostly of demands-a lack of emotive argumentation may cause an audience to focus on the demands themselves, hurting requests that are concrete but emotionless.

Editing Operations
Based on the effectiveness of the different persuasion patterns we discovered, this section examines improving underperforming requests by editing their persuasion strategy patterns. Here we define three editing operations: (1) Insertion, which appends an over-performing ending triplet to a request; (2) Deletion, which removes an under-performing triplet; and (3) Swap, which replaces an under-performing triplet with an over-performing one. Table 4 summarizes the results.
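As a sketch, the three operations can be expressed over a request's strategy sequence. The specific triplets below are illustrative assumptions, not the exact ones used in our experiments (abbreviations: Po = Politeness, Co = Concreteness, Re = Reciprocity, EOS = end of request):

```python
# Illustrative over-performing ending triplet; actual rankings come
# from the full list in Table 6.
GOOD_ENDING = ("Po", "Po", "EOS")

def insert_triplet(strategies, triplet=GOOD_ENDING):
    """Insertion: append an over-performing ending triplet
    (the EOS marker itself adds no sentence)."""
    return strategies + [s for s in triplet if s != "EOS"]

def delete_triplet(strategies, start):
    """Deletion: remove the under-performing triplet at `start`."""
    return strategies[:start] + strategies[start + 3:]

def swap_triplet(strategies, start, triplet=("Re", "Po", "Po")):
    """Swap: replace the under-performing triplet with an effective one."""
    return strategies[:start] + list(triplet) + strategies[start + 3:]
```

Note that these operations edit strategies only; the content of the surrounding sentences is left untouched, which is exactly the limitation the editing results below expose.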

Editing Results
For Insertion, the underperforming requests did not improve by simply inserting a good ending triplet, partially because the underperforming request already consisted of a sabotaging strategy; furthermore, audiences are likely to generalize impressions from an underperforming strategy to the entire request (Ambady and Rosenthal, 1992).
Deletion of the poor strategy triplets boosted the persuasiveness of a request by mitigating the sabotaging effects of a non-persuasive strategy; however, since the content of the remaining request is mainly unedited, these requests still have lower success rates than naturally occurring triplets.
Swapping the underperforming triplets with effective strategy triplets generated similar persuasiveness to deletion, suggesting again that the presence of an overperforming strategy triplet does not improve the persuasiveness of a request (unlike the sabotaging nature of an underperforming triplet); instead, it signals that a given request naturally contains good content, since users who put effort into following social norms will likely work hard on the content. This may explain why requests that naturally contain overperforming triplets have higher success rates than our edited examples. Simply swapping strategies does not improve the content, and thus the persuasiveness, to a similar extent.

Conclusion & Future Work
In this work, we highlight important strategy orderings for request persuasiveness, and surface complex relationships between content and strategy at different request success rates. Finally, we notice improvements in persuasiveness by editing underperforming strategies. For future work, we plan to explore different techniques for explainability other than attention and to compare effective strategies beyond the triplet level across different datasets. We also aim to look at the presence of different strategies across multi-modal settings: does introducing a new modality affect how effective/ineffective strategies are expressed? Furthermore, we plan to identify and compare different strategies across domains, as our work is limited to lending platforms; we expect that different domains would highlight different strategies.

A Additional Model and Training Details
For all models, hyperparameters were manually tuned using macro-averaged F-1 scores and early stopping as our selection criteria. Models were trained using an NVIDIA RTX 2080 Ti. For our optimizers, unless otherwise specified, we use AdamW with learning rate 1e-3, betas (0.9, 0.999), eps 1e-08, and weight decay 0.01. We used PyTorch (Paszke et al., 2019) and HuggingFace (Wolf et al., 2019) for all deep learning work.

VAE + LSTM:
We minimize the standard semi-supervised VAE objective for our graphical model: for a labeled sentence (x, l), the negative evidence lower bound -E_{q(z|x,l)}[log p(x|z,l)] + KL(q(z|x,l) || p(z)) - log p(l); for an unlabeled sentence, the same bound with l marginalized out under q(l|x); plus a classification loss -log q(l|x) on labeled sentences. We also use LSTMs for the inference q(z|x,l), generative p(x|l,z), and discriminative q(l|x) networks of our VAE. We set the size of z to 64; the size of l is defined by the number of unique strategies in our dataset: 6. We use the reparameterization trick of Kingma and Welling (2014) to backpropagate through z, and the Gumbel softmax (Jang et al., 2016) to model l continuously. Finally, we use CBOW Word2Vec embeddings (Mikolov et al., 2013) of size 128 to learn initial word embeddings. Our VAE was trained for 100 epochs, and our LSTM was trained for 50 epochs.
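The Gumbel-softmax relaxation that keeps l differentiable can be sketched as follows; this is a minimal pure-Python version, whereas the real model applies it to the logits of q(l|x) inside the computation graph:

```python
import math
import random

def gumbel_softmax(logits, tau=1.0, rng=random):
    """Relaxed (differentiable) sample over strategy labels:
    y_i = softmax((logit_i + g_i) / tau), with g_i ~ Gumbel(0, 1)."""
    # Sample Gumbel noise; clamp u away from 0 to avoid log(0).
    gumbels = [-math.log(-math.log(max(rng.random(), 1e-12)))
               for _ in logits]
    scores = [(lg + g) / tau for lg, g in zip(logits, gumbels)]
    m = max(scores)  # numerically stable softmax
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]
```

Lower temperatures tau push the sample toward a near-one-hot strategy vector, while higher temperatures keep it smooth for training.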
BERT Baseline: For our BERT baseline, we finetune the smaller BERT Base Cased model using the AdamW optimizer with a learning rate of 2e-5 and an Adam epsilon of 1e-8. Our BERT model is imported from HuggingFace's transformers repository and was finetuned for 10 epochs.
Random Baseline: We use the dummy classifier with the random setting provided in Scikit-Learn (Pedregosa et al., 2011). Validation performance across all classifiers can be seen in Table 5.

B Persuasion Strategy Triplets
A full ranked list of persuasion strategy triplets, along with average strategy attention and success rates can be found in Table 6.