Towards Multimodal Sarcasm Detection (An _Obviously_ Perfect Paper)

Sarcasm is often expressed through several verbal and non-verbal cues, e.g., a change of tone, overemphasis in a word, a drawn-out syllable, or a straight-looking face. Most of the recent work in sarcasm detection has been carried out on textual data. In this paper, we argue that incorporating multimodal cues can improve the automatic classification of sarcasm. As a first step towards enabling the development of multimodal approaches for sarcasm detection, we propose a new sarcasm dataset, the Multimodal Sarcasm Detection Dataset (MUStARD), compiled from popular TV shows. MUStARD consists of audiovisual utterances annotated with sarcasm labels. Each utterance is accompanied by its context of historical utterances in the dialogue, which provides additional information on the scenario in which the utterance occurs. Our initial results show that the use of multimodal information can reduce the relative error rate of sarcasm detection by up to 12.9% in F-score when compared to the use of individual modalities. The full dataset is publicly available at https://github.com/soujanyaporia/MUStARD.


Introduction
Sarcasm plays an important role in daily conversations by allowing individuals to express their intent to mock or display contempt. It is achieved by using irony that reflects a negative connotation. For example, in the utterance Maybe it's a good thing we came here. It's like a lesson in what not to do, the sarcasm is explicit, as the speaker frames learning a lesson in a positive light when in reality she means it in a negative way. However, there are also scenarios where sarcasm lacks explicit linguistic markers, thus requiring additional cues that can reveal the speaker's intentions. For instance, sarcasm can be expressed through a combination of verbal and non-verbal cues, such as a change of tone, overemphasis in a word, a drawn-out syllable, or a straight-looking face. Moreover, sarcasm detection involves finding linguistic or contextual incongruity, which in turn requires further information, either from multiple modalities (Schifanella et al., 2016; Mishra et al., 2016a) or from the context history of a dialogue.
This paper explores the role of multimodality and conversational context in sarcasm detection and introduces a new resource to further enable research in this area. More specifically, our paper makes the following contributions: (1) We curate a new dataset, MUStARD, for multimodal sarcasm research with high-quality annotations, including both multimodal and conversational context features; (2) We exemplify various scenarios where the incongruity in sarcasm is evident across different modalities, thus stressing the role of multimodal approaches in solving this problem; (3) We introduce several baselines and show that multimodal models are significantly more effective than their unimodal variants; and (4) We provide preceding turns in the dialogue, which act as context information. Consequently, we surmise that this property of MUStARD enables a new sub-task for future work: sarcasm detection in conversational context.
The rest of the paper is organized as follows. Section 2 summarizes previous work on sarcasm detection using both unimodal and multimodal sources. Section 3 describes the dataset collection, the annotation process, and the types of sarcastic situations covered by our dataset. Section 4 explains how we extract features for the different modalities. Section 5 shows the experimental work around the new dataset while Section 6 analyzes it. Finally, Section 7 offers conclusions and discusses open problems related to this resource.

Figure 1: Sample sarcastic utterance in the dataset along with its context and transcript.

Related Work
Automated sarcasm detection has gained increased interest in recent years. It is a widely studied linguistic device whose significance is seen in sentiment analysis and human-machine interaction research. Various research projects have approached this problem through different modalities, such as text, speech, and visual data streams.
Sarcasm in Text: Traditional approaches for detecting sarcasm in text have considered rule-based techniques (Veale and Hao, 2010), lexical and pragmatic features (Carvalho et al., 2009), stylistic features (Davidov et al., 2010), situational disparity (Riloff et al., 2013), incongruity (Joshi et al., 2015), or user-provided annotations such as hashtags (Liebrecht et al., 2013). Resources in this domain are collected using Twitter as a primary data source and are annotated using two main strategies: manual annotation (Riloff et al., 2013; Joshi et al., 2016a) and distant supervision through hashtags (Davidov et al., 2010; Abercrombie and Hovy, 2016). Other research leverages context to acquire shared knowledge between the speaker and the audience (Wallace et al., 2014; Bamman and Smith, 2015). A variety of contextual features have been explored, including the speaker's background and behavior on online platforms (Rajadesingan et al., 2015), embeddings of expressed sentiment and the speaker's personality traits (Poria et al., 2016), learning of user-specific representations (Wallace et al., 2016; Kolchinski and Potts, 2018), user-community features (Wallace et al., 2015), as well as stylistic and discourse features. In our dataset, we capitalize on the conversational format and provide context by including preceding utterances along with speaker identities. To the best of our knowledge, there is no prior work that deals with the task of sarcasm detection in conversation.
Sarcasm in Speech: Sarcasm detection in speech has mainly focused on the identification of prosodic cues in the form of acoustic patterns related to sarcastic behavior. Studied features include mean amplitude, amplitude range, speech rate, harmonics-to-noise ratio, and others (Cheang and Pell, 2008). Rockwell (2000) presented one of the initial approaches to this problem, studying the vocal tonalities of sarcastic speech and finding slower speaking rates and greater intensity to be probable markers of sarcasm. Later, Tepperman et al. (2006) studied prosodic and spectral features of sound, both in and out of context, to determine sarcasm. In general, prosodic features such as intonation and stress are considered important indicators of sarcasm (Bryant, 2010; Woodland and Voyer, 2011). We take motivation from this previous research and include similar speech parameters as features in our dataset and baseline experiments.
Multimodal Sarcasm: Contextual information for sarcasm in text can be included from other modalities, which provide additional cues in the form of both common and contrasting patterns. Prior work mainly considers multimodal learning for the reader's ability to perceive sarcasm. Such research couples textual features with cognitive features such as the gaze behavior of readers (Mishra et al., 2016a,b, 2017) or electro/magneto-encephalographic (EEG/MEG) signals (Filik et al., 2014; Thompson et al., 2016). In contrast, there is limited work exploring multimodal avenues to understand sarcasm conveyed by the opinion holder. Attardo et al. (2003) presented one of the preliminary explorations on this topic, studying different phonological and visual markers for sarcasm. However, this work did not analyze the interplay of the modalities. More recently, Schifanella et al. (2016) presented a multimodal approach for this task by considering visual content accompanying text in online sarcastic posts. They extracted semantic visual features from images using pre-trained networks and fused them with textual features. In our work, we extend these notions and propose to analyze video-based sarcasm in dialogues. To the best of our knowledge, ours is the first work to propose a resource on video-level sarcasm. Joshi et al. (2016b) proposed a dataset similar to ours, i.e., based on the TV show Friends. However, their corpus includes only the textual modality and is thus not multimodal in nature. Furthermore, we also analyze multiple challenges in sarcasm that call for multimodal learning, and we provide an evaluation setup for future work to test upon.

Dataset
To enable the exploration of multimodal sarcasm detection, we introduce a new dataset (MUStARD) consisting of short videos manually annotated for their sarcasm property.

Data Collection
To collect potentially sarcastic examples, we conduct web searches on different sources, mainly YouTube. We use keywords such as Friends sarcasm, Chandler sarcasm, Sarcasm 101, and Sarcasm in TV shows. Using this strategy, we obtain videos from three main TV shows: Friends, The Golden Girls, and Sarcasmaholics Anonymous. Note that during this initial search, we focus exclusively on sarcastic content. To obtain non-sarcastic videos, we select a subset of 400 videos from MELD, a multimodal emotion recognition dataset derived from the Friends TV series. In addition, we collect videos from The Big Bang Theory, a TV show whose characters are often perceived as sarcastic. We obtain videos from seasons 1-8 and segment episodes using laughter cues from the audience. Specifically, we use open-source software for laughter detection (Ryokai et al., 2018) to obtain initial segmentation boundaries and fine-tune them using the subtitles' timestamps.
The collected set consists of 6,421 videos. Note that although some of the videos in our initial pool include information about their sarcastic nature, the majority of our videos are not labeled. Thus, we conduct a manual annotation as described next.

Annotation Process
We built a web-based annotation interface that shows each video along with its transcript and requests annotations for sarcasm. We also ask the annotators to flag misaligned videos, i.e., cases where the audio or video is not properly synchronized. The interface allows the annotators to watch a context video consisting of the previous video utterances, whenever they deem it necessary. Given the large number of videos to be annotated, we request annotations in batches of four videos at a time. Our web interface is shown in Fig. 2.
We conduct the annotation in two steps. First, we annotate the videos from The Big Bang Theory, as it contains the largest set of videos. Second, we annotate the remaining videos, belonging to the other sources. The annotation is conducted by two graduate students who have first been provided with easy examples of explicit sarcastic content, to illustrate sarcasm in videos. Each annotator labeled the full set of videos independently.
For the first step, after annotating the first part, consisting of 5,884 utterances from The Big Bang Theory, we noticed that the majority of them were labeled as non-sarcastic (98% were considered non-sarcastic by both annotators). In addition, our initial inter-annotator agreement was low (a kappa score of 0.1463). We thus decided to pause the annotation process and reconcile the annotation differences before proceeding further. The annotators discussed their disagreements on a subset of 20 videos and then re-annotated the videos. This time, we obtained an improved inter-annotator agreement of 0.2326. The remaining annotation disagreements were reconciled by a third annotator, who identified the disagreement cases, watched the videos again, and decided the correct label for each one.
Next, we annotate the second part, consisting of 624 videos drawn from Friends, The Golden Girls, and Sarcasmaholics Anonymous. As before, the two annotators label each video independently. The inter-annotator agreement for this part yields a kappa score of 0.5877. Again, the differences are reconciled by a third annotator.
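For reference, the agreement statistic used above is Cohen's kappa, which can be computed as follows. This is a minimal pure-Python sketch on hypothetical annotator labels (the actual annotation data is not reproduced here):

```python
def cohen_kappa(a, b):
    """Cohen's kappa for two annotators' label sequences."""
    n = len(a)
    # Observed agreement: fraction of items with identical labels.
    po = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement: product of each annotator's label marginals.
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in set(a) | set(b))
    return (po - pe) / (1 - pe)

# Hypothetical labels (1 = sarcastic, 0 = non-sarcastic).
annotator_a = [1, 0, 0, 1, 0, 1]
annotator_b = [1, 0, 1, 1, 0, 0]
print(cohen_kappa(annotator_a, annotator_b))  # ≈ 0.3333 on this toy example
```

Kappa corrects the raw agreement rate for the agreement expected by chance, which is why a 98% shared non-sarcastic rate can still yield a very low score.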
The resulting set of annotations consists of 345 videos labeled as sarcastic and 6,020 videos labeled as non-sarcastic for a total of 6,365 videos.

Transcriptions
Since we collect videos from several sources, some of them have subtitles or transcripts readily available. This is particularly the case for videos from The Big Bang Theory and MELD. We use the MELD transcriptions directly. For The Big Bang Theory, we extract the transcript by applying manual substring matching on the episode subtitles. The remaining videos are manually transcribed.

Sarcasm Dataset: MUStARD
To enable our experiments, which focus explicitly on the multimodal aspects of sarcasm, we decided to work with a balanced sample of sarcastic and non-sarcastic videos. We thus obtain a balanced sample from the set of 6,365 annotated videos. We start by selecting all videos marked as sarcastic from the full set, and then we randomly draw an equally sized sample from the non-sarcastic subset, prioritizing the videos annotated by a larger number of annotators. Our dataset thus comprises 690 videos with an equal number of sarcastic and non-sarcastic labels. Source, character, and label-ratio statistics are shown in Figs. 3 and 4.
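The balancing procedure can be sketched as follows. The pool below is synthetic, and the sketch omits the prioritization of videos seen by more annotators:

```python
import random

random.seed(0)

# Synthetic stand-in for the 6,365 annotated videos:
# the first 345 are labeled sarcastic, the rest non-sarcastic.
pool = [(i, i < 345) for i in range(6365)]

sarcastic = [v for v in pool if v[1]]
non_sarcastic = [v for v in pool if not v[1]]

# Keep every sarcastic video and draw an equally sized random
# sample from the non-sarcastic subset.
balanced = sarcastic + random.sample(non_sarcastic, len(sarcastic))
assert len(balanced) == 690
```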
In the remainder of this paper, we use the term utterance to refer to the videos in our dataset. We extend the usual definition of an utterance (a unit of speech bounded by breaths or pauses) to include consecutive multi-sentence dialogues by the same speaker, to prioritize completeness of information. As a result, 61.3% of the utterances in the dataset are single sentences, while the remaining utterances consist of two or more sentences. Each utterance in our dataset is coupled with its context utterances, which are preceding turns by the speakers participating in the dialogue. Some of the context videos contain multi-party dialogue between speakers participating in the scene. The number of turns in the context is manually set to include a coherent background for the target utterance. Table 1 shows general statistics for the utterances in our dataset. Each utterance and its context consist of three modalities: video, audio, and transcription (text). All the utterances are also accompanied by their speaker identifiers. Fig. 1 illustrates a sarcastic utterance along with its associated context in the dataset. Fig. 4b provides the list of major characters present in the dataset. Fig. 4a details the distribution of labels per character. Some of the characters, such as Chandler and Sheldon, occupy major portions of the dataset. This is expected since they play comic roles in their shows. To avoid speaker bias for such popular characters, we also include non-sarcastic samples for them. In contrast, the dataset intentionally includes minor roles, such as Dorothy from The Golden Girls, who is entirely sarcastic throughout the corpus. This allows the study of speaker bias in sarcasm detection.

Qualitative Aspects
Sarcasm detection in text often requires additional information that can be leveraged from associated modalities. Below, we analyze some cases that require multimodal reasoning. We exemplify using instances from our proposed dataset to further support our claim of sarcasm being often expressed in a multimodal way.
Role of Multimodality: Fig. 5 presents two cases where sarcasm is expressed through the incongruity between modalities. In the first case, the language modality indicates fear or anger, whereas the facial modality lacks any visible sign of anxiety that would corroborate the textual modality. In the second case, the text is indicative of a compliment, but the vocal tonality and facial expressions show indifference. In both cases, there exists incongruity between modalities, which acts as a strong indicator of sarcasm. Multimodal information is also important in providing additional cues for sarcasm. For example, the vocal tonality of the speaker often indicates sarcasm. Text that otherwise looks seemingly straightforward is noticed to contain sarcasm only when the associated voice is heard. Sarcastic tonalities can range from a self-deprecatory or broody tone to something obnoxious and raging. Such extremities are often seen while expressing sarcasm. Another marker of sarcasm is undue stress on particular words. For instance, in the phrase You did "really" well, if the speaker stresses the word really, then the sarcasm is evident. Fig. 6 provides sarcastic cases from the dataset where such vocal stresses exist.
It is important to note that sarcasm does not nec- essarily imply conflicting modalities. Rather, the availability of complementary information through multiple modalities improves the capacity of models to learn discriminative patterns responsible for this complex process.
Role of Context: In Fig. 7, we present two instances from the dataset where the role of conversational context is essential in determining the sarcastic nature of an utterance. In the first case, the sarcastic reference to the sun is apparent only when the topic of discussion is known, i.e., tanning. In the second case, the reference made by the speaker to a venus flytrap can be recognized as sarcastic only when it is known to be referred to as a thing to go on a date with. These examples demonstrate the importance of having contextual information. The availability of context in our proposed dataset provides models with the ability to utilize additional information while reasoning about sarcasm. Enhanced techniques would require commonsense reasoning to understand illogical statements (such as going on a date with a venus flytrap), which indicate the presence of sarcasm.

Multimodal Feature Extraction
We obtain several learning features from the three modalities included in our dataset. The process followed to extract each of them is described below.

Text Features: We represent the textual utterances in the dataset using BERT (Devlin et al., 2018), which provides a sentence representation u_t ∈ R^{d_t} for every utterance u. In particular, we average the last four transformer layers of the first token ([CLS]) in the utterance, using the BERT-Base model, to get a unique utterance representation of size d_t = 768. We also considered averaging Common Crawl pre-trained 300-dimensional GloVe word vectors (Pennington et al., 2014) for each token; however, this resulted in lower performance compared to the BERT-based features.
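A sketch of this pooling step, operating on randomly generated stand-ins for the per-layer hidden states (in practice these would come from a pre-trained BERT-Base model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the 12 transformer layers of BERT-Base over a
# 9-token utterance, with hidden size 768; real values would be
# produced by a pre-trained model, not sampled randomly.
num_layers, seq_len, d_t = 12, 9, 768
hidden_states = rng.standard_normal((num_layers, seq_len, d_t))

# Utterance representation: average the [CLS] token (position 0)
# over the last four transformer layers.
u_t = hidden_states[-4:, 0, :].mean(axis=0)
assert u_t.shape == (d_t,)
```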
Speech Features: To leverage information from the audio modality, we obtain low-level features from the audio data stream of each utterance in the dataset. Through these features, we intend to provide information related to pitch, intonation, and other tonal-specific details of the speaker (Tepperman et al., 2006). We utilize the popular speech-processing library Librosa (McFee et al., 2018) and perform the processing pipeline described next. First, we load the audio sample of an utterance as a time series signal with a sampling rate of 22050 Hz. Then we remove background noise from the signal by applying a heuristic vocal-extraction method. Finally, we segment the audio signal into d_w non-overlapping windows to extract local features that include MFCCs, the mel-spectrogram, the spectral centroid, and their associated temporal derivatives (deltas). Segmentation is done to achieve a fixed-length representation of the audio sources, which are otherwise variable in length across the dataset. All the extracted features are concatenated to compose a d_a = 283 dimensional joint representation {u_a^i}_{i=1}^{d_w}, one vector per window. The final audio representation of each utterance is obtained by calculating the mean across the window segments, i.e., u_a = (1/d_w) Σ_{i=1}^{d_w} u_a^i.
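The windowing and pooling can be sketched as follows. The signal is synthetic, and the per-window feature extractor is a toy stand-in for the Librosa-based features (MFCC, mel-spectrogram, spectral centroid, and deltas) described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic one-second signal at the 22050 Hz sampling rate used above.
signal = rng.standard_normal(22050)

# Segment into d_w non-overlapping windows so every utterance,
# regardless of duration, yields a fixed-length representation.
d_w = 10
windows = np.array_split(signal, d_w)

# Toy per-window features (a stand-in for the 283-dimensional
# concatenation of MFCC, mel-spectrogram, centroid, and deltas).
feats = np.stack([[w.mean(), w.std(), np.abs(w).max()] for w in windows])

# Final utterance representation: mean across the window segments.
u_a = feats.mean(axis=0)
assert u_a.shape == (3,)
```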
Video Features: We extract visual features for each of the f frames in the utterance video using the pool5 layer of an ImageNet (Deng et al., 2009) pre-trained ResNet-152 (He et al., 2016) image classification model. We first preprocess every frame by resizing, center-cropping, and normalizing it. To obtain a visual representation of each utterance, we compute the mean of the d_v = 2048 dimensional feature vectors u_v^i over all frames: u_v = (1/f) Σ_{i=1}^{f} u_v^i. While we could use more advanced visual encoding techniques (e.g., recurrent neural networks), we decide to use the same averaging strategy as with the other modalities.
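A sketch of the per-frame preprocessing, assuming the frame is already a NumPy RGB array; it omits the resize step and uses the standard ImageNet normalization constants. In the actual pipeline, each preprocessed frame would then be passed through the pre-trained ResNet-152 and the pool5 activations averaged over the f frames:

```python
import numpy as np

def preprocess(frame, size=224):
    """Center-crop a frame to size x size and normalize channels.
    Mean/std are the standard ImageNet values."""
    h, w, _ = frame.shape
    top, left = (h - size) // 2, (w - size) // 2
    crop = frame[top:top + size, left:left + size].astype(np.float32) / 255.0
    mean = np.array([0.485, 0.456, 0.406])
    std = np.array([0.229, 0.224, 0.225])
    return (crop - mean) / std

# Toy 240x320 RGB frame standing in for a decoded video frame.
frame = np.zeros((240, 320, 3), dtype=np.uint8)
out = preprocess(frame)
assert out.shape == (224, 224, 3)
```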

Experiments
To explore the role of multimodality in sarcasm detection, we conduct multiple experiments evaluating each modality separately and also combinations of modalities provided in the dataset. Additionally, we investigate the role of context and speaker information for improving predictions.

Experimental Setup
We perform two main sets of evaluations. The first set involves five-fold cross-validation experiments where the folds are randomly created in a stratified manner, ensuring label balance across folds. In each of the K iterations, the k-th fold acts as the testing set while the remaining folds are used for training. Validation folds can be obtained from a part of the training folds. As the folds are created in a randomized manner, there is overlap between speakers across training and testing sets, resulting in a speaker-dependent setup. The second set of evaluations restricts utterances from the same speaker to appear in either the training or the testing set, but not both. Utterances from The Big Bang Theory, The Golden Girls, and Sarcasmaholics Anonymous are made part of the training set, while Friends is used as the testing set. We call this the speaker-independent setup. The motivation for this setup is discussed in Section 6.
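The fold construction can be sketched with scikit-learn's StratifiedKFold; the labels below are a toy balanced stand-in for the 690 utterances:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Toy balanced labels standing in for the 690 utterances.
y = np.array([0, 1] * 345)
X = np.zeros((len(y), 8))  # placeholder feature matrix

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(X, y):
    # Stratification preserves the 50/50 label ratio in every fold.
    assert y[test_idx].mean() == 0.5 and len(test_idx) == 138
```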
During our experiments, we use precision, recall, and F-score as the main evaluation metrics, weighted across both the sarcastic and non-sarcastic classes. The weights are obtained based on the class ratios. For the speaker-dependent scenario, we report results averaged across the five-fold cross-validation runs.
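This weighting is the same support-weighted averaging implemented in scikit-learn, illustrated here on a toy prediction vector:

```python
from sklearn.metrics import precision_recall_fscore_support

# Toy labels and predictions (1 = sarcastic, 0 = non-sarcastic).
y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 1]

# average="weighted" weights each class's score by its support,
# i.e., by the class ratios.
p, r, f, _ = precision_recall_fscore_support(y_true, y_pred, average="weighted")
print(round(f, 4))  # ≈ 0.6667 on this toy example
```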

Baselines
The experiments are conducted using three main baseline methods.

Majority: This baseline assigns all instances to the majority class, i.e., non-sarcastic.
Random: This baseline makes random/chance predictions sampled uniformly across the test set.

SVM: We use Support Vector Machines (SVM) as the primary baseline for our experiments. SVMs are strong predictors for small-sized datasets and at times outperform neural counterparts (Byvatov et al., 2003). We use the SVM classifiers from scikit-learn (Pedregosa et al., 2011) with an RBF kernel and a scaled gamma. The penalty term C is kept as a hyper-parameter that we tune for each experiment (choosing among 1, 10, 30, 500, and 1000). For the speaker-dependent setup, we scale the features by subtracting the mean and dividing by the standard deviation. Multiple modalities are combined using early fusion, where the features drawn from the different modalities are concatenated together.

Table 2 presents the results for the speaker-dependent setup. The lowest performance is obtained with the Majority baseline, which achieves a 33.3% weighted F-score (66.7% F-score for the non-sarcastic class and 0% for the sarcastic class). The pre-trained features for the visual modality provide the best performance among the unimodal variants. The addition of textual features through concatenation improves over the unimodal baselines and achieves the best performance. The tri-modal variant is unable to achieve the best score due to slightly sub-optimal performance from the audio modality. Overall, the combination of visual and textual signals significantly improves over the unimodal variants, with a relative error rate reduction of up to 12.9%. We manually investigate the utterances where the bimodal textual and visual model predicts sarcasm correctly while the unimodal textual model fails. In most of these samples, the textual component does not reveal any explicit sarcasm (see Fig. 9). As a result, the utterances require additional cues, which the multimodal signals provide.
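A minimal sketch of this early-fusion pipeline on random stand-in features (dimensions follow the paper, but the data is synthetic, so any resulting scores are meaningless):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 40  # toy number of utterances

# Stand-ins for per-utterance text (768-d), audio (283-d), and
# video (2048-d) features.
text = rng.standard_normal((n, 768))
audio = rng.standard_normal((n, 283))
video = rng.standard_normal((n, 2048))
y = rng.integers(0, 2, size=n)

# Early fusion: concatenate modality features before classification.
X = np.concatenate([text, audio, video], axis=1)

# Standardize, then fit an RBF-kernel SVM with scaled gamma;
# C would be tuned per experiment among {1, 10, 30, 500, 1000}.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale", C=10))
clf.fit(X, y)
preds = clf.predict(X)
```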

Multimodal Sarcasm Classification
The speaker-independent setup is more challenging than the speaker-dependent scenario, as it prevents the model from registering speaker-specific patterns.

Figure 9: Sample sarcastic utterances correctly predicted by the T+V model but not by the T-only model in the speaker-dependent setup. The utterances either lack explicit sarcastic markers or require commonsense reasoning to detect the irony, such as being drunk in someone else's subconscious.

The presence of new speakers in
the testing set requires a higher degree of generalization from the model. Our setup also segregates at the source level, so testing involves an entirely new environment with respect to all the modalities. We believe that the speaker-independent setup is a strong test-bed for multimodal sarcasm research. The increased difficulty of this task is also noticeable in model training, which now requires a smaller error margin (i.e., a higher C value) in the SVM's decision function to provide good test performance. Table 3 presents the performance of our baselines in the speaker-independent setup. In this case, the multimodal variants do not greatly outperform the unimodal counterparts. Unlike Table 2, the audio channel plays a more important role, and it is slightly improved by adding text. By inspecting the sarcastic examples correctly predicted by text plus audio but not by text alone, we observe a tendency towards a higher mean pitch (mean fundamental frequency) with respect to those incorrectly predicted, as Attardo et al. (2003) suggested. Failure cases seem to contain particular patterns of high pitch, also studied by Attardo et al. (2003), but on average they seem to have normal pitch. In this sense, future work can focus on analyzing the temporal localities of the audio channel.
In this setup, video features do not seem to work well. We hypothesize that, because the visual features describe generic objects (and are not specific to sarcasm) and the model is shallow, these features may lead the model to capture character biases, making them unsuitable for the speaker-independent setup. This is also suggested by the statistics in Fig. 10, which we describe in the next section. By looking at the incorrect predictions of the best model, we infer that models should better capture the mismatches between the main speaker's facial expressions and the emotions of what is being said.

The Role of Context and Speaker Information
We investigate whether additional information, such as an utterance's context (i.e., the preceding utterances, cf. Section 3.5) and the speaker's identity, is helpful for the predictions. Context features are generated by averaging the representations (computed as per Section 4) of the utterances present in the context. For the speakers, we use a one-hot encoding vector whose size equals the number of unique speakers in a training fold. Table 4 shows the results of both evaluation settings for the textual baseline and the best multimodal variant. For the context features, we see a slight improvement in the best variant of the speaker-independent setup (text plus audio); however, the other models show no improvement. A possible reason could be the loss of temporal information when pooling across the conversation.
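These two feature types can be sketched as follows; the context vectors and the speaker list are hypothetical:

```python
import numpy as np

# Hypothetical context of three preceding utterances, each already
# encoded as a d-dimensional vector by the feature extraction pipeline.
d = 8
context = np.stack([np.full(d, v) for v in (1.0, 2.0, 3.0)])

# Context feature: mean over the context utterances. Note that this
# pooling discards the temporal order of the conversation.
context_feat = context.mean(axis=0)

# Speaker feature: one-hot over the unique speakers in a training fold.
speakers = ["Chandler", "Sheldon", "Dorothy"]  # hypothetical fold
speaker_feat = np.eye(len(speakers))[speakers.index("Sheldon")]
```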
For the speaker features, we see an improvement in the speaker-dependent setup for the textual modality. Due to the speaker overlap across splits, the model can leverage speaker regularities for sarcastic tendencies. However, we do not observe the same trend for the best multimodal variant (text plus video), where the score barely improves. To understand this result, we visualize the correct predictions made by this model. The results, shown in Fig. 10, reveal a correlation between the class distributions of the overall ground truth and of the correctly predicted instances per speaker. As this model does not use speaker information, this correlation indicates that the multimodal variant is able to learn speaker-specific information transitively through the input features, rendering additional speaker input redundant. Lastly, in the speaker-independent setup, the speaker information does not lead to an improvement. This is also expected, as there is no speaker overlap between the splits.

Conclusion and Future Work
In this paper, we provided a systematic introduction to multimodal learning for sarcasm detection.
To enable research on this topic, we introduced a novel dataset, MUStARD, consisting of sarcastic and non-sarcastic videos drawn from different sources. By showing multiple examples from our curated dataset, we demonstrate the need for multimodal learning for sarcasm detection. Consequently, we developed models that leverage three different modalities, including text, speech, and visual signals. We also experimented with the integration of context and speaker information as additional input for our models.
The results of the baseline experiments supported the hypothesis that multimodality is important for sarcasm detection. In multiple evaluations, the multimodal variants were shown to significantly outperform their unimodal counterparts, with relative error rate reductions of up to 12.9%.
Moreover, while conducting this research, we identified several challenges that we believe are important to address in future research work on multimodal sarcasm detection.
Multimodal fusion: So far, we have only explored early fusion for multimodal classification. Future work could investigate advanced spatiotemporal fusion strategies (e.g., Tensor-Fusion (Zadeh et al., 2017), CCA (Hotelling, 1936)) to better encode the correspondence between modalities. Another direction could be to create fusion strategies that can better model incongruity among modalities to identify sarcasm.
Multiparty conversation: The dialogues represented in our dataset are often multi-party conversations. Advanced techniques to learn multimodal relationships could incorporate better relationship modeling (Majumder et al., 2018), and exploit models that provide gesture, facial and pose information about the people in the scene (Cao et al., 2018).
Neural baselines: As we strove to create a high-quality dataset with rich annotations, we had to trade off corpus size. Moreover, sarcastic utterances are themselves rare. To focus on the effects induced by the multimodal experiments, we chose a balanced version of the dataset with a limited size. This, however, raises the problem of over-fitting in complex neural models. As a consequence, in our initial experiments, we noticed that SVM classifiers perform better than their neural counterparts, such as CNNs. Future work should try to overcome this issue with solutions involving pre-training, transfer learning, domain adaptation, or low-parameter models.
Sarcasm detection in conversational context: Our proposed MUStARD is inherently a dialogue-level dataset where we aim to classify the last utterance in the dialogue. In a dialogue, to classify an utterance at time t, the preceding utterances at times < t can be considered its context. In this work, although we utilize conversational context, we do not model various key conversation-specific factors such as the interlocutors' goals, intents, and dependencies (Poria et al., 2019). Considering these factors can improve the context modeling necessary for sarcasm detection in conversational settings. Future work should try to leverage these factors to improve the baseline scores reported in this paper.
Main speaker localization: We currently extract visual features uniformly for every frame. As gestures and facial expressions are important features for sarcasm analysis, we believe that the ability of models to identify the speaker in multiparty videos is likely to be beneficial for the task.
Finally, we believe the resource introduced in this paper has the potential to enable novel research in multimodal sarcasm detection.