Neural Sarcasm Detection using Conversation Context

Nikhil Jaiswal


Abstract
Social media platforms and discussion forums such as Reddit and Twitter are filled with figurative language. Sarcasm is one such category of figurative language whose presence in a conversation makes language understanding a challenging task. In this paper, we present a deep neural architecture for sarcasm detection. We investigate various pre-trained language representation models (PLRMs) such as BERT and RoBERTa and fine-tune them on the Twitter dataset. We experiment with a variety of PLRMs, applied either to the Twitter utterance in isolation or to the utterance together with its contextual information. Our findings indicate that by taking into consideration the three most recent previous utterances, the model can more accurately classify a conversation as sarcastic or not. Our best-performing ensemble model achieves an overall F1 score of 0.790, which ranks second on the leaderboard of the Sarcasm Shared Task 2020.
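The context-aware setup described in the abstract can be sketched as a simple input-construction step: the three most recent context utterances are concatenated with the target response before tokenization. This is a minimal illustrative sketch; the `build_input` helper, the `[SEP]`-joined format, and the variable names are assumptions for illustration, not taken from the paper.

```python
def build_input(context_utterances, response, n_context=3, sep=" [SEP] "):
    """Concatenate the n most recent context utterances with the target
    response, mirroring the paper's best setting of three prior turns.
    The [SEP] separator echoes BERT-style segment markers; the exact
    joining scheme here is an assumption, not the paper's specification."""
    recent = context_utterances[-n_context:]  # keep only the latest turns
    return sep.join(list(recent) + [response])

# Example: three context turns plus the (possibly sarcastic) response
text = build_input(
    ["Great weather today.", "Yeah, non-stop rain.", "Perfect for a picnic."],
    "Oh, absolutely, bring the sunscreen.",
)
```

The resulting string would then be tokenized and fed to a fine-tuned PLRM such as BERT or RoBERTa for binary sarcasm classification.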
Anthology ID:
2020.figlang-1.11
Volume:
Proceedings of the Second Workshop on Figurative Language Processing
Month:
July
Year:
2020
Address:
Online
Editors:
Beata Beigman Klebanov, Ekaterina Shutova, Patricia Lichtenstein, Smaranda Muresan, Chee Wee, Anna Feldman, Debanjan Ghosh
Venue:
Fig-Lang
Publisher:
Association for Computational Linguistics
Pages:
77–82
URL:
https://aclanthology.org/2020.figlang-1.11
DOI:
10.18653/v1/2020.figlang-1.11
Bibkey:
Cite (ACL):
Nikhil Jaiswal. 2020. Neural Sarcasm Detection using Conversation Context. In Proceedings of the Second Workshop on Figurative Language Processing, pages 77–82, Online. Association for Computational Linguistics.
Cite (Informal):
Neural Sarcasm Detection using Conversation Context (Jaiswal, Fig-Lang 2020)
PDF:
https://aclanthology.org/2020.figlang-1.11.pdf
Video:
http://slideslive.com/38929701