A Transformer Approach to Contextual Sarcasm Detection in Twitter

Hunter Gregory, Steven Li, Pouya Mohammadi, Natalie Tarn, Rachel Draelos, Cynthia Rudin


Abstract
Understanding tone in Twitter posts will become increasingly important as more communication moves online. Sarcasm is among the most difficult, yet most important, tones to detect. Prior work has applied LSTM and transformer architectures to this problem. We expand on that research, implementing LSTM, GRU, and transformer models and exploring new methods for classifying sarcasm in Twitter posts. The most successful of these were transformer models, most notably BERT. Although we experimented with several other models described in this paper, our best-performing model was an ensemble of transformers comprising BERT, RoBERTa, XLNet, RoBERTa-large, and ALBERT. This research was performed in conjunction with the sarcasm detection shared task of the Second Workshop on Figurative Language Processing, co-located with ACL 2020.
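The abstract's best-performing model is an ensemble of transformer classifiers. The combination rule is not specified here, so the sketch below assumes simple soft voting (averaging per-model sarcasm probabilities); the member model names come from the abstract, while the function, threshold, and all values are illustrative assumptions:

```python
import numpy as np

def ensemble_predict(model_probs: dict) -> np.ndarray:
    """Average per-model P(sarcastic) scores and threshold at 0.5.

    model_probs maps a model name (e.g. "bert", "roberta") to an array
    of per-tweet sarcasm probabilities. In practice these would come
    from fine-tuned Hugging Face classifiers; here they are plain inputs.
    """
    stacked = np.stack(list(model_probs.values()))   # (n_models, n_tweets)
    mean_prob = stacked.mean(axis=0)                 # soft-vote average
    return (mean_prob >= 0.5).astype(int)            # 1 = sarcastic

# Hypothetical probabilities for three tweets from the five ensemble
# members named in the abstract (values are illustrative, not real outputs).
probs = {
    "bert":          np.array([0.91, 0.30, 0.55]),
    "roberta":       np.array([0.85, 0.20, 0.45]),
    "xlnet":         np.array([0.75, 0.40, 0.60]),
    "roberta-large": np.array([0.95, 0.25, 0.50]),
    "albert":        np.array([0.80, 0.35, 0.40]),
}
print(ensemble_predict(probs))  # → [1 0 1]
```

Averaging probabilities (rather than hard majority voting) lets a confident member outvote uncertain ones on borderline tweets.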
Anthology ID:
2020.figlang-1.37
Volume:
Proceedings of the Second Workshop on Figurative Language Processing
Month:
July
Year:
2020
Address:
Online
Editors:
Beata Beigman Klebanov, Ekaterina Shutova, Patricia Lichtenstein, Smaranda Muresan, Chee Wee, Anna Feldman, Debanjan Ghosh
Venue:
Fig-Lang
Publisher:
Association for Computational Linguistics
Pages:
270–275
URL:
https://aclanthology.org/2020.figlang-1.37
DOI:
10.18653/v1/2020.figlang-1.37
Cite (ACL):
Hunter Gregory, Steven Li, Pouya Mohammadi, Natalie Tarn, Rachel Draelos, and Cynthia Rudin. 2020. A Transformer Approach to Contextual Sarcasm Detection in Twitter. In Proceedings of the Second Workshop on Figurative Language Processing, pages 270–275, Online. Association for Computational Linguistics.
Cite (Informal):
A Transformer Approach to Contextual Sarcasm Detection in Twitter (Gregory et al., Fig-Lang 2020)
PDF:
https://aclanthology.org/2020.figlang-1.37.pdf
Video:
http://slideslive.com/38929706