Stop PropagHate at SemEval-2019 Tasks 5 and 6: Are abusive language classification results reproducible?

Paula Fortuna, Juan Soler-Company, Sérgio Nunes


Abstract
This paper summarizes the participation of the Stop PropagHate team at SemEval 2019. Our approach is based on replicating one of the most relevant works in the literature, using word embeddings and an LSTM. After circumventing some problems in the original code, we obtained poor results when applying it to the HatEval contest (F1=0.45). We attribute this mainly to inconsistencies in the data of that contest. For OffensEval, the classifier performed well (F1=0.74), showing better performance for offense detection than for hate speech detection.
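The abstract's pipeline (word embeddings fed into an LSTM for binary abusive-language classification) can be illustrated with a minimal Keras-style sketch. All names and hyperparameters below (vocabulary size, embedding dimension, LSTM units, sequence length) are illustrative assumptions, not the authors' exact configuration from the replicated work or the linked repository.

```python
# Minimal sketch: word embeddings + LSTM for binary abusive-language
# classification. Hyperparameters are assumed, not the paper's settings.
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

MAX_WORDS, MAX_LEN, EMB_DIM = 20000, 50, 100  # assumed values

def encode(texts, tokenizer=None):
    # Fit a tokenizer on the training texts and pad sequences to a fixed length.
    if tokenizer is None:
        tokenizer = Tokenizer(num_words=MAX_WORDS)
        tokenizer.fit_on_texts(texts)
    seqs = tokenizer.texts_to_sequences(texts)
    return pad_sequences(seqs, maxlen=MAX_LEN), tokenizer

def build_model():
    # Embedding layer learns word vectors; a single LSTM layer summarizes the
    # tweet; a sigmoid output gives the probability of the abusive class.
    model = Sequential([
        Embedding(input_dim=MAX_WORDS, output_dim=EMB_DIM),
        LSTM(64),
        Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```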
Anthology ID:
S19-2131
Volume:
Proceedings of the 13th International Workshop on Semantic Evaluation
Month:
June
Year:
2019
Address:
Minneapolis, Minnesota, USA
Editors:
Jonathan May, Ekaterina Shutova, Aurelie Herbelot, Xiaodan Zhu, Marianna Apidianaki, Saif M. Mohammad
Venue:
SemEval
SIG:
SIGLEX
Publisher:
Association for Computational Linguistics
Pages:
745–752
URL:
https://aclanthology.org/S19-2131
DOI:
10.18653/v1/S19-2131
Cite (ACL):
Paula Fortuna, Juan Soler-Company, and Sérgio Nunes. 2019. Stop PropagHate at SemEval-2019 Tasks 5 and 6: Are abusive language classification results reproducible?. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 745–752, Minneapolis, Minnesota, USA. Association for Computational Linguistics.
Cite (Informal):
Stop PropagHate at SemEval-2019 Tasks 5 and 6: Are abusive language classification results reproducible? (Fortuna et al., SemEval 2019)
PDF:
https://aclanthology.org/S19-2131.pdf
Code
 paulafortuna/SemEval_2019_public