LT@Helsinki at SemEval-2020 Task 12: Multilingual or Language-specific BERT?

Marc Pàmies, Emily Öhman, Kaisla Kajava, Jörg Tiedemann


Abstract
This paper presents the different models submitted by the LT@Helsinki team for SemEval-2020 Shared Task 12. Our team participated in sub-tasks A and C, titled offensive language identification and offense target identification, respectively. In both cases we used the so-called Bidirectional Encoder Representations from Transformers (BERT), a model pre-trained by Google and fine-tuned by us on the OLID and SOLID datasets. The results show that offensive tweet classification is one of several language-based tasks where BERT can achieve state-of-the-art results.
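Since the approach described in the abstract is fine-tuning a pre-trained BERT for tweet classification, a minimal sketch of that setup follows, assuming the Hugging Face transformers library. The model name (bert-base-cased; bert-base-multilingual-cased would be the multilingual alternative the title alludes to), the toy tweets, and all hyperparameters are illustrative assumptions, not the authors' exact configuration.

# Minimal sketch: fine-tuning BERT for binary offensive-language
# classification (sub-task A, OFF vs. NOT). Assumes the Hugging Face
# transformers library; model name, example tweets, and hyperparameters
# are illustrative, not the authors' exact setup.
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-cased", num_labels=2  # two classes: NOT (0) and OFF (1)
)

# Hypothetical stand-ins for tweets and labels from OLID/SOLID.
tweets = ["what a lovely day", "some offensive tweet here"]
labels = torch.tensor([0, 1])

batch = tokenizer(tweets, padding=True, truncation=True,
                  max_length=128, return_tensors="pt")

optimizer = AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):  # a few fine-tuning epochs
    optimizer.zero_grad()
    out = model(**batch, labels=labels)  # cross-entropy loss is computed internally
    out.loss.backward()
    optimizer.step()

# Inference: predicted class index per tweet.
model.eval()
with torch.no_grad():
    preds = model(**batch).logits.argmax(dim=-1)

Swapping the checkpoint string for a language-specific model (or the multilingual one) is the only change needed to reproduce the paper's central comparison in spirit.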
Anthology ID:
2020.semeval-1.205
Volume:
Proceedings of the Fourteenth Workshop on Semantic Evaluation
Month:
December
Year:
2020
Address:
Barcelona (online)
Editors:
Aurelie Herbelot, Xiaodan Zhu, Alexis Palmer, Nathan Schneider, Jonathan May, Ekaterina Shutova
Venue:
SemEval
SIG:
SIGLEX
Publisher:
International Committee for Computational Linguistics
Pages:
1569–1575
URL:
https://aclanthology.org/2020.semeval-1.205
DOI:
10.18653/v1/2020.semeval-1.205
Cite (ACL):
Marc Pàmies, Emily Öhman, Kaisla Kajava, and Jörg Tiedemann. 2020. LT@Helsinki at SemEval-2020 Task 12: Multilingual or Language-specific BERT? In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1569–1575, Barcelona (online). International Committee for Computational Linguistics.
Cite (Informal):
LT@Helsinki at SemEval-2020 Task 12: Multilingual or Language-specific BERT? (Pàmies et al., SemEval 2020)
PDF:
https://aclanthology.org/2020.semeval-1.205.pdf
Data
OLID