Lee at SemEval-2020 Task 12: A BERT Model Based on the Maximum Self-ensemble Strategy for Identifying Offensive Language

Junyi Li, Xiaobing Zhou, Zichen Zhang


Abstract
This article describes the system submitted to SemEval-2020 Task 12: OffensEval 2020. The task aims to identify and classify offensive language on social media across several languages. We participate only in the English portion of Sub-task A, which aims to identify offensive language in English. To solve this task, we propose a BERT-based system built on the Transformer mechanism, and use a maximum self-ensemble strategy to improve model performance. Our model achieved a macro F1 score of 0.913 (ranked 13/82) in Sub-task A.
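The abstract does not spell out the maximum self-ensemble; a common reading is that predictions from several checkpoints of the same model are combined by taking the element-wise maximum of their class logits before the final argmax. The following is a minimal sketch under that assumption; the function name, checkpoint count, and logit values are all hypothetical and not taken from the paper.

```python
import numpy as np

def max_self_ensemble(logit_sets):
    """Combine predictions from several checkpoints of one model by
    taking the element-wise maximum of their class logits.
    logit_sets: list of (n_examples, n_classes) arrays."""
    stacked = np.stack(logit_sets, axis=0)  # (n_ckpts, n_examples, n_classes)
    return stacked.max(axis=0)              # (n_examples, n_classes)

# Hypothetical logits from three checkpoints for two tweets;
# classes: [NOT offensive, OFFensive] as in OffensEval Sub-task A.
ckpts = [
    np.array([[2.0, 0.5], [0.2, 1.1]]),
    np.array([[1.8, 0.9], [0.4, 0.8]]),
    np.array([[2.2, 0.4], [0.1, 1.4]]),
]
combined = max_self_ensemble(ckpts)
preds = combined.argmax(axis=1)  # -> array([0, 1])
```

Under this reading, a class survives the ensemble if any single checkpoint is strongly confident in it, which tends to favor recall on the offensive class compared with averaging.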
Anthology ID:
2020.semeval-1.273
Volume:
Proceedings of the Fourteenth Workshop on Semantic Evaluation
Month:
December
Year:
2020
Address:
Barcelona (online)
Editors:
Aurelie Herbelot, Xiaodan Zhu, Alexis Palmer, Nathan Schneider, Jonathan May, Ekaterina Shutova
Venue:
SemEval
SIG:
SIGLEX
Publisher:
International Committee for Computational Linguistics
Pages:
2067–2072
URL:
https://aclanthology.org/2020.semeval-1.273
DOI:
10.18653/v1/2020.semeval-1.273
Cite (ACL):
Junyi Li, Xiaobing Zhou, and Zichen Zhang. 2020. Lee at SemEval-2020 Task 12: A BERT Model Based on the Maximum Self-ensemble Strategy for Identifying Offensive Language. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 2067–2072, Barcelona (online). International Committee for Computational Linguistics.
Cite (Informal):
Lee at SemEval-2020 Task 12: A BERT Model Based on the Maximum Self-ensemble Strategy for Identifying Offensive Language (Li et al., SemEval 2020)
PDF:
https://aclanthology.org/2020.semeval-1.273.pdf