Inter-Annotator Agreement in Sentiment Analysis: Machine Learning Perspective

Victoria Bobicev, Marina Sokolova


Abstract
Manual text annotation is an essential part of Big Text analytics. Although annotators work with limited parts of data sets, their results are extrapolated by automated text classification and affect the final classification results. Reliability of annotations and adequacy of assigned labels are especially important in the case of sentiment annotations. In the current study we examine inter-annotator agreement in multi-class, multi-label sentiment annotation of messages. We used several annotation agreement measures, as well as statistical analysis and Machine Learning to assess the resulting annotations.
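The abstract mentions several annotation agreement measures without naming them here. One standard measure for agreement among multiple annotators is Fleiss' kappa; the sketch below is illustrative only (the function name and the assumption of a fixed number of raters per item are not from the paper). Each row of the input counts how many annotators assigned that item to each label.

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa, where ratings[i][j] is the number of annotators
    who assigned item i to category j (constant raters per item)."""
    n = len(ratings)            # number of annotated items
    r = sum(ratings[0])        # annotators per item (assumed constant)
    k = len(ratings[0])        # number of categories

    # Mean observed per-item agreement.
    P_bar = sum(
        (sum(c * c for c in row) - r) / (r * (r - 1)) for row in ratings
    ) / n

    # Chance agreement from the marginal category proportions.
    totals = [sum(row[j] for row in ratings) for j in range(k)]
    P_e = sum((t / (n * r)) ** 2 for t in totals)

    return (P_bar - P_e) / (1 - P_e)

# Perfect agreement yields kappa = 1.0; systematic disagreement is negative.
print(fleiss_kappa([[2, 0], [0, 2], [2, 0]]))  # -> 1.0
```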
Anthology ID:
R17-1015
Volume:
Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017
Month:
September
Year:
2017
Address:
Varna, Bulgaria
Editors:
Ruslan Mitkov, Galia Angelova
Venue:
RANLP
Publisher:
INCOMA Ltd.
Pages:
97–102
URL:
https://doi.org/10.26615/978-954-452-049-6_015
DOI:
10.26615/978-954-452-049-6_015
Cite (ACL):
Victoria Bobicev and Marina Sokolova. 2017. Inter-Annotator Agreement in Sentiment Analysis: Machine Learning Perspective. In Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017, pages 97–102, Varna, Bulgaria. INCOMA Ltd.
Cite (Informal):
Inter-Annotator Agreement in Sentiment Analysis: Machine Learning Perspective (Bobicev & Sokolova, RANLP 2017)
PDF:
https://doi.org/10.26615/978-954-452-049-6_015