2nd Workshop on Novel Evaluation Approaches for Text Classification Systems
Co-located with ICWSM 2023, 5 June 2023, Limassol, Cyprus
The 3rd Workshop on Evaluation and Comparison for NLP Systems (Eval4NLP, https://eval4nlp.github.io/2022), co-located with the 2022 Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (AACL 2022, https://www.aacl2022.org/), invites the submission of long and short papers, of a theoretical or experimental nature, describing recent advances in system evaluation and comparison in NLP.
Fair evaluations and comparisons are of fundamental importance to the NLP community for properly tracking progress, especially within the current deep learning revolution, with new state-of-the-art results reported in ever shorter intervals. This concerns the creation of benchmark datasets that cover typical use cases and blind spots of existing systems, the design of metrics for evaluating the performance of NLP systems along different dimensions, and the reporting of evaluation results in an unbiased manner.
The automatic or semi-automatic analysis of textual data is a key approach to analysing the massive amounts of user-generated content online, from the identification of sentiment in text and topic classification to the detection of abusive language, misinformation or propaganda. However, the development of such systems faces a crucial challenge.
--------------------------------------------
Second Call for Papers
--------------------------------------------
The 2nd Workshop on Evaluation and Comparison for NLP Systems (Eval4NLP), co-located with the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), invites the submission of long and short papers, of a theoretical or experimental nature, describing recent advances in system evaluation and comparison in NLP.
The GEM (Generation, Evaluation, and Metrics) workshop at ACL 2021 invites submissions of transformations to NL-Augmenter.
--------------------------------------------
--------------------------------------------
First Call for Participation
--------------------------------------------
The 2nd Workshop on "Evaluation & Comparison of NLP Systems" (co-located with EMNLP 2021) is organizing a shared task on Explainable Quality Estimation. The call for participation is described below. For more details, please visit the shared task website: https://eval4nlp.github.io/sharedtask.html
SemEval-2022: International Workshop on Semantic Evaluation
https://semeval.github.io/
We invite proposals for tasks to be run as part of SemEval-2022. SemEval (the International Workshop on Semantic Evaluation) is an ongoing series of evaluations of computational semantics systems, organized under the umbrella of SIGLEX, the Special Interest Group on the Lexicon of the Association for Computational Linguistics.
Final call for papers and shared task submissions for the Workshop on Generation, Evaluation, and Metrics (GEM) at ACL ’21
=========
Call for Participation
=========
Update April 22: Our paper submission deadline has been extended to May 3! Please submit your papers via the SoftConf link listed below. The shared task submission deadline is May 14.