August 08, 2022 | BY opitz
Contact:
Daniel Deutsch
Can Udomcharoenchaikit
Juri Opitz
The 3rd Workshop on Evaluation and Comparison for NLP systems (Eval4NLP, https://eval4nlp.github.io/2022), co-located with the 2022 Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (AACL 2022, https://www.aacl2022.org/), invites the submission of long and short papers, of a theoretical or experimental nature, describing recent advances in system evaluation and comparison in NLP.
May 30, 2022 | BY opitz
Contact:
Daniel Deutsch
Juri Opitz
Can Udomcharoenchaikit
Fair evaluations and comparisons are of fundamental importance to the NLP community for properly tracking progress, especially within the current deep learning revolution, with new state-of-the-art results reported at ever-shorter intervals. This concerns the creation of benchmark datasets that cover typical use cases and blind spots of existing systems, the design of metrics for evaluating the performance of NLP systems on different dimensions, and the reporting of evaluation results in an unbiased manner.
February 11, 2022 | BY b.ross
Location:
Co-located with ICWSM 2022
Contact:
Björn Ross
Roberto Navigli
Agostina Calabrese
The automatic or semi-automatic analysis of textual data is a key approach to analysing the massive amounts of user-generated content online, from the identification of sentiment in text and topic classification to the detection of abusive language, misinformation or propaganda. However, the development of such systems faces a crucial challenge.
July 17, 2021 | BY plkumjorn
Event Dates:
10 Nov 2021 to 11 Nov 2021
Location:
in conjunction with EMNLP 2021
--------------------------------------------
Second Call for Papers
--------------------------------------------
The 2nd Workshop on Evaluation and Comparison for NLP systems (Eval4NLP), co-located with the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), invites the submission of long and short papers, of a theoretical or experimental nature, describing recent advances in system evaluation and comparison in NLP.
July 05, 2021 | BY smille
Contact:
Kaustubh Dhole
Sebastian Gehrmann
The GEM (Generation, Evaluation, Metrics) workshop at ACL 2021 is inviting submissions of transformations to NL-Augmenter.
June 14, 2021 | BY plkumjorn
Event Dates:
10 Nov 2021 to 11 Nov 2021
Location:
in conjunction with EMNLP 2021
--------------------------------------------
First Call for Papers
--------------------------------------------
The 2nd Workshop on Evaluation and Comparison for NLP systems (Eval4NLP), co-located with the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), invites the submission of long and short papers, of a theoretical or experimental nature, describing recent advances in system evaluation and comparison in NLP.
May 25, 2021 | BY plkumjorn
Location:
in conjunction with EMNLP 2021
--------------------------------------------
First Call for Participation
--------------------------------------------
The 2nd Workshop on "Evaluation & Comparison of NLP Systems" (co-located with EMNLP 2021) is organizing a shared task on Explainable Quality Estimation. The call for participation is described below. For more details, please visit our shared task website -- https://eval4nlp.github.io/sharedtask.html
February 06, 2021 | BY Guy Emerson
Location:
Co-located with a major NLP conference in 2022
Contact:
Nathan Schneider
Alexis Palmer
Guy Emerson
Natalie Schluter
SemEval-2022: International Workshop on Semantic Evaluation
https://semeval.github.io/
We invite proposals for tasks to be run as part of SemEval-2022. SemEval (the International Workshop on Semantic Evaluation) is an ongoing series of evaluations of computational semantics systems, organized under the umbrella of SIGLEX, the Special Interest Group on the Lexicon of the Association for Computational Linguistics.
February 03, 2021 | BY sgehrmann
Contact:
Sebastian Gehrmann
Antoine Bosselut
Esin Durmus
Varun Prashant Gangal
Yacine Jernite
Laura Perez-Beltrachini
Samira Shaikh
Wei Xu
Final call for papers and shared task submissions for Workshop on Generation, Evaluation, and Metrics (GEM) at ACL ’21
=========
Call for Participation
=========
Update April 22: Our paper submission deadline has been extended to May 3! Please submit your papers via the SoftConf link listed below. The shared task submission deadline is May 14.
May 13, 2019 | BY dirkh1981
Aggregating and analysing crowdsourced annotations for NLP (AnnoNLP)
The workshop is hosted by EMNLP-IJCNLP 2019 which will take place in Hong Kong. Full details here: http://dali.eecs.qmul.ac.uk/annonlp