March 07, 2024 | BY wzaghouani
Abbreviated Title:
Shared Task on News Media Narratives (FIGNEWS 2024)
Calling all NLP, Digital Humanities and media analysis enthusiasts! Participate in the "Framing the Israel War on Gaza" (FIGNEWS) shared task and play a pivotal role in shaping media narrative research. Engage in creating guidelines, annotating a diverse multilingual corpus, and pushing the boundaries of NLP!
Task Highlights:
1. Guidelines Creation: Craft comprehensive annotation guidelines and set a benchmark in NLP research.
January 25, 2024 | BY nlgcat
Location:
Part of HumEval at LREC-COLING
Contact:
Anya Belz
Craig Thomson
Ehud Reiter
January 24, 2024 | BY oscar.sainz
Contact:
Oscar Sainz
Iker García Ferrero
Eneko Agirre
Jon Ander Campos
Alon Jacovi
Yanai Elazar
Yoav Goldberg
We invite you to participate and submit your work to the First Workshop on Data Contamination (CONDA) co-located with ACL 2024 in Bangkok, Thailand.
August 19, 2023 | BY opitz
Contact:
Daniel Deutsch
Rotem Dror
Juri Opitz
The 4th Workshop on Evaluation and Comparison for NLP systems (Eval4NLP), co-located with the 2023 Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (AACL 2023), invites the submission of long and short papers, of a theoretical or experimental nature, describing recent advances in system evaluation and comparison in NLP.
August 07, 2023 | BY opitz
Abbreviated Title:
LLMs as Explainable Metrics
Contact:
christoph.leiter@uni-bielefeld.de
opitz@cl.uni-heidelberg.de
steffen.eger@uni-bielefeld.de
Dear colleagues,
you are invited to participate in the Eval4NLP 2023 shared task on **Prompting Large Language Models as Explainable Metrics**.
Please find more information below and on the shared task webpage: https://eval4nlp.github.io/2023/shared-task.html
Important Dates
July 18, 2023 | BY opitz
Contact:
Daniel Deutsch
Rotem Dror
Juri Opitz
Dear Colleagues,
the 4th Workshop on Evaluation and Comparison for NLP systems (Eval4NLP), co-located with the 2023 Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (AACL 2023), invites the submission of long and short papers, of a theoretical or experimental nature, describing recent advances in system evaluation and comparison in NLP.
**Important Dates**
All deadlines are 11:59 pm UTC-12 (“Anywhere on Earth”).
May 25, 2023 | BY Anya Belz
ReproNLP 2023: First Call for Participation
Background
May 16, 2023 | BY Eleftherios Avramidis
Contact:
Ondrej Bojar
Eleftherios Avramidis
Dear all,
the “Test suites” sub-task will be included for the sixth time in the General MT Shared Task of the Conference on Machine Translation (WMT23).
*OVERVIEW*
Test suites are custom extensions to the test sets of the General MT Shared Task, constructed to focus on particular aspects of the MT output. They consist of a source-side test set and a customized evaluation service. Unlike the standard evaluation process, which produces generic quality scores, test suites often produce separate fine-grained results for each phenomenon.
February 13, 2023 | BY b.ross