The 3rd Workshop on Evaluation and Comparison of NLP Systems (Eval4NLP 2022)

Event Notification Type: 
Call for Papers
Abbreviated Title: 
Eval4NLP 2022
Location: 
Online
Sunday, 20 November 2022
Contact Email: 
eval4nlp@gmail.com
Contact: 
Daniel Deutsch
Juri Opitz
Can Udomcharoenchaikit
Submission Deadline: 
Saturday, 10 September 2022

Fair evaluation and comparison are of fundamental importance to the NLP community: they allow us to track progress properly, especially during the current deep learning revolution, in which new state-of-the-art results are reported at ever shorter intervals. This concerns the creation of benchmark datasets that cover typical use cases and blind spots of existing systems, the design of metrics for evaluating NLP systems along different dimensions, and the unbiased reporting of evaluation results.

Although certain aspects of NLP evaluation and comparison have been addressed in previous workshops (e.g., the WMT Metrics Shared Tasks, NeuralGen, NLG-Evaluation, and New Frontiers in Summarization), we believe that new insights and methodology, particularly over the last 2-3 years, have generated renewed interest in the workshop topic. The first edition in the series, Eval4NLP’20 (co-located with EMNLP’20), was the first workshop to take a broad, unifying perspective on the subject matter. The second edition, Eval4NLP’21 (co-located with EMNLP’21), extended this perspective. We believe the third edition will continue this tradition and establish the workshop as a recognized platform for presenting and discussing the latest advances in NLP evaluation methods and resources.

------------------------
Important Dates
------------------------

- Direct submission to Eval4NLP deadline: August 8, 2022
- Submission of papers and reviews from ARR to Eval4NLP: September 10, 2022
- Notification of acceptance: September 25, 2022
- Camera-ready papers due: October 10, 2022
- Workshop day: November 24, 2022

All deadlines are 11:59 pm UTC-12 (“Anywhere on Earth”).

The submission site (OpenReview) is available here: https://openreview.net/group?id=aclweb.org/AACL-IJCNLP/2022/Workshop/Eva...

----------
Topics
----------

We invite submissions on topics that include, but are not limited to, the following:

**Designing evaluation metrics**

- Metrics with desirable properties, e.g., high correlation with human judgments, the ability to distinguish high-quality outputs from mediocre and low-quality ones, robustness across input and output sequence lengths, and efficiency (see the illustrative sketch after this list);
- Reference-free evaluation metrics, which only require source text(s) and system predictions;
- Cross-domain metrics, which can reliably and robustly measure the quality of system outputs from heterogeneous modalities (e.g., image and speech), different genres (e.g., newspapers, Wikipedia articles and scientific papers) and different languages;
- Cost-effective methods for eliciting high-quality manual annotations; and
- Methods and metrics for evaluating interpretability and explanations of NLP models
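
For illustration only (not part of the call): a minimal Python sketch of correlating a metric’s scores with human judgments at the segment level, using hypothetical numbers in place of real metric outputs and annotations.

```python
# Minimal sketch: correlating hypothetical metric scores with hypothetical
# human judgments at the segment level.
from scipy.stats import pearsonr, kendalltau

# Hypothetical per-segment scores; in practice these would come from a metric
# and from a human annotation campaign.
metric_scores = [0.71, 0.45, 0.88, 0.32, 0.60]
human_scores = [75, 40, 90, 35, 55]

r, r_p = pearsonr(metric_scores, human_scores)
tau, tau_p = kendalltau(metric_scores, human_scores)
print(f"Pearson r   = {r:.3f} (p = {r_p:.3f})")
print(f"Kendall tau = {tau:.3f} (p = {tau_p:.3f})")
```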

**Creating adequate evaluation data or analyzing existing data**

- Studying coverage and diversity, e.g., size of the corpus, covered phenomena, representativeness of samples, distribution of sample types, variability among data sources, eras, and genres; and
- Studying the quality of annotations, e.g., consistency of annotations, inter-rater agreement, and bias checks (see the illustrative sketch after this list)
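
For illustration only (not part of the call): a minimal Python sketch of measuring inter-rater agreement with Cohen’s kappa, using hypothetical labels from two annotators.

```python
# Minimal sketch: Cohen's kappa over hypothetical annotations from two raters.
from sklearn.metrics import cohen_kappa_score

# Hypothetical labels assigned by two annotators to the same five items.
annotator_a = ["good", "bad", "good", "neutral", "bad"]
annotator_b = ["good", "bad", "neutral", "neutral", "bad"]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa = {kappa:.3f}")  # 1.0 = perfect agreement, 0 = chance level
```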

**Reporting correct results**

- Ensuring and reporting the statistical trustworthiness of results, e.g., via appropriate significance tests and by reporting score distributions rather than single-point estimates, to avoid chance findings (see the illustrative sketch after this list);
- Ensuring and reporting on reproducibility of experiments, e.g., quantifying the reproducibility of papers and issuing reproducibility guidelines; and
- Providing comprehensive and unbiased error analyses and case studies, avoiding cherry-picking and sampling bias.
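
For illustration only (not part of the call): a minimal Python sketch of a paired bootstrap resampling test over hypothetical per-example scores of two systems, reporting a score distribution and an approximate p-value rather than a single-point estimate.

```python
# Minimal sketch: paired bootstrap over hypothetical per-example scores.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-example scores (e.g., per-sentence metric scores).
system_a = np.array([0.62, 0.55, 0.71, 0.48, 0.66, 0.59, 0.73, 0.51])
system_b = np.array([0.58, 0.57, 0.65, 0.47, 0.61, 0.60, 0.69, 0.50])

n, n_boot = len(system_a), 10_000
deltas = np.empty(n_boot)
for i in range(n_boot):
    idx = rng.integers(0, n, size=n)  # resample examples with replacement
    deltas[i] = system_a[idx].mean() - system_b[idx].mean()

lo, hi = np.percentile(deltas, [2.5, 97.5])
print(f"mean delta A-B = {deltas.mean():.4f}, 95% CI = [{lo:.4f}, {hi:.4f}]")
print(f"p(delta <= 0)  = {(deltas <= 0).mean():.4f}")  # approximate one-sided p-value
```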

If you have any questions, do not hesitate to contact us at: eval4nlp@gmail.com