SemEval-3: 6th International Workshop on Semantic Evaluations

Event Notification Type: 
Call for Proposals
Abbreviated Title: 
SemEval
Contact: 
suresh@cs.york.ac.uk
dyuret@ku.edu.tr
Submission Deadline: 
Tuesday, 26 April 2011

Final Call for Task Proposals

The SemEval Programme Committee invites full proposals for tasks to be run as part of SemEval-3. We are primarily interested in new task proposals that differ from, or improve upon, the outline proposals already submitted. Please see the SemEval-3 website [click the 'Task Descriptions' link] for a summary of the outline proposals submitted so far.

We welcome tasks that can test an automatic system for semantic analysis of text, whether application-dependent or application-independent. We especially welcome tasks for different languages and cross-lingual tasks.

For SemEval-3 we particularly encourage the following aspects of task design:

Reuse of existing annotations and training data

The previous SemEval and Senseval workshops have generated large collections of annotated data. In addition, rich semantically annotated datasets such as OntoNotes, ANC, and FrameNet are readily available. To reduce the burden on task organisers, we encourage reuse of existing resources: where necessary, task organisers should create new test datasets while reusing previous training datasets. We realise that this may not always be feasible, and that for novel tasks it may be necessary to create new annotations for both training and testing purposes.

Common data formats

To ensure that new annotations conform to existing annotation standards, we encourage the use of existing data encoding standards such as MASC and UIMA. Where possible, reusing existing annotation standards and tools will make it easier for participants to take part in multiple tasks. In addition, the use of readily available tools should make it easier for participants to spot bugs and improve their systems.

Common texts and multiple annotations

For many tasks, finding suitable texts from which to build training and testing datasets can in itself be a challenge, and text selection is often somewhat ad hoc. To make it easier for task organisers to find suitable texts, we encourage the use of resources such as Wikipedia, ANC, and OntoNotes. Where this makes sense, the SemEval programme committee will encourage task organisers to share the same texts across different tasks. In due course, we hope that this process will lead to multiple semantic annotations of the same texts.

Umbrella tasks

To reduce fragmentation of similar tasks, we encourage task organisers to propose larger tasks that include several subtasks. For example, Word Sense Induction in Japanese and Word Sense Induction in English could be combined into a single umbrella task with several subtasks. We welcome proposals for such larger tasks. In addition, the programme committee will actively encourage organisers proposing similar tasks to combine their efforts into larger umbrella tasks.

Application oriented tasks

We welcome tasks devoted to developing novel applications of computational semantics. As an analogy, the TREC Question Answering (QA) track was devoted solely to building QA systems that could compete with current IR systems. Similarly, we encourage tasks with a clearly defined end-user application that showcases computational semantics, enhances our understanding of it, and extends the current state of the art.

IMPORTANT DATES

April 26, 2011: Final task proposals due
July 15, 2011: Completion of corpus selection [TBC]
August 30, 2011: Trial data completed [for task organisers] [TBC]
September 7, 2011: Call for participation [TBC]
April 10, 2012: Full training data available to participants [TBC]
November 1, 2012 onwards: Start of evaluation period [task dependent]
February 1, 2013: End of evaluation period [TBC]
March 1, 2013: Paper submission deadline [TBC]

Summer 2013: Workshop co-located with ACL or NAACL [TBC]

SUBMISSION DETAILS

Full task proposals should ideally contain:

* A description of the task (max 1 page)
* If related to an existing outline proposal, an indication of how it differs from or improves upon that proposal. The SemEval-3 website [click the 'Task Descriptions' link] contains details of existing outline proposals.
* A description of how the training/testing data will be built and/or procured, and how this fits in with the corpora-sharing recommendations; see the SemEval-3 website for details [click the 'Task Corpora Sharing' link].
* The evaluation methodology to be used, including clear evaluation criteria
* The anticipated availability of the necessary resources to participants (copyright, etc.)
* The resources required to prepare the task (computation and annotation time, etc.)

If you are not yet in a position to provide all of these details, that is acceptable, but please give some thought to each and present a sketch of your initial ideas. We will gladly give feedback.

Please submit proposals as soon as possible, preferably by electronic mail in plain ASCII text to the SemEval-3 email address:

semeval [at] cs.york.ac.uk

CHAIRS

Suresh Manandhar, University of York, UK
Deniz Yuret, KoƧ University, Turkey

SemEval-3 DISCUSSION GROUP

Please join our discussion group at

semeval3 [at] googlegroups.com

to receive announcements and participate in discussions.

SemEval-3 WEBSITE

http://www.cs.york.ac.uk/semeval

SemEval-3 WIKI

http://aclweb.org/aclwiki/index.php?title=SemEval_3