ACL-HLT 2011 workshop on Distributional Semantics and Compositionality (DiSCo)
Call for Papers
Any NLP system that performs semantic processing relies on the assumption of semantic compositionality: the meaning of a phrase is determined by the meanings of its parts and the way they are combined. However, this assumption does not hold for lexicalized phrases such as idiomatic expressions, which pose problems not only for semantic but also for syntactic processing (see Sag et al. 2001). Distributional methods in semantics have proved highly effective across a wide range of natural language processing tasks, e.g., document retrieval, clustering and classification, question answering, query expansion, word similarity, synonym extraction, relation extraction, and textual advertisement matching in search engines (see Turney and Pantel 2010 for a detailed overview), yet they remain strongly limited by being inherently word-based. While dictionaries and other lexical resources contain multiword entries, such resources are expensive to obtain and not available to a sufficient extent for all languages; moreover, the definition of a multiword varies across resources, and non-compositional phrases are merely a subclass of multiwords.

The workshop brings together researchers interested in extracting non-compositional phrases from large corpora by applying distributional models that assign a graded compositionality score to a phrase, as well as researchers interested in expressing compositional meaning with such models. The score denotes the extent to which the compositionality assumption holds for a given expression and can be used, for example, to decide whether a phrase should be treated as a single unit in applications. We emphasize that the focus is on automatically acquiring semantic compositionality; approaches that employ prefabricated lists of non-compositional phrases should consider a different venue.
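One common way to operationalize such a graded compositionality score (a minimal sketch; the toy vectors and the additive composition are illustrative assumptions, not part of the workshop definition) is to compare the corpus-derived context vector of the whole phrase with a vector composed from the vectors of its parts. High similarity suggests the phrase is compositional; low similarity suggests it is not:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two sparse vectors (context -> weight dicts)."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def compositionality_score(phrase_vec, part_vecs):
    """Graded compositionality: similarity between the observed distributional
    vector of the phrase and an (additively) composed vector of its parts."""
    composed = {}
    for vec in part_vecs:
        for k, w in vec.items():
            composed[k] = composed.get(k, 0.0) + w
    return cosine(phrase_vec, composed)

# Toy co-occurrence vectors (contexts -> counts), invented for illustration:
red     = {"colour": 4, "bright": 2, "car": 1}
car     = {"drive": 5, "road": 3, "colour": 1}
red_car = {"drive": 4, "road": 2, "colour": 3, "bright": 1}   # compositional
hot     = {"temperature": 4, "sun": 2}
dog     = {"bark": 5, "pet": 3}
hot_dog = {"eat": 5, "mustard": 3, "bun": 2}                  # non-compositional

print(compositionality_score(red_car, [red, car]))   # high (close to 1)
print(compositionality_score(hot_dog, [hot, dog]))   # 0.0: no shared contexts
```

Real systems would of course derive the vectors from corpus counts and may use more sophisticated composition functions than addition; the point is only that the score is graded, not a binary multiword/non-multiword decision.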
This event consists of a main session and a shared task.
For the main session, we invite submission of papers on the topic of automatically acquiring a model for semantic compositionality. This includes, but is not limited to:
• Models of distributional similarity
• Graph-based models over word spaces
• Vector-space models for distributional semantics
• Applications of semantic compositionality
• Evaluation of semantic compositionality
Authors are invited to submit papers on original, unpublished work in the topic area of this workshop. In addition to long papers presenting completed work, we also invite short papers and demos:
- Long papers should present completed work and should not exceed 8 pages plus 1 page of references.
- Short papers/demos can present work in progress or the description of a system and should not exceed 4 pages plus 1 page of references.
As reviewing will be blind, please ensure that papers are anonymous. Papers should not include the authors' names and affiliations, or any references to websites, project names, etc. that would reveal the authors' identity.
The organizers will extract about 1000 candidate phrases from three large-scale, freely available web corpora, UkWaC, DeWaC and ItWaC (cf. http://wacky.sslmit.unibo.it/), containing POS-tagged English, German and Italian text respectively (approximately 300 phrases per language). The candidate phrases will be divided equally by syntactic/semantic relation (about 100 phrases per language for each of the relations Adjective-Noun, Subject-Verb and Verb-Object).
These data will be annotated (rated) by native speakers for semantic compositionality on a scale from 0 to 100. For example, "hot dog" will receive a rating close to 0, since it is non-compositional, while "red car" will receive a rating close to 100. Annotators will be recruited through web-based services such as Amazon Mechanical Turk. The organizers have previously carried out similar annotation work, and a web platform for annotation collection and (psycho-)linguistic experiments has already been developed.
Participants of the task are free to choose whatever methods and data resources they use in their submissions, except that prefabricated lists of multiwords are not allowed. However, since the data set is derived from the WaCky corpora, participants are strongly encouraged to use these freely available text collections to build their models of compositionality, ensuring the highest possible comparability of results. Furthermore, since the WaCky corpora are provided already POS-tagged and lemmatized, the workload on the participants' side is considerably reduced; this information (POS tags and lemmatization) may or may not be used. If needed, additional linguistic annotations or processing may also be added to the corpora.
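To illustrate how the tagged and lemmatized text can be put to use (a sketch only; it assumes the usual WaCky-style vertical format of one token per line with word, POS and lemma columns and <s>...</s> sentence markers, which participants should verify against the actual release), lemma co-occurrence counts for a word-space model can be collected directly:

```python
import io
from collections import defaultdict

# Hypothetical snippet in WaCky-style vertical format:
# word <TAB> POS <TAB> lemma, with <s>...</s> marking sentence boundaries.
sample = """<s>
The\tDT\tthe
red\tJJ\tred
car\tNN\tcar
stopped\tVBD\tstop
</s>
"""

def cooccurrences(lines, window=2):
    """Count lemma co-occurrences within +/- window tokens, per sentence."""
    counts = defaultdict(lambda: defaultdict(int))
    sent = []
    for line in lines:
        line = line.strip()
        if line == "<s>":
            sent = []
        elif line == "</s>":
            for i, lemma in enumerate(sent):
                for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                    if i != j:
                        counts[lemma][sent[j]] += 1
        elif line:
            sent.append(line.split("\t")[2])  # use the lemma column
    return counts

counts = cooccurrences(io.StringIO(sample))
print(dict(counts["car"]))  # -> {'the': 1, 'red': 1, 'stop': 1}
```

Working on lemmas rather than surface forms is one way the provided preprocessing reduces the participants' workload, especially for the morphologically richer German and Italian data.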
The data will be split into 40% training, 10% validation and 50% test. The training and validation portions will be made available to the participants, together with a scoring infrastructure. For the challenge, participants submit their system's output on the test set to the task organizers, who score the systems and provide the official scores. There will be two scoring methods: a) measuring the mean error, i.e., the average difference between system scores and test data scores, and b) binning the test data scores into three grades of compositionality (non-compositional, somewhat compositional, compositional), ordering the system output by score and optimally mapping the system output to the three bins. The motivation for a) is to reproduce the training data scores; the motivation for b) is to give credit to systems that order the phrases correctly by compositionality but scale scores differently -- something that is easily 'fixed' in applications by appropriate thresholds.
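The two scoring methods might be sketched as follows (an illustrative interpretation, not the official scoring script; the function names and the exhaustive search over contiguous bin boundaries are assumptions):

```python
def mean_error(system, gold):
    """Scoring a): average absolute difference between system and gold scores."""
    return sum(abs(s - g) for s, g in zip(system, gold)) / len(gold)

def best_bin_accuracy(system, gold_bins):
    """Scoring b), sketched: order items by system score, then try every way of
    cutting the ranking into three contiguous bins and keep the split that
    agrees best with the gold bins (0 = non-comp., 1 = somewhat, 2 = comp.)."""
    order = sorted(range(len(system)), key=lambda i: system[i])
    n, best = len(system), 0
    for cut1 in range(n + 1):
        for cut2 in range(cut1, n + 1):
            correct = sum(
                gold_bins[idx] == (0 if r < cut1 else 1 if r < cut2 else 2)
                for r, idx in enumerate(order)
            )
            best = max(best, correct)
    return best / n

gold = [5, 20, 55, 90, 95]     # 0-100 compositionality ratings (invented)
system = [50, 60, 70, 80, 90]  # same ordering as gold, but different scale
print(mean_error(system, gold))              # -> 23.0: scale mismatch penalised
gold_bins = [0, 0, 1, 2, 2]
print(best_bin_accuracy(system, gold_bins))  # -> 1.0: ordering is perfect
```

The toy example shows why both measures are needed: a system that ranks the phrases perfectly but on a shifted scale scores poorly under a) yet perfectly under b).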
Participants will further submit a 4-page system description for publication in the workshop volume.
Important Dates

Apr 01, 2011 Submission deadline
Apr 25, 2011 Notification of acceptance
May 06, 2011 Camera-ready deadline
Jun 24, 2011 Post-ACL Workshop
Program Committee

• Enrique Alfonseca, Google Research, Switzerland
• Tim Baldwin, University of Melbourne, Australia
• Marco Baroni, University of Trento, Italy
• Paul Buitelaar, National University of Ireland, Ireland
• Chris Brockett, Microsoft Research, Redmond, US
• Tim van de Cruys, INRIA, France
• Stefan Evert, University of Osnabrück, Germany
• Antske Fokkens, Saarland University, Germany
• Silvana Hartmann, TU Darmstadt, Germany
• Alfio Massimiliano Gliozzo, IBM, Hawthorne, NY, USA
• Mirella Lapata, University of Edinburgh, UK
• Ted Pedersen, University of Minnesota, Duluth, USA
• Yves Peirsman, Stanford University, USA
• Peter D. Turney, National Research Council Canada, Canada
• Magnus Sahlgren, Gavagai, Sweden
• Sabine Schulte im Walde, University of Stuttgart, Germany
• Serge Sharoff, University of Leeds, UK
• Anders Søgaard, University of Copenhagen, Denmark
• Daniel Sonntag, German Research Center for AI, Germany
• Diana McCarthy, Lexical Computing Ltd., UK
• Dominic Widdows, Google, USA
Organizers

• Chris Biemann, San Francisco, USA
• Eugenie Giesbrecht, FZI Research Center at the University of Karlsruhe, Germany
• Emiliano Guevara, Institute for Linguistics and Scandinavian Studies, University of Oslo, Norway