3rd CfP: The 2nd Workshop on Continuous Vector Space Models and their Compositionality (deadline extended)

Event Notification Type: 
Call for Papers
Abbreviated Title: 
Location: 
Campus Johanneberg of Chalmers University of Technology
Date: 
Sunday, 27 April 2014
Organizers: 
Alexandre Allauzen
Raffaella Bernardi
Edward Grefenstette
Hugo Larochelle
Christopher Manning
Scott Wen-tau Yih
Submission Deadline: 
Thursday, 30 January 2014

In recent years, there has been a growing interest in algorithms that learn and use continuous representations for words, phrases, or documents in many natural language processing applications. Influential proposals that illustrate this trend include, among many others, latent Dirichlet allocation, neural network based language models, and spectral methods. These approaches are motivated by improving the generalization power of standard discrete models, by dealing with data sparsity, and by efficiently handling wide contexts. Despite their success, single-word vector space models are limited in that they do not capture compositionality, which prevents a deeper understanding of the semantics of longer phrases or sentences.

With the growing popularity of these neural and probabilistic methods of language processing, the scope of this second workshop is extended to theoretical and conceptual questions regarding:

  • their relation to unsupervised distributional representations,
  • how they can encompass the compositional aspects of formal models of semantics,
  • the role of linguistic theory in the design and development of these methods.

Pertinent questions include: Should phrase representations and word representations be of the same sort? Do different linguistic levels require different modelling approaches? Is compositionality determined by syntax, and if so, how do we learn or define it? Should word representations be fixed and obtained distributionally, or should the encoding be variable? Should word representations be task-specific, or should they be general?

In this workshop, we invite submissions of papers on continuous vector space models for natural language processing. Topics of interest include, but are not limited to:

  • learning algorithms for continuous vector space models,
  • their compositionality,
  • their use in NLP applications,
  • spectral learning for NLP,
  • neural networks for NLP,
  • phrase, sentence, and document-level distributional representations,
  • tensor models,
  • distributed semantic representations,
  • the role of syntax in compositional models,
  • formal and distributional semantic models.


The workshop will feature presentations from two invited speakers:

  • Geoffrey Zweig (Microsoft Research)
  • Ivan Titov (University of Amsterdam, Netherlands)


Authors should submit a full paper of up to 8 pages in PDF format, with up to 2 additional pages for references. The reported research should be substantially original. Papers will be presented orally or as posters. All submissions must follow the EACL 2014 formatting requirements (http://www.eacl2014.org/files/eacl-2014-styles.zip). Reviewing will be double-blind, so no author information should be included in the papers, and self-references should be avoided. Submissions must be made through the Softconf website set up for this workshop:


Accepted papers will appear in the workshop proceedings, where no distinction will be made between papers presented orally or as posters.


Important Dates: 

  • January 30th, 2014: Submission deadline (extended)
  • February 20th, 2014: Notification of acceptance
  • March 3rd, 2014: Camera-ready deadline
  • April 27th, 2014: Workshop



Programme Committee: 

  • Nicholas Asher (IRIT-Toulouse)
  • Marco Baroni (University of Trento)
  • Yoshua Bengio (Université de Montréal)
  • Gemma Boleda (University of Texas)
  • Antoine Bordes (Université Technologique de Compiègne)
  • Johan Bos (University of Groningen)
  • Léon Bottou (Microsoft Research)
  • Xavier Carreras (Universitat Politècnica de Catalunya)
  • Lucas Champollion (New York University)
  • Stephen Clark (University of Cambridge)
  • Shay Cohen (Columbia University)
  • Ido Dagan (Bar Ilan University)
  • Ronan Collobert (IDIAP Research Institute, Switzerland)
  • Pino Di Fabbrizio (Amazon)
  • Georgiana Dinu (University of Trento)
  • Kevin Duh (Nara Institute of Science and Technology)
  • Dean Foster (University of Pennsylvania)
  • Alessandro Lenci (University of Pisa)
  • Louise McNally (Universitat Pompeu Fabra)
  • Fabio Massimo Zanzotto (Università degli Studi di Roma)
  • Mirella Lapata (University of Edinburgh)
  • Andriy Mnih (Gatsby Computational Neuroscience Unit)
  • Larry Moss (Indiana University)
  • Diarmuid Ó Seaghdha (University of Cambridge)
  • Sebastian Pado (Universität Stuttgart)
  • Martha Palmer (University of Colorado)
  • John Platt (Microsoft Research)
  • Maarten de Rijke (University of Amsterdam)
  • Mehrnoosh Sadrzadeh (University of London)
  • Mark Steedman (University of Edinburgh)
  • Chung-chieh Shan (Indiana University)
  • Peter Turney (NRC)
  • Jason Weston (Google)
  • Guillaume Wisniewski (LIMSI-CNRS/Université Paris-Sud)