*NeuralGen 2019* Deadline Extension: March 13 – Methods for Optimizing and Evaluating Neural Language Generation – NAACL 2019

Event Notification Type: 
Call for Papers
Abbreviated Title: 
NeuralGen 2019
Location: 
NAACL
Thursday, 6 June 2019 to Friday, 7 June 2019
City: 
Minneapolis
State: 
MN
Country: 
USA
Contact: 
Antoine Bosselut
Submission Deadline: 
Wednesday, 13 March 2019

******************************************************
DEADLINE EXTENSION: MARCH 13

We have extended the submission deadline to March 13!
******************************************************

NeuralGen 2019: Workshop on Methods for Optimizing and Evaluating Neural Language Generation

Submission deadline: March 13, 2019 (extended)
Workshop date: June 6, 2019 - Minneapolis, USA (co-located with NAACL)
http://neuralgen.io

=======================================================================================
Description:

Advances in training deep neural networks for sequence modeling have led to an active and rising interest in language generation research. New approaches based on deep learning have allowed the community to tackle numerous tasks, achieving impressive results on machine translation [1], summarization [2], program generation [3], and more [4,5]. However, prevailing neural methods remain imperfect. Conditional language modeling losses fail to capture global context [6]. Automatic evaluation metrics correlate poorly with human judgments [7], biasing development toward what the metrics reward rather than the intended model behavior, while high-quality human evaluation remains costly. Effective transfer across tasks, languages, and domains holds promise [8,9], but has been only preliminarily explored.
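To make the first of these limitations concrete: the prevailing training objective is teacher-forced maximum likelihood, which scores each output token only against the gold prefix, so no term in the loss directly rewards global, sequence-level qualities such as coherence. A minimal sketch of that objective, in our own notation rather than the workshop's:

    % Teacher-forced conditional maximum likelihood: the model with
    % parameters theta generates y = (y_1, ..., y_T) from input x.
    % The loss decomposes token by token, which is why sequence-level
    % properties of the output are never directly optimized.
    \mathcal{L}_{\mathrm{MLE}}(\theta) = -\sum_{t=1}^{T} \log p_\theta(y_t \mid y_{<t}, x)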

We invite members of the community to join a discussion on addressing these issues and to contribute work that explores the next frontiers of neural language generation. The workshop aims to surface the commonalities and differences among approaches to these problems, and we are excited to receive submissions on the following topics:

- Modeling Advancements:
Beyond maximum likelihood training (e.g., risk losses, reinforcement learning objectives, variational approaches, adversarial training, pretrained discriminators, other novel loss functions; a short illustrative sketch follows the topic list)
Unsupervised, weakly supervised, and semi-supervised language generation
Editing models
Mixing neural and template-based generation
Human-in-the-loop learning
Beyond teacher-forcing (beam search during training, non-autoregressive generation)

- Evaluation:
New automatic metrics for evaluating different characteristics of coherent language
Evaluation using pretrained models
Proposing better human evaluation strategies

- Generalization:
Transfer learning (unsupervised pre-training for generation, low-resource generation, domain adaptation)
Multi-task learning
Model distillation

- Analysis:
Model analysis, interpretability and/or visualizations
Error analysis of machine-generated language
Analysis of evaluation metrics
Benefits/drawbacks of different loss functions
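
To illustrate the "beyond maximum likelihood" direction flagged above, below is a minimal sketch of one such objective: a REINFORCE-style sequence-level loss with a greedy (self-critical) baseline. It assumes PyTorch; the function name, the toy dimensions, and the scalar rewards are hypothetical choices of ours, not something prescribed by the workshop.

    import torch
    import torch.nn.functional as F

    def reinforce_loss(logits, sampled_ids, reward, baseline):
        """Policy-gradient loss for one sampled output sequence.

        logits:      (seq_len, vocab) unnormalized decoder scores
        sampled_ids: (seq_len,) token ids sampled from the model
        reward:      scalar sequence-level score of the sample
                     (e.g., BLEU or a learned discriminator score)
        baseline:    scalar score of a greedy decode, subtracted
                     to reduce gradient variance (self-critical)
        """
        log_probs = F.log_softmax(logits, dim=-1)
        # Log-probability the model assigned to each sampled token.
        tok_lp = log_probs.gather(1, sampled_ids.unsqueeze(1)).squeeze(1)
        # Ascend on (reward - baseline) * log p(sample): samples that beat
        # the greedy baseline are reinforced, worse ones are suppressed.
        return -(reward - baseline) * tok_lp.sum()

    # Toy usage with random tensors standing in for a real decoder and metric.
    vocab_size, seq_len = 50, 7
    logits = torch.randn(seq_len, vocab_size, requires_grad=True)
    sample = torch.randint(vocab_size, (seq_len,))
    loss = reinforce_loss(logits, sample, reward=0.62, baseline=0.55)
    loss.backward()  # gradients favor samples that outscore the baseline

The same scaffold accommodates risk losses or discriminator-based rewards by swapping in a different reward function.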

=======================================================================================
Invited speakers:

We are happy to announce an exciting list of invited speakers (to be finalized):

Kyunghyun Cho (New York University, Facebook AI Research)
Yejin Choi (University of Washington, Allen Institute for AI)
Hal Daumé III (University of Maryland, Microsoft Research)
He He (New York University, Amazon Web Services)
Graham Neubig (Carnegie Mellon University)
Alexander Rush (Harvard University)

=======================================================================================
Important Dates:

Call For Papers: January 7, 2019
Deadline for submission: *March 13, 2019*
Notification of acceptance: March 29, 2019
Deadline for camera-ready version: April 5, 2019
Workshop date: June 6 or 7, 2019 (TBD)

Note: All deadlines are 11:59 PM GMT-12 (anywhere-on-earth time).

=======================================================================================
Financial Assistance:

Partial financial assistance will be available to authors of papers who demonstrate significant financial need. Assistance will be allocated after accepted submissions are announced and before the workshop date.

=======================================================================================
Submission Guidelines:

Submissions may contain between 4 and 8 pages of content, with unlimited pages for references. Reviewing is double-blind, so submissions must not identify the authors or their affiliations. Please format papers using the standard NAACL style files. Submissions of work published or submitted elsewhere are also permitted; such work will not go through the review process or appear in the workshop proceedings, and will be accepted conditional on its relevance to the goals of the workshop. The presentation format and schedule will be announced before the camera-ready deadline.

Clarification about non-archival submissions: authors may ask for a submission to be non-archival, i.e., excluded from the official proceedings. The submission page will include an option for authors to opt out of the official workshop proceedings.

Submit via the Softconf START system: https://www.softconf.com/naacl2019/neuralgen/

=======================================================================================
References:

[1] Vaswani, Ashish, et al. "Attention Is All You Need." In NIPS. 2017.
[2] Celikyilmaz, Asli, et al. "Deep Communicating Agents for Abstractive Summarization." In NAACL. 2018.
[3] Ling, Wang, et al. "Latent Predictor Networks for Code Generation." In ACL. 2016.
[4] Bosselut, Antoine, et al. "Discourse-Aware Neural Rewards for Coherent Text Generation." In NAACL. 2018.
[5] Wiseman, Sam, Stuart M. Shieber, and Alexander M. Rush. "Challenges in Data-to-Document Generation." In EMNLP. 2017.
[6] Holtzman, Ari, et al. "Learning to Write with Cooperative Discriminators." In ACL. 2018.
[7] Novikova, Jekaterina, et al. "Why We Need New Evaluation Metrics for NLG." In EMNLP. 2017.
[8] Ramachandran, Prajit, et al. "Unsupervised Pretraining for Sequence to Sequence Learning." In EMNLP. 2017.
[9] Gu, Jiatao, et al. "Meta-Learning for Low-Resource Neural Machine Translation." In EMNLP. 2018.

=======================================================================================