The goal of this workshop is to discuss new methods for language generation that address recurring problems in existing generation techniques (e.g., bland, repetitive language), as well as novel techniques for robustly evaluating and interpreting model output.
We are accepting papers in the following areas:
- Novel architectures and new approaches to training models:
Beyond maximum-likelihood training (e.g., risk losses, reinforcement-learning objectives, variational approaches, adversarial training, pretrained discriminators, other novel loss functions); unsupervised, weakly supervised, and semi-supervised language generation; editing models; mixing neural and template-based generation; human-in-the-loop learning; beyond teacher forcing (beam search during training, non-autoregressive generation).
- Evaluation:
New automatic metrics for evaluating different characteristics of coherent language, evaluation using pretrained models, and better strategies for human evaluation.
- Generalization:
Transfer learning (unsupervised pretraining for generation, low-resource generation, domain adaptation), multi-task learning, and model distillation.
- Analysis:
Model analysis, interpretability and/or visualization, error analysis of machine-generated language, analysis of evaluation metrics, and benefits/drawbacks of different loss functions.