NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following

Event Notification Type: 
Call for Papers
Location: 
New Orleans Ernest N. Morial Convention Center
Date: 
Friday, 15 December 2023
State: 
Louisiana
Country: 
United States
City: 
New Orleans
Contact: 
Qinyuan Ye
Yizhong Wang
Shayne Longpre
Yao Fu
Daniel Khashabi
Submission Deadline: 
Sunday, 1 October 2023

We are excited to announce the first Workshop on Instruction Tuning and Instruction Following at NeurIPS 2023!

We are organizing this workshop to facilitate discussion on advancing instruction tuning methodologies and constructing general-purpose instruction-following models. We believe such a workshop is especially timely: the prevalence of proprietary models with restricted access creates the need for an open platform that encourages discussion. Moreover, we aim to foster interdisciplinary collaboration by bringing together researchers from diverse fields such as natural language processing, computer vision, robotics, human-computer interaction, and AI safety to share their latest findings and explore potential avenues for future research.

Centering on “instructions,” we invite submissions on topics including but not limited to the following:

* Modeling: algorithms and pipelines for learning from instructions and human feedback; designing training objectives and rewards; training and inference efficiency
* Data Collection: crowd-sourcing; synthetic data generation; data democratization
* Evaluation and Oversight: effective and reliable oversight over existing models; enforcing guardrails and guarantees for model behaviors; interpretability and analysis
* Engineering and Open-sourcing: best practices in training, evaluation, and deployment; open-sourcing efforts; openness and reproducibility
* Applications: long-context, multi-round and personalized instruction-following models
* Multimodal and Multidisciplinary: instruction-following models for computer vision, robotics, games, art, etc.
* Limitations, Risks and Safety: bias and fairness; factuality and hallucination; safety concerns arising from instruction-following models
* Other adjacent research topics (e.g., in-context learning, prompting, multi-task learning) that enable better responses to instructions in dynamic environments