Call for papers
Large language models (LLMs) are increasingly deployed in real-world applications, raising urgent concerns about the values they encode, reflect, and prioritize. While value alignment has become a central topic in AI research, existing studies have largely focused on generic safety or on alignment to a single core value (i.e., value monism), leaving the challenge of pluralistic value alignment underexplored. In practice, human values are diverse and context-dependent. Building LLMs that can recognize, reason about, and align with pluralistic values is therefore both a technical and a societal challenge.
The First Workshop on Pluralistic Value Alignment of LLMs (PlurVA-LLM) aims to provide a dedicated venue for advancing research on this emerging topic. The workshop will bring together researchers from NLP, machine learning, AI safety, social science, philosophy, and related fields to discuss the foundations, methods, evaluation, and applications of pluralistic value alignment in LLMs.
Topics of interest
We welcome submissions on topics including, but not limited to:
1. Theoretical foundations and formalizations of pluralistic value alignment;
2. Alignment methods for pluralistic values in LLMs;
3. Benchmarks and evaluation protocols for pluralistic value alignment;
4. Human-AI collaboration for constructing and curating value-sensitive datasets;
5. Interpretability and analysis of value alignment in LLMs;
6. Pluralistic value alignment in downstream applications and real-world deployment;
7. Multilingual, multicultural, and low-resource perspectives on value alignment;
8. Pluralistic value alignment in multimodal models and systems.
The workshop will also host a shared task on evaluating LLMs' value alignment and normative reasoning across adversarial, daily, and principle-driven settings.
Organizers
Deyi Xiong, Tianjin University
António Branco, University of Lisbon
Hongming Zhang, Macau University of Science and Technology
Yue Dong, University of California, Riverside
Benyou Wang, The Chinese University of Hong Kong, Shenzhen
Wenxuan Zhang, Singapore University of Technology and Design (SUTD)
Li Zhou, The Chinese University of Hong Kong, Shenzhen
Jingting Zheng, Tianjin University
For questions, please contact plurvallm2026 [at] outlook.com.