The Responsible Language Models (ReLM) workshop focuses on the theoretical and practical challenges of designing and deploying responsible Language Models (LMs). The workshop has a strong multidisciplinary component, promoting dialogue and collaboration to develop more trustworthy and inclusive technology. We invite discussions and research on key topics such as bias identification and quantification, bias mitigation, transparency, privacy and security issues, hallucination, uncertainty quantification, and various other risks in LMs.
Topics: We are interested in, but not limited to, the following topics:
- Explainability and interpretability techniques for different LLM training paradigms
- Privacy, security, data protection, and consent issues for LLMs
- Bias and fairness quantification, identification, mitigation, and trade-offs for LLMs
- Robustness, generalization, and shortcut-learning analysis and mitigation for LLMs
- Uncertainty quantification and benchmarks for LLMs
- Ethical AI principles, guidelines, dilemmas, and governance for responsible LLM development and deployment