The tutorial “Uncertainty Quantification for Large Language Models” will return for its second edition at AAAI 2026 in Singapore. Following its debut at ACL 2025 in Vienna, where it attracted more than 300 participants, the tutorial continues to respond to the growing need for reliable, robust, and trustworthy large language models.
This edition retains the core conceptual foundations of uncertainty estimation while placing greater emphasis on emerging and rapidly advancing research directions. Covered topics include:
- Uncertainty estimation for reasoning-focused large language models
- Test-time scaling and reliability-aware decoding techniques
- Hallucination detection for large vision–language models (LVLMs)
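To give a flavor of the foundational material, here is a minimal, illustrative sketch (our own example, not taken from the tutorial) of two common baseline uncertainty signals for LLM outputs: the predictive entropy of a next-token distribution and the length-normalized likelihood of a generated sequence.

```python
import math

def predictive_entropy(token_probs):
    """Shannon entropy (in nats) of a next-token distribution.
    Higher entropy means the model is less certain about its next token."""
    return -sum(p * math.log(p) for p in token_probs if p > 0)

def sequence_confidence(step_probs):
    """Length-normalized likelihood (geometric mean of per-token
    probabilities), a common baseline confidence score for a sequence."""
    return math.exp(sum(math.log(p) for p in step_probs) / len(step_probs))

# A peaked distribution (confident) vs. a flat one (uncertain).
confident = [0.97, 0.01, 0.01, 0.01]
uncertain = [0.25, 0.25, 0.25, 0.25]

print(predictive_entropy(confident) < predictive_entropy(uncertain))  # True

# Per-token probabilities of the tokens the model actually generated.
print(round(sequence_confidence([0.9, 0.8, 0.95]), 3))
```

In practice the token probabilities would come from a model's softmax outputs; such scores are simple to compute but are known to be poorly calibrated for modern LLMs, which is part of what motivates the more advanced methods the tutorial covers.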
The tutorial is intended for researchers, practitioners, and students in natural language processing, machine learning, and AI safety who seek a comprehensive understanding of uncertainty in large language models. Participants will be introduced to state-of-the-art methodologies, practical applications, and key open challenges that shape the future of uncertainty-aware language technologies.
We look forward to welcoming you in Singapore and engaging in an in-depth discussion on uncertainty in LLMs.