AAAI 2025 Tutorial:
Hallucinations in Large Multimodal Models

Vipula Rawte¹, Aman Chadha², Amit Sheth¹, Amitava Das¹
¹AIISC, ²Amazon

Wednesday, February 26, 2025, 08:30–10:15 am EST

About this tutorial

Large Language Models (LLMs) have made significant strides in generating human-like text, but their tendency to hallucinate, i.e., to produce factually incorrect or fabricated information, remains a pressing issue. This tutorial provides a comprehensive exploration of hallucinations in LLMs, introducing participants to the key concepts and challenges in this domain. We will cover types of hallucination, including Factual Mirage and Silver Lining, and present the latest approaches for benchmarking, detection, and mitigation. Understanding hallucination is particularly critical from a multimodal standpoint, as Vision-Language Models (VLMs) can exacerbate the problem by blending hallucinated text with misleading images or video. The tutorial will offer practical techniques to reduce hallucinations using both black-box and gray-box methods. Designed for researchers and professionals in generative AI, this tutorial bridges the gap between emerging research and practical solutions, providing attendees with insights and tools to enhance the factual accuracy of LLM outputs. Participants will gain a deeper understanding of the complexities surrounding LLM hallucination and be equipped with strategies to drive future innovations in the field.
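As a flavor of the black-box techniques discussed in Section 2, the sketch below illustrates one common family of approaches: sampling-based self-consistency checking, where an answer is flagged as a likely hallucination if independent resamples of the same model disagree with it. This is a minimal illustration, not the tutorial's own code: `query_llm` is a hypothetical placeholder for any completion API, and the lexical-overlap scorer stands in for the NLI- or QA-based scorers used in practice.

# A minimal sketch of black-box hallucination detection via self-consistency
# sampling. Assumes only the ability to sample outputs (no logits or weights).
from difflib import SequenceMatcher
from typing import Callable, List


def consistency_score(answer: str, samples: List[str]) -> float:
    """Average lexical overlap between the answer and resampled answers.

    A low score means the resamples disagree with the original answer,
    which is treated here as a signal of possible hallucination.
    """
    if not samples:
        return 0.0
    ratios = [SequenceMatcher(None, answer, s).ratio() for s in samples]
    return sum(ratios) / len(ratios)


def flag_hallucination(
    prompt: str,
    query_llm: Callable[[str, float], str],  # hypothetical: (prompt, temperature) -> text
    n_samples: int = 5,
    threshold: float = 0.5,
) -> dict:
    """Black-box check: compare a greedy answer against stochastic resamples."""
    answer = query_llm(prompt, 0.0)                                # greedy "reference" answer
    samples = [query_llm(prompt, 1.0) for _ in range(n_samples)]   # high-temperature resamples
    score = consistency_score(answer, samples)
    return {
        "answer": answer,
        "consistency": score,
        "likely_hallucination": score < threshold,
    }

Gray-box methods covered in the tutorial additionally exploit token-level log-probabilities or internal states, which this black-box sketch deliberately avoids.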

Schedule

Our tutorial will be held on February 26. Slides are available here.

Time Section Presenter
08:30–08:45 Section 1: Introduction to Hallucination in LLMs Amitava
08:45–09:45 Section 2: Hallucination Detection/Mitigation Aman/Vipula
09:45–10:00 Section 3: Open Challenges Amitava
10:00–10:15 Q & A Session

BibTeX

@article{hallucination-llm-tutorial,
  author  = {Rawte, Vipula and Chadha, Aman and Sheth, Amit and Das, Amitava},
  title   = {AAAI 2025 Tutorial: Hallucination in Large Multimodal Models},
  journal = {AAAI 2025},
  year    = {2025},
}