In the fast-paced field of Large Language Models (LLMs), hallucination is a prominent challenge. Despite continuous efforts to address it, it remains a highly active area of research. The intricacies of the problem can be daunting, especially for those new to the field. This tutorial aims to bridge that knowledge gap by introducing hallucination in LLMs and comprehensively covering its key aspects: benchmarking, detection, and mitigation techniques. We will also examine the constraints and shortcomings of current approaches, giving participants insights to guide future research.
Our tutorial will be held on May 25. Slides are available here.
| Time | Section | Presenter |
|---|---|---|
| 09:00–09:45 | Section 1: Introduction | Vipula |
| 09:45–10:30 | Section 2: Hallucination Detection | Aman |
| 10:30–11:00 | Coffee break | |
| 11:00–11:45 | Section 3: Hallucination Mitigation | Vipula |
| 11:45–12:30 | Section 4: Open Challenges | Amitava |
| 12:30–13:00 | Q & A Session | |
@article{hallucination-llm-tutorial,
  author  = {Rawte, Vipula and Chadha, Aman and Sheth, Amit and Das, Amitava},
  title   = {{LREC-COLING} 2024 Tutorial: Hallucination in Large Language Models},
  journal = {LREC-COLING 2024},
  year    = {2024}
}