Abstract
As Large Language Models (LLMs) become integral to Natural Language Processing (NLP) applications, their tendency to generate hallucinations (outputs misaligned with input data or real-world facts) presents a significant challenge. This paper introduces a novel approach that leverages Agent AI within the LangGraph system to systematically detect and mitigate LLM hallucinations. The framework classifies model responses into three distinct categories: Hallucination Knowledge Positive (HK+), Hallucination Knowledge Negative (HK-), and normal responses. HK+ denotes errors in which the model possesses the relevant knowledge but still produces an incorrect answer, whereas HK- denotes hallucinations arising from insufficient knowledge; normal responses are accurate outputs that require no intervention. The proposed method applies corrective Retrieval-Augmented Generation (RAG) to HK- cases by supplementing the missing knowledge, activates a human-in-the-loop mechanism for HK+ cases to rectify errors and ensure accuracy, and passes normal responses through without additional processing. This structured approach improves the precision and reliability of LLMs across diverse application scenarios. By integrating Agent AI for dynamic classification and targeted intervention, the LangGraph-based system substantially reduces hallucination rates, offering a robust solution for enhancing LLM performance in real-world deployments. This research provides new perspectives and methodologies for advancing LLM stability and dependability.
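
Conceptually, the routing described in this abstract can be expressed as a small LangGraph state graph: a classifier node labels each response as HK+, HK-, or normal, and conditional edges dispatch it to human review, corrective RAG, or termination. The sketch below is illustrative only, assuming LangGraph's StateGraph API; the node functions (classify_response, corrective_rag, human_review) are hypothetical placeholders rather than the paper's implementation.

# Minimal, illustrative sketch of the classify-and-route structure described above.
# Assumes LangGraph's StateGraph API; node bodies are hypothetical placeholders.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END


class HallucinationState(TypedDict):
    question: str
    answer: str
    label: str  # "HK+", "HK-", or "normal"


def classify_response(state: HallucinationState) -> HallucinationState:
    # Placeholder: an Agent-AI classifier would assign "HK+", "HK-", or "normal".
    return state


def corrective_rag(state: HallucinationState) -> HallucinationState:
    # Placeholder: retrieve external evidence and regenerate the answer (HK- path).
    return state


def human_review(state: HallucinationState) -> HallucinationState:
    # Placeholder: escalate to a human reviewer to correct the answer (HK+ path).
    return state


def route(state: HallucinationState) -> str:
    # Dispatch each classified response to its intervention; normal responses end.
    return {"HK-": "corrective_rag", "HK+": "human_review"}.get(state["label"], END)


builder = StateGraph(HallucinationState)
builder.add_node("classify", classify_response)
builder.add_node("corrective_rag", corrective_rag)
builder.add_node("human_review", human_review)
builder.add_edge(START, "classify")
builder.add_conditional_edges("classify", route)
builder.add_edge("corrective_rag", END)
builder.add_edge("human_review", END)
graph = builder.compile()

In this layout, only the classification node and the conditional router are always executed; the corrective RAG and human-in-the-loop nodes are reached solely when their corresponding hallucination type is detected.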