Large language models (LLMs) have gained popularity because of the advantages they offer organizations building generative AI applications. However, LLMs can produce incorrect responses and present them with confidence, a phenomenon known as hallucination. Hallucinations can arise from factors such as insufficient training data, inadequate supervision, model overfitting, and knowledge cutoff dates, and they typically appear in three main forms: factual inaccuracies, fabricated quotations or sources, and logical inconsistencies. To mitigate hallucinations, researchers are developing strategies such as prompt engineering, retrieval augmented generation (RAG), and post-generation verification. These approaches aim to make AI systems more reliable, improve output quality, and keep responses aligned with human values and factual accuracy. By applying targeted mitigation tools and strategies like RAG, organizations can significantly reduce instances of hallucination and thereby maximize the benefits of their generative AI applications.
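To make the RAG idea more concrete, the sketch below shows the basic retrieve-then-generate pattern: fetch passages relevant to the question, then instruct the model to answer only from that retrieved context. This is a minimal, illustrative outline, not a production implementation; the toy corpus, the word-overlap `retrieve` function, and the `call_llm` stub are hypothetical stand-ins for a real document store, an embedding-based retriever, and an actual LLM API.

```python
# Minimal retrieval augmented generation (RAG) sketch.
# Illustrative only: corpus, retrieval, and call_llm are placeholders
# for a real vector store, embedding retriever, and LLM API call.

from typing import List

# Toy knowledge base; in practice this would be a vector index
# populated with chunked, embedded documents.
CORPUS = [
    "The Eiffel Tower is located in Paris and was completed in 1889.",
    "The Great Wall of China is over 13,000 miles long.",
    "Mount Everest is the highest mountain above sea level.",
]


def retrieve(question: str, k: int = 2) -> List[str]:
    """Rank documents by naive word overlap with the question.
    A real system would use embeddings and a vector index instead."""
    q_words = set(question.lower().split())
    scored = sorted(
        CORPUS,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(question: str, passages: List[str]) -> str:
    """Ground the model in retrieved passages and tell it to admit
    uncertainty, which is how RAG helps reduce hallucinated answers."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer the question using only the context below. "
        "If the context does not contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )


def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (for example, an API request)."""
    return "<model response would appear here>"


if __name__ == "__main__":
    question = "When was the Eiffel Tower completed?"
    passages = retrieve(question)
    print(call_llm(build_prompt(question, passages)))
```

The key design point is that the prompt constrains the model to the retrieved evidence and gives it an explicit way to decline, so the generated answer is grounded in source material rather than the model's parametric memory alone.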