The phenomenon of "hallucination" in Large Language Models (LLMs) refers to the generation of incorrect or fabricated text. Hallucinations can arise for several reasons, including the model's limited capacity to memorize information, errors in the training data, and training data that has become outdated. Their consequences can be serious, affecting decision-making and reputation in applications such as legal filings and customer-facing chatbots. To detect and mitigate hallucinations, researchers have been applying LLMs through three distinct patterns: direct prompting, prompting with retrieval-augmented generation (RAG), and LLM fine-tuning. A range of metrics, including perplexity, uncertainty, factuality, context similarity, answer relevance, groundedness, and the DEP score, can flag potentially hallucinated output. By combining these metrics with the suggested strategies, users can systematically reduce hallucinations in their AI outputs, ultimately improving the accuracy and reliability of LLM-powered applications.
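To make the first of these metrics concrete, the sketch below estimates the perplexity of a model response; unusually high perplexity often signals low-confidence, potentially hallucinated text that deserves review. This is a minimal sketch assuming the Hugging Face transformers library and the gpt2 checkpoint as stand-ins; the same pattern applies to whichever causal language model you actually use.

```python
# Minimal sketch: perplexity as a hallucination signal.
# Assumes the Hugging Face `transformers` library and the `gpt2` checkpoint
# purely for illustration; substitute your own causal LM and threshold.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def perplexity(text: str) -> float:
    """Return exp(mean negative log-likelihood) of `text` under the model."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the mean cross-entropy loss.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

answer = "The Eiffel Tower was completed in 1889."
print(f"Perplexity: {perplexity(answer):.2f}")  # flag responses above a chosen threshold
```

In practice, perplexity is most useful as one signal among several: responses that score poorly can be routed to a RAG-backed retry or to human review, with the threshold tuned on examples from your own domain.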