The Hallucination Correction Model (HCM) is a post-editing tool that corrects hallucinations produced by Large Language Models (LLMs) in open-book generation settings such as summarization and Retrieval-Augmented Generation (RAG). The model takes the reference documents and the original LLM response as input and generates a corrected response grounded in those documents. HCM was evaluated on several public benchmarks, including the HHEM leaderboard, FAVABENCH, NonFactS, and RAGTruth, and showed significant improvements in factuality rates across all datasets and across the leading LLMs tested. Correcting the output of some models remains challenging, however; Falcon-7B-Instruct, for example, frequently generates information that is not directly supported by the provided documents. Future iterations aim to address these cases and further reduce hallucinations in enterprise RAG pipelines.
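
As a rough illustration of the post-editing interface described above, the sketch below packs the reference documents and the original response into a single prompt and hands it to a generic text-generation callable. The prompt layout, the `build_correction_prompt` and `correct_response` helpers, and the `fake_generate` stub are assumptions made for illustration; they are not HCM's actual API. In a real pipeline, `generate_fn` would call the deployed corrector model or endpoint.

```python
from typing import Callable, Sequence


def build_correction_prompt(documents: Sequence[str], original_response: str) -> str:
    """Pack the reference documents and the original LLM answer into one prompt.

    Hypothetical format: the real model's prompt template may differ.
    """
    doc_block = "\n\n".join(
        f"[Document {i + 1}]\n{doc}" for i, doc in enumerate(documents)
    )
    return (
        "Rewrite the response so that every claim is supported by the documents.\n\n"
        f"{doc_block}\n\n"
        f"[Original response]\n{original_response}\n\n"
        "[Corrected response]\n"
    )


def correct_response(
    documents: Sequence[str],
    original_response: str,
    generate_fn: Callable[[str], str],
) -> str:
    """Run the corrector (supplied as generate_fn) over the packed prompt."""
    prompt = build_correction_prompt(documents, original_response)
    return generate_fn(prompt).strip()


if __name__ == "__main__":
    docs = ["The Eiffel Tower is 330 metres tall and was completed in 1889."]
    draft = "The Eiffel Tower, completed in 1890, is 330 metres tall."

    # Placeholder generator; in practice this would invoke the corrector model.
    def fake_generate(prompt: str) -> str:
        return "The Eiffel Tower, completed in 1889, is 330 metres tall."

    print(correct_response(docs, draft, fake_generate))
```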