Data poisoning is an adversarial attack that manipulates the data used to train artificial intelligence (AI) models, potentially degrading model performance, introducing biases, or creating security vulnerabilities. AI models rely on data of high quality and integrity to learn patterns and make predictions, so compromising that data can distort a model's outputs with serious consequences.

Data poisoning occurs in two primary ways. In direct attacks, adversaries deliberately inject harmful samples into the training dataset itself. In indirect attacks, they exploit external data sources, for example by manipulating web content or crowdsourced datasets that feed into AI models.

Detecting data poisoning can be challenging, but warning signs include a sudden drop in model accuracy, unexpected biases in outputs, or unusual misclassification rates.

To mitigate the risk of data poisoning, organizations should adopt a defense-in-depth approach that safeguards AI models at multiple levels: implementing robust data validation, using trusted data sources, applying data sanitization techniques, monitoring model performance continuously, leveraging secure development tools, enforcing access control policies, and adopting differential privacy techniques. Maintaining data provenance tracking and regularly retraining models on clean, vetted datasets are also crucial defenses against data poisoning.
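To make the data sanitization idea concrete, the sketch below filters suspected poisoned samples by flagging statistical outliers within each class before training. The dataset, the per-class centroid heuristic, and the z-score threshold are illustrative assumptions rather than part of any specific pipeline; real sanitization would be tuned to the data and threat model at hand.

```python
# Minimal sketch: drop samples that sit unusually far from their
# class centroid, a simple proxy for injected or mislabeled points.
import numpy as np

def sanitize(X: np.ndarray, y: np.ndarray, z_threshold: float = 3.0):
    """Keep samples whose distance to their class centroid is within
    `z_threshold` standard deviations of that class's mean distance."""
    keep = np.ones(len(X), dtype=bool)
    for label in np.unique(y):
        idx = np.where(y == label)[0]
        centroid = X[idx].mean(axis=0)
        dists = np.linalg.norm(X[idx] - centroid, axis=1)
        z = (dists - dists.mean()) / (dists.std() + 1e-12)
        keep[idx[z > z_threshold]] = False  # flag far-out samples as suspect
    return X[keep], y[keep]

# Usage on synthetic data with a handful of injected outliers
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = rng.integers(0, 2, size=200)
X[:5] += 15.0  # crude stand-in for poisoned points
X_clean, y_clean = sanitize(X, y)
print(f"kept {len(X_clean)} of {len(X)} samples")
```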
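Continuous performance monitoring can surface the warning signs mentioned above, such as a sudden accuracy drop. The following sketch compares a rolling accuracy window against a baseline measured on trusted data; the baseline value, window size, and alert threshold are assumptions chosen for illustration.

```python
# Minimal sketch: alert when rolling accuracy on labeled traffic
# falls well below the baseline established on trusted data.
from collections import deque

class AccuracyMonitor:
    def __init__(self, baseline: float, window: int = 100, max_drop: float = 0.05):
        self.baseline = baseline            # accuracy on a trusted holdout set
        self.max_drop = max_drop            # tolerated absolute drop
        self.outcomes = deque(maxlen=window)

    def record(self, prediction, label) -> bool:
        """Record one prediction; return True if an alert should fire."""
        self.outcomes.append(prediction == label)
        if len(self.outcomes) < self.outcomes.maxlen:
            return False                    # not enough evidence yet
        rolling = sum(self.outcomes) / len(self.outcomes)
        return (self.baseline - rolling) > self.max_drop

monitor = AccuracyMonitor(baseline=0.92)
# In production this would consume live predictions and ground-truth labels.
```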
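Differential privacy also limits how much any single training record can influence the model, which blunts the effect of individual poisoned points. The sketch below shows a DP-SGD-style update for logistic regression, with per-example gradient clipping and Gaussian noise; the clip norm and noise scale are illustrative and not calibrated privacy parameters.

```python
# Minimal sketch: clip each per-example gradient, then add Gaussian
# noise to the averaged update so no single record dominates training.
import numpy as np

def dp_sgd_step(w, X_batch, y_batch, lr=0.1, clip=1.0, noise_std=0.5, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    grads = []
    for x, y in zip(X_batch, y_batch):
        pred = 1.0 / (1.0 + np.exp(-x @ w))      # logistic prediction
        g = (pred - y) * x                        # per-example gradient
        norm = np.linalg.norm(g)
        g = g * min(1.0, clip / (norm + 1e-12))   # bound each example's influence
        grads.append(g)
    noise = rng.normal(0.0, noise_std * clip / len(grads), size=w.shape)
    return w - lr * (np.mean(grads, axis=0) + noise)

# Usage: one noisy update on a small synthetic batch
rng = np.random.default_rng(1)
w = np.zeros(5)
w = dp_sgd_step(w, rng.normal(size=(32, 5)), rng.integers(0, 2, size=32), rng=rng)
```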