Amazon developed an AI recruiting tool that discriminated against women: resumes containing words like "women's" were ranked lower than comparable resumes without them. The tool was scrapped within a year, but it illustrates the problem of AI bias and its common sources, including representation bias, pre-existing bias, algorithmic processing bias, aggregation bias, and skewed training data.

AI bias can surface as stereotypes, misrepresentations, prejudices, and derogatory language. It can also cause real harm in applications such as chatbots, healthcare, and criminal justice, producing biased outcomes that reinforce existing societal biases.

Mitigating AI bias involves pre-processing training data, building evaluation datasets, evaluating models at scale, fine-tuning models on unbiased responses, and defending deployed systems with techniques such as prompt injection detection and regular bias assessments. By acknowledging AI bias and taking concrete steps to address it, developers can build fairer, more transparent AI systems that serve diverse user groups.
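One way to evaluate a model for this kind of bias is a counterfactual test: swap gendered terms in otherwise identical inputs and measure the change in the model's score. The sketch below assumes a hypothetical `score_resume` function standing in for a real model (here it deliberately reproduces the Amazon-style bias so the check has something to catch), and the term list is illustrative, not exhaustive:

```python
def score_resume(text: str) -> float:
    # Hypothetical stand-in for a trained ranking model; it deliberately
    # penalizes the word "women's" to mimic the bias described above.
    score = 1.0
    if "women's" in text.lower():
        score -= 0.3  # the biased penalty the check should surface
    return score

# Illustrative gendered-term swaps (a real evaluation would use a
# curated, whole-word-aware term list).
SWAPS = {"women's": "men's", "she ": "he ", " her ": " his "}

def counterfactual(text: str) -> str:
    # Produce the gender-swapped version of the input.
    text = text.lower()
    for a, b in SWAPS.items():
        text = text.replace(a, b)
    return text

def bias_gap(text: str) -> float:
    # Score difference between original and swapped input;
    # an unbiased model should yield a gap close to zero.
    return score_resume(text) - score_resume(counterfactual(text))

resume = "Captain of the women's chess club; she led her team to nationals."
print(f"score gap after gender swap: {bias_gap(resume):+.2f}")
# → score gap after gender swap: -0.30
```

Run at scale over a held-out set of resumes, the distribution of these gaps gives a simple quantitative signal for the "evaluate models at scale" step, and flagged examples can feed the fine-tuning data mentioned above.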