Global fraud causes significant financial losses, with the economy losing over $5 trillion annually. Building and deploying sophisticated machine learning (ML) models is crucial for detecting and preventing fraud, but these models can be fragile and require continuous monitoring for anomalies. ML practitioners face challenges such as heavily imbalanced datasets, traditional evaluation metrics that can be misleading, limited access to sensitive features, and the fact that not all inferences carry equal weight.

To address these issues, the most important metrics to watch are recall, false negative rate, and false positive rate. Identifying the slices driving performance degradation is just as critical, and an ML observability platform can surface feature performance heatmaps that help teams patch costly model exploits quickly. Finally, monitoring and troubleshooting drift, or distribution changes over time, is essential for fraud models: fraud tactics are always evolving, and accounting for drift keeps models relevant. By proactively monitoring and measuring drift, counter-abuse ML teams can get ahead of potential problems and focus their energy on the most sophisticated threats.
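
As a rough illustration of why accuracy alone is misleading on imbalanced fraud data, here is a minimal sketch that computes recall, false negative rate, and false positive rate from a confusion matrix. The labels and predictions are made up, and the use of scikit-learn is an assumption, not something prescribed above.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Toy, heavily imbalanced data: 1 = fraud, 0 = legitimate (values are made up).
y_true = np.array([0] * 990 + [1] * 10)
y_pred = np.array([0] * 985 + [1] * 5 + [0] * 6 + [1] * 4)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()

accuracy = (tp + tn) / (tp + tn + fp + fn)   # looks great, says little
recall = tp / (tp + fn)                      # share of fraud actually caught
fnr = fn / (fn + tp)                         # fraud that slipped through
fpr = fp / (fp + tn)                         # legitimate users wrongly flagged

print(f"accuracy={accuracy:.3f} recall={recall:.3f} fnr={fnr:.3f} fpr={fpr:.4f}")
# accuracy is ~0.989 even though only 4 of the 10 fraud cases were caught.
```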
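
To make "slices driving performance degradation" concrete, the following sketch computes recall per traffic segment and pivots it into the kind of grid a feature performance heatmap would render. The column names (merchant_category, payment_method) and the synthetic data are purely hypothetical; an observability platform would compute this over real inference logs.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 5000
# Hypothetical inference log; column names and distributions are made up.
df = pd.DataFrame({
    "merchant_category": rng.choice(["electronics", "travel", "gift_cards"], n),
    "payment_method": rng.choice(["card", "wallet", "bank_transfer"], n),
    "y_true": rng.binomial(1, 0.02, n),   # actual fraud labels
    "y_pred": rng.binomial(1, 0.02, n),   # model decisions
})

def slice_recall(g: pd.DataFrame) -> float:
    positives = int(g["y_true"].sum())
    if positives == 0:
        return float("nan")
    return float(((g["y_true"] == 1) & (g["y_pred"] == 1)).sum() / positives)

# Recall per (merchant_category, payment_method) slice; low cells point at
# the segments dragging overall performance down.
heatmap = (
    df.groupby(["merchant_category", "payment_method"])
      .apply(slice_recall)
      .unstack("payment_method")
)
print(heatmap.round(2))
```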
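
No specific drift statistic is prescribed here, but one common choice, assumed for illustration, is the Population Stability Index (PSI), which compares a production feature distribution against its training-time baseline. The data and the rule-of-thumb thresholds in the final comment are illustrative conventions, not values from this post.

```python
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference (training-time) sample
    and a current (production) sample of one feature."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    expected, _ = np.histogram(baseline, bins=edges)
    actual, _ = np.histogram(current, bins=edges)  # values outside the baseline range are dropped
    eps = 1e-6                                     # avoids division by zero and log(0)
    expected = expected / expected.sum() + eps
    actual = actual / actual.sum() + eps
    return float(np.sum((actual - expected) * np.log(actual / expected)))

rng = np.random.default_rng(1)
baseline = rng.normal(50, 10, 10_000)   # e.g. transaction amounts seen at training time
current = rng.normal(58, 12, 10_000)    # shifted distribution seen in production

print(f"PSI = {psi(baseline, current):.3f}")
# Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant drift.
```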