We are committed to preventing misuse of our Claude models by adversarial actors while preserving their utility for legitimate users. Although our safety measures successfully block many harmful outputs, threat actors continue to probe for ways to circumvent these protections. We have observed malicious uses of our models, including influence-as-a-service operations, credential stuffing, recruitment fraud campaigns, and a novice actor using AI to generate malware beyond their unaided skill level. These activities pose significant risks to users and underscore the need for continuous innovation in our safety approaches and close collaboration with the broader security and safety community. Our key learnings are that actors have begun using frontier models to semi-autonomously orchestrate complex abuse systems, and that generative AI can accelerate capability development for less sophisticated actors. We have identified and banned the accounts associated with these malicious activities, which helps protect our users and prevent abuse or misuse of our services.