AI models are rapidly advancing in domains relevant to national security, including cybersecurity and biology. Claude, a frontier AI model, has shown significant gains in cyber capabilities but still lags behind human experts. Despite this progress, current models fall short of the thresholds at which they would pose substantially elevated risks to national security. The Frontier Red Team's work offers insight into the trajectory of these risks and underscores the importance of internal safeguards, independent evaluation entities, and targeted external oversight in ensuring responsible AI development. The team is committed to scaling up its evaluations and risk mitigations, moving with urgency to keep pace with advancing model capabilities, and exploring deeper collaboration between frontier AI labs and governments to improve this work.