We believe the AI sector needs effective third-party testing for frontier AI systems to avoid societal harm. Developing a testing regime and associated policy interventions, informed by insights from industry, government, and academia, is crucial. A robust third-party testing regime can help identify and prevent the potential risks of AI systems, and it gives countries and groups of countries a means to coordinate through shared standards and Mutual Recognition agreements. Such a regime would complement sector-specific regulation and inform the development of general policy approaches. Effective testing will give people and institutions more trust in AI systems; it should be precisely scoped, applying only to the most computationally intensive, large-scale systems. We expect third-party testing to be carried out by a diverse ecosystem of organizations, including private companies, universities, and governments, much as product safety is assured in other parts of the economy today.