As AI technology continues to be rapidly commercialized, new potential security vulnerabilities are emerging, making it essential for organizations to test their Large Language Model (LLM) applications and other AI systems for common security vulnerabilities. To address this need, Bugcrowd has launched AI Penetration Testing, a service that helps uncover the most common application security flaws in LLMs and other AI applications using a testing methodology based on the OWASP Top 10. Regular pentesting of AI applications is crucial to maintain trust and protect user data, especially considering the sensitive information handled by these systems. The service includes vetted pentesters with relevant skills, 24/7 visibility into timelines and findings, and a detailed final report, allowing organizations to stay ahead in the field of AI security which is still in its early stages and new vulnerabilities are likely to emerge.