The tech industry is racing to integrate AI into their systems and product offerings, but this has led to a growing concern about the risks associated with running AI systems. The Open Worldwide Application Security Project (OWASP) has released its first Top 10 for Large Language Model Applications, highlighting security vulnerabilities such as prompt injection, insecure output handling, model denial of service, insecure plugin design, excessive agency, sensitive information disclosure, training data poisoning, overreliance, model theft, and supply chain vulnerabilities. These risks can be exploited by threat actors to execute malicious instructions, reveal confidential data, disrupt AI programs, and cause financial losses for companies. To help organizations prioritize these vulnerabilities, the Bugcrowd VRT is an open-source taxonomy that aligns customers and hackers on a common set of risk priority ratings. The OWASP Top 10 aims to educate developers, designers, architects, managers, and organizations about potential security risks in AI systems.