Protecting AI apps from bots and bad actors with Vercel and Kasada
The growing popularity of AI applications has made them high-value targets for bots and bad actors looking to exploit their computational resources. Vercel's industry-leading DDoS mitigation, including its recently launched Attack Challenge Mode, provides the first line of defense against these threats, but additional protection is needed to secure AI workloads from unauthorized use. Vercel partnered with Kasada to lock down Vercel's AI SDK Playground, using Next.js Middleware to integrate Kasada's bot protection, and saw an immediate end to abusive activity on the platform. A first-party-request-only policy allowed them to intercept and block bot-driven API calls based on Kasada's bot classification. Kasada provides dynamic defense against automated threats with a platform that is quick to evolve, difficult to evade, and invisible to humans. The results of the collaboration were striking: bot traffic dropped immediately from 84% to negligible levels. As AI technologies continue to evolve, partnerships like this will be crucial for securing AI applications and ensuring a safe digital experience for users.
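
A minimal sketch of how such a check might look in Next.js Middleware is shown below. The header name `x-kasada-classification`, the classification value, and the allowed origin are assumptions made here for illustration only; the summary does not describe Kasada's actual integration surface or the Playground's real configuration.

```typescript
// middleware.ts — a hypothetical sketch, not Vercel's actual implementation.
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';

// Only run the checks on API routes (assumed path layout).
export const config = {
  matcher: '/api/:path*',
};

export function middleware(request: NextRequest) {
  // Enforce a first-party-request-only policy: reject API calls whose
  // Origin header does not match the deploying site (allowlist is an
  // assumption for illustration).
  const origin = request.headers.get('origin');
  const allowedOrigin = 'https://example.vercel.app';
  if (origin && origin !== allowedOrigin) {
    return new NextResponse('Forbidden', { status: 403 });
  }

  // Block requests classified as automated. The header name and value
  // here are placeholders standing in for Kasada's bot classification.
  const classification = request.headers.get('x-kasada-classification');
  if (classification === 'bad-bot') {
    return new NextResponse('Forbidden', { status: 403 });
  }

  // Allow everything else through.
  return NextResponse.next();
}
```

The design idea is that the middleware runs at the edge before the AI API handler executes, so classified bot traffic is rejected before it can consume any model or compute resources.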
Company
Vercel
Date published
March 22, 2024
Author(s)
Malte Ubl
Word count
836
Hacker News points
None found.
Language
English