The Generative AI boom is underway, and large language models have made its potential clear to organizations worldwide. The platform vendor community is racing to deploy AI workloads, but teams face familiar hurdles: defining use cases, securing executive sponsorship, and building AI skills on deployment teams. A further constraint is the scarcity of specialized compute resources for AI workloads, which can be addressed in two ways: using existing resources more efficiently, or expanding their availability. Anyscale is pushing the boundaries of compute efficiency, while Lambda is broadening access to specialized AI hardware. Anyscale has partnered with Nvidia and validated its integrations against Lambda's offerings, enabling rapid testing and deployment of large language models. This collaboration gives the LLM developer community a self-serve way to assess performance and has accelerated the delivery of new tooling.