This guide explores how to achieve "observability as code" for AI applications using Pulumi and New Relic. It begins by outlining the challenges of monitoring complex AI applications, then presents a solution that combines Pulumi's infrastructure-as-code platform with New Relic's intelligent observability platform. The approach lets teams define AI and large language model (LLM) monitoring instrumentation alongside cloud resources programmatically, secure API keys and cloud account credentials, and automatically deploy New Relic instrumentation together with AI applications and infrastructure. Benefits include consistent monitoring across environments, version-controlled observability configuration, easier detection of performance issues, and deeper insights into AI model behavior and resource usage.

The guide also introduces Pulumi's products and services, including Pulumi Cloud and Pulumi ESC (Environments, Secrets, and Configuration), and demonstrates how to use Pulumi Copilot to generate Python code for infrastructure definitions. It then walks step by step through deploying a chat application to AWS with Pulumi: configuring New Relic's AI monitoring agents, managing secrets with Pulumi ESC, generating infrastructure code with Pulumi Copilot, and deploying the application with Pulumi.

The guide concludes by exploring New Relic's AI/LLM dashboards, including AI response metrics, AI model comparison, and custom OpenAI dashboards.
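As a rough illustration of what "observability as code" can look like in practice, the sketch below defines a New Relic alert policy and an NRQL alert condition over LLM telemetry in a Pulumi Python program, next to where the application's cloud resources would be declared. This is not the guide's own code: the resource names, the LlmChatCompletionSummary event type, and the thresholds are illustrative assumptions.

```python
import pulumi
import pulumi_newrelic as newrelic

# Illustrative sketch: an alert on slow LLM responses, declared in the
# same Pulumi program as the rest of the application's infrastructure.
policy = newrelic.AlertPolicy("ai-chat-policy", name="ai-chat-app-alerts")

slow_responses = newrelic.NrqlAlertCondition(
    "slow-llm-responses",
    policy_id=policy.id,
    name="LLM response time too high",
    # Event type and thresholds are assumptions for illustration.
    nrql=newrelic.NrqlAlertConditionNrqlArgs(
        query="SELECT average(duration) FROM LlmChatCompletionSummary",
    ),
    critical=newrelic.NrqlAlertConditionCriticalArgs(
        operator="above",
        threshold=5,             # seconds
        threshold_duration=300,  # sustained for 5 minutes
        threshold_occurrences="all",
    ),
)

pulumi.export("alert_policy_id", policy.id)
```

Because the alert lives in the same version-controlled program as the infrastructure, every environment that runs `pulumi up` gets the same monitoring, which is the consistency benefit the guide describes.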
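On the secrets side, one common pattern with Pulumi ESC is to inject secrets as environment variables at run time (for example via the esc run command) so credentials never land in the repository. A minimal sketch of application-side code that consumes such secrets follows; the variable names NEW_RELIC_LICENSE_KEY and OPENAI_API_KEY are assumptions for this sketch, not names taken from the guide.

```python
import os


def load_runtime_secrets() -> dict:
    """Read secrets assumed to be injected as environment variables,
    e.g. by `esc run <org/project/env> -- python app.py`.

    The variable names here are illustrative assumptions.
    """
    required = ["NEW_RELIC_LICENSE_KEY", "OPENAI_API_KEY"]
    missing = [name for name in required if name not in os.environ]
    if missing:
        # Fail fast so a misconfigured environment is obvious at startup.
        raise RuntimeError("Missing secrets: " + ", ".join(missing))
    return {name: os.environ[name] for name in required}
```

The application code stays identical across environments; only the ESC environment selected at launch changes which credentials are injected.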