LaunchDarkly's new AI Configs now support bringing your own model, adding flexibility for fine-tuned models and for models running on local hardware. Ollama is an open-source tool for running large language models locally. A tutorial demonstrates how to connect LaunchDarkly with Ollama in Python, creating a custom-model AI Config that tracks metrics such as latency, token usage, and generation count. The tutorial also showcases the capabilities of reasoning models and offers guidance on metric tracking, advanced targeting, and further reading on runtime model management.
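The metric-tracking side of the tutorial can be sketched roughly as follows. This is a minimal illustration, not the tutorial's actual code: `generate_with_metrics`, `GenerationMetrics`, and the stubbed `fake_generate` are hypothetical names, and in the real setup the generate function would call a local Ollama model (for example via the `ollama` Python library) and the recorded metrics would be reported through the LaunchDarkly AI SDK's tracker rather than stored in a local object.

```python
import time
from dataclasses import dataclass, field

@dataclass
class GenerationMetrics:
    # Aggregates the three metrics the tutorial tracks for a
    # custom-model AI Config: generation count, tokens, latency.
    generations: int = 0
    total_tokens: int = 0
    latencies_ms: list = field(default_factory=list)

    def record(self, latency_ms: float, tokens: int) -> None:
        self.generations += 1
        self.total_tokens += tokens
        self.latencies_ms.append(latency_ms)

def generate_with_metrics(prompt: str, metrics: GenerationMetrics, generate_fn):
    # generate_fn stands in for a call to a locally running Ollama
    # model; it must return a (text, token_count) pair.
    start = time.perf_counter()
    text, tokens = generate_fn(prompt)
    latency_ms = (time.perf_counter() - start) * 1000
    metrics.record(latency_ms, tokens)
    return text

# Stubbed model call so the sketch runs without a local Ollama server.
def fake_generate(prompt: str):
    return f"echo: {prompt}", len(prompt.split())

metrics = GenerationMetrics()
reply = generate_with_metrics("why is the sky blue", metrics, fake_generate)
print(metrics.generations, metrics.total_tokens)  # → 1 5
```

Keeping the generate call behind a plain function makes it easy to swap the stub for a real Ollama request while leaving the metric bookkeeping unchanged.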