This tutorial demonstrates how to set up a SQL router query engine for effective text-to-SQL with Large Language Models (LLMs) using in-context learning. The application is built on LlamaIndex and routes each query to one of two tools: a SQL retriever over a table of cameras, or a vector index built from a Wikipedia article. The tutorial covers installing dependencies, launching Phoenix, enabling tracing within LlamaIndex, configuring an OpenAI API key, preparing the reference data, building the LlamaIndex application, and making queries against the router query engine. It highlights how LLM tracing and observability help you find failure points and act on them quickly. Because the router's choice of tool is strongly influenced by the wording of the SQL tool's description, the implementation can produce inconsistent results, underscoring the need for careful tuning and monitoring.
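The routing decision described above can be sketched in plain Python. The `QueryTool` class and keyword-overlap scoring below are illustrative stand-ins, not the tutorial's actual code: a real router (such as LlamaIndex's `RouterQueryEngine`) delegates the choice to an LLM, which compares the query against each tool's description. That is precisely why the description text matters so much for routing quality.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class QueryTool:
    """A named tool with a natural-language description the router scores against."""
    name: str
    description: str
    run: Callable[[str], str]

def route(query: str, tools: List[QueryTool]) -> QueryTool:
    # Toy heuristic: count description words that also appear in the query.
    # An LLM-based router makes this judgment with far more nuance, but it
    # still keys off the description text, so vague or overlapping
    # descriptions lead to inconsistent tool choices.
    query_words = set(query.lower().split())
    def score(tool: QueryTool) -> int:
        return len(set(tool.description.lower().split()) & query_words)
    return max(tools, key=score)

# Hypothetical tools mirroring the tutorial's two retrieval paths.
sql_tool = QueryTool(
    name="sql",
    description="Translate natural language into SQL queries over a table of cameras",
    run=lambda q: f"SELECT ...  -- generated for: {q}",
)
vector_tool = QueryTool(
    name="vector",
    description="Answer questions about camera history from a Wikipedia article",
    run=lambda q: f"retrieved passages for: {q}",
)

chosen = route("Which cameras have the highest resolution?", [sql_tool, vector_tool])
```

Tracing each routing decision (which tool was chosen, and why) is what makes failure modes like a misrouted query visible in a tool such as Phoenix.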