The text discusses the practical development of a financial research agent built on AI agents. The agent is designed to break complex questions into manageable steps, analyze intermediate results, and adjust its strategy as new information arrives. The workflow consists of three main functions, plan_step, execute_step, and replan_step, which together form a continuous research cycle. The text also covers setting up the agent's dependencies, creating a plan template, and visualizing the workflow as a graph. It then evaluates the agent's performance using a Galileo evaluation callback and an LLM judge: the agent scored well on context adherence and speed, but sometimes failed to back up older figures with proper sources. The article concludes by highlighting the importance of monitoring and feedback for improving AI agents and invites readers to learn more about Galileo's state-of-the-art evaluation capabilities.
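The plan/execute/replan cycle described above can be sketched as a simple loop. This is a minimal, hypothetical illustration: the function names plan_step, execute_step, and replan_step come from the summary, but the State class, the stubbed step bodies, and the run helper are assumptions, not the article's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class State:
    """Shared state threaded through the research cycle (illustrative)."""
    question: str
    plan: list = field(default_factory=list)
    results: list = field(default_factory=list)

def plan_step(state: State) -> State:
    # Break the question into ordered sub-steps (stubbed here;
    # the real agent would call an LLM to produce this plan).
    state.plan = [f"research: {state.question}", "summarize findings"]
    return state

def execute_step(state: State) -> State:
    # Run the next planned step and record its result (stubbed here).
    step = state.plan.pop(0)
    state.results.append(f"result of '{step}'")
    return state

def replan_step(state: State) -> State:
    # Inspect results and adjust the remaining plan; in this sketch
    # the plan is left unchanged, so the loop ends when it empties.
    return state

def run(question: str) -> State:
    state = plan_step(State(question))
    while state.plan:
        state = execute_step(state)
        state = replan_step(state)
    return state

final = run("How did ACME's revenue change in 2023?")
print(len(final.results))  # one result per planned step
```

In a real agent each stub would invoke a model or tool, and the loop would typically be expressed as a graph (as the article does) rather than a while loop, so that transitions can be visualized and instrumented.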