AI-powered coding assistants have transformed software development by offering developers fresh ideas, intelligent suggestions, and automation of repetitive tasks. Evaluating these tools is difficult, however, because there is no single criterion or metric for productivity, which varies across individuals, teams, and projects.

An experiment conducted by Scalefocus with three agile teams using GitHub Copilot tracked key metrics such as tasks completed, lines of code, hours of development, and time spent on unit tests. The data highlighted Copilot's impact on team productivity: it accelerated software development, reduced development and code-review time, and increased output by generating repetitive code blocks and suggesting best practices.

Developer productivity with AI tools like Copilot can be measured in several ways, including lines-of-code metrics, source tracking, surveys, and qualitative assessments such as evaluating code quality against industry principles like KISS and DRY. The SPACE framework offers a more structured approach, measuring developer productivity along five components: satisfaction, performance, activity, communication and collaboration, and efficiency and flow.

Ultimately, careful planning and proper training are essential for integrating AI coding assistants effectively into software development teams.
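As a rough illustration of how the five SPACE components might be combined into a single team-level indicator, the sketch below rolls pre-normalized scores into an equally weighted average. Everything here is hypothetical: the example metric sources in the comments, the equal weighting, and the sample numbers are assumptions for illustration, not data from the Scalefocus experiment.

```python
from dataclasses import dataclass


@dataclass
class SpaceMetrics:
    # Hypothetical per-team measurements, one per SPACE component,
    # each already normalized to a 0-1 scale.
    satisfaction: float   # e.g. from developer surveys
    performance: float    # e.g. code-review pass rate
    activity: float       # e.g. tasks completed vs. planned
    collaboration: float  # e.g. review turnaround
    efficiency: float     # e.g. uninterrupted focus time


def space_score(m: SpaceMetrics) -> float:
    """Average the five components into a single 0-1 indicator.

    Equal weighting is an assumption; a real team would likely
    weight components to match its own goals.
    """
    components = (m.satisfaction, m.performance, m.activity,
                  m.collaboration, m.efficiency)
    return sum(components) / len(components)


# Hypothetical before/after snapshots around a Copilot rollout.
before = SpaceMetrics(0.60, 0.55, 0.50, 0.65, 0.45)
after = SpaceMetrics(0.70, 0.65, 0.72, 0.66, 0.60)
print(f"delta: {space_score(after) - space_score(before):+.2f}")
```

Comparing snapshots of the same composite over time, rather than reading any single number in isolation, keeps the focus on trends, which is closer to how the SPACE authors recommend using the framework.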