The TestGen-LLM tool, introduced by Meta researchers, uses large language models to automatically improve test coverage with assured improvements over the existing code base: a generated test is kept only if it builds, passes reliably, and measurably increases coverage. In Meta's evaluation, 73% of the tool's recommended tests were accepted by human reviewers. This fully automated approach was reimplemented in an open-source project called Cover-Agent. Implementing and reviewing TestGen-LLM surfaced several challenges, including issues with language formatting, test requirements, and the repeated suggestion of tests that had already failed. To address these, Cover-Agent supplies additional context to the LLM through user inputs and instructions, yielding higher quality tests and a higher passing rate. The tool still has limitations, but its ability to automatically generate candidate tests and increase code coverage in a fraction of the usual time is promising, and there are plans to continue integrating cutting-edge methods into Cover-Agent.
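The core of the "assured" approach described above is a filtering loop: candidate tests proposed by an LLM are accepted only if they pass and strictly grow coverage. The sketch below illustrates that idea under simplifying assumptions; the names (`Candidate`, `filter_candidates`, line-set coverage) are hypothetical and not the real APIs of TestGen-LLM or Cover-Agent.

```python
# Illustrative sketch of an assured test-filtering loop: an LLM proposes
# candidate tests, and each candidate is kept only if it passes and
# strictly increases measured coverage. All names here are hypothetical.

from dataclasses import dataclass
from typing import List, Set


@dataclass
class Candidate:
    name: str
    passes: bool          # would the test pass when executed?
    lines_hit: Set[int]   # lines of the code under test it exercises


def filter_candidates(candidates: List[Candidate],
                      baseline_coverage: Set[int]) -> List[Candidate]:
    """Keep only candidates that pass AND cover at least one new line."""
    accepted: List[Candidate] = []
    covered = set(baseline_coverage)
    for cand in candidates:
        if not cand.passes:
            continue                   # discard failing tests outright
        if cand.lines_hit - covered:   # strict coverage improvement?
            accepted.append(cand)
            covered |= cand.lines_hit  # ratchet the coverage baseline up
    return accepted


# Example: the baseline suite already covers lines {1, 2}; only a test
# that passes and reaches a new line survives the filter.
baseline = {1, 2}
cands = [
    Candidate("t_fails", passes=False, lines_hit={3}),     # rejected: fails
    Candidate("t_redundant", passes=True, lines_hit={1}),  # rejected: nothing new
    Candidate("t_useful", passes=True, lines_hit={2, 3}),  # accepted: covers line 3
]
kept = filter_candidates(cands, baseline)
print([c.name for c in kept])  # -> ['t_useful']
```

In the real tools, "passes" means actually building and executing the test, and coverage is measured with an instrumentation tool rather than precomputed line sets; the ratcheting baseline is what prevents redundant tests from being accepted twice.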