Enterprise engineering teams are rapidly adopting generative AI tools, particularly large language models (LLMs). Many early adopters face challenges around evaluation, hallucinations, and leaky abstractions. Teams that deploy LLMs successfully tend to share three practices: they take a model-agnostic approach that lets them connect to the major foundation models and tools interchangeably; they operationalize experimentation by running independent, repeatable evaluations; and they quantify ROI and productivity gains by building systems that detect performance issues and address them proactively.