The use of large language models (LLMs) has raised concerns about liability for hallucinations: in a recent case, a tribunal held Air Canada responsible for incorrect advice given by its customer-service chatbot, underscoring the importance of LLM evaluation and observability. Researchers have catalogued common failure modes in RAG systems, such as mis-ranked documents and extraction failures, along with the lessons learned from fixing them. To get real value out of LLMs, AI teams need to fine-tune models on their own data, and a growing set of resources offers guidance on how. Synthetic data is also becoming increasingly viable for pretraining and tuning, offering a cheaper alternative to human annotation. Meanwhile, the hype surrounding AGI and superintelligence should not overshadow the current drive towards "capable" AI, which deserves more attention and respect.
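
To make the mis-ranked-documents failure mode concrete, here is a minimal sketch of one common mitigation: reranking retrieved passages with a cross-encoder before they reach the LLM. It assumes the sentence-transformers library; the model name, query, and passages are illustrative placeholders, not taken from the articles summarized above.

```python
# Rerank retrieved passages against the query with a cross-encoder and keep the
# top results, so the most relevant context reaches the LLM first.
from sentence_transformers import CrossEncoder

# Illustrative model choice; any cross-encoder reranker could be substituted.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

def rerank(query: str, passages: list[str], top_k: int = 3) -> list[str]:
    # Score each (query, passage) pair; higher scores mean stronger relevance.
    scores = reranker.predict([(query, p) for p in passages])
    ranked = sorted(zip(scores, passages), key=lambda pair: pair[0], reverse=True)
    return [p for _, p in ranked[:top_k]]

if __name__ == "__main__":
    # Hypothetical retrieval results; in practice these come from a vector store.
    query = "What is the bereavement fare policy?"
    passages = [
        "Refund requests must be filed within 90 days of travel.",
        "Bereavement fares offer reduced prices for travel due to a death in the family.",
        "Checked baggage fees vary by route and fare class.",
    ]
    for p in rerank(query, passages):
        print(p)
```

A cross-encoder is slower than the bi-encoder used for initial retrieval, so it is typically applied only to the small candidate set returned by the first-stage search.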