Fine-tuning is the process of updating a model's weights to control its behavior, offering deeper and more nuanced control than prompting. It requires care, however: done incorrectly, it can lead to "catastrophic forgetting," where the model loses capabilities it previously had.

Fine-tuning addresses real weaknesses of modern frontier models: unreliable adherence to instructions, high cost at scale, and latency. With a good dataset, a smaller fine-tuned model can often match the quality of a much larger one, which translates into lower inference costs and lower latency. That makes it an attractive option for improving the quality and reliability of GenAI-powered features while reducing costs.

Fine-tuning is not a panacea, though, and it comes with tradeoffs: it requires preparing data, training models, evaluating performance, and deploying the fine-tuned result. The good news is that all of this can be done by reasonably competent software engineers without specialized training in data science or machine learning.
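To make the first of those steps concrete, here is a minimal sketch of preparing fine-tuning data in the JSONL chat format used by several fine-tuning APIs (one training example per line, each a list of system/user/assistant messages). The example conversation and the `validate` helper are illustrative, not part of any particular vendor's tooling:

```python
import json

# Hypothetical training examples in the common JSONL chat format:
# each example is a short conversation ending in the assistant reply
# we want the model to learn to produce.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a concise support assistant."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Open Settings > Security and choose 'Reset password'."},
        ]
    },
]


def validate(example: dict) -> bool:
    """Basic sanity checks: every message has a known role and non-empty
    content, and the example ends with an assistant reply to learn from."""
    messages = example.get("messages", [])
    roles_ok = all(
        m.get("role") in {"system", "user", "assistant"} and m.get("content")
        for m in messages
    )
    return bool(messages) and roles_ok and messages[-1]["role"] == "assistant"


def to_jsonl(examples: list[dict]) -> str:
    """Serialize validated examples, one JSON object per line."""
    bad = [i for i, e in enumerate(examples) if not validate(e)]
    if bad:
        raise ValueError(f"malformed training examples at indices {bad}")
    return "\n".join(json.dumps(e) for e in examples)


jsonl = to_jsonl(examples)
print(len(jsonl.splitlines()))  # number of training examples written
```

Catching malformed examples before submitting a training job is cheap insurance: a single bad record can fail an entire run, and quiet formatting errors degrade the resulting model.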