The Fine-tuning API has introduced new features, including long-context training, conversation data support, and more configuration options. These updates make it easier for ML teams to customize open models for specific tasks. Long-context training supports context lengths up to 32K for fine-tuning and inference with Llama 3.1 8B and 70B, while support for conversation and instruction data formats streamlines data preparation. Training quality has also improved, with no changes to the hyperparameters, inputs, or cost of fine-tuning jobs. Validation dataset support lets users monitor model loss on unseen data during training. Quality-of-life improvements include deeper Weights & Biases integration and automatic batch size selection.
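For illustration, conversation-format training data is typically supplied as a JSONL file with one chat transcript per line. The sketch below assumes the common OpenAI-style messages schema (the messages, role, and content field names are assumptions, not the API's confirmed spec):

```python
import json

# Hypothetical conversation-format training examples (OpenAI-style
# "messages" schema; the exact field names the API expects may differ).
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "What does long-context fine-tuning enable?"},
            {"role": "assistant", "content": "Training on sequences up to 32K tokens."},
        ]
    },
]

# Fine-tuning APIs commonly accept one JSON object per line (JSONL).
with open("train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```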
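And here is a minimal sketch of submitting a job that uses a validation dataset, long-context training, and W&B logging. The endpoint URL and every parameter name here (training_file, validation_file, max_context_length, wandb_api_key) are illustrative assumptions, not the documented interface:

```python
import os
import requests

# Hypothetical endpoint and field names for a fine-tuning job request;
# consult the API's documentation for the real interface.
API_URL = "https://api.example.com/v1/fine-tuning/jobs"

payload = {
    "model": "llama-3.1-8b",           # base model to fine-tune
    "training_file": "train.jsonl",    # conversation-format data from above
    "validation_file": "valid.jsonl",  # loss on this set is reported during training
    "max_context_length": 32768,       # long-context training (assumed parameter name)
    "wandb_api_key": os.environ.get("WANDB_API_KEY"),  # enables W&B run tracking
    # batch size omitted: the API can now choose one automatically
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {os.environ['API_KEY']}"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json())
```

Watching the validation loss alongside the training loss makes it possible to spot overfitting on unseen data before the job completes.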