
Fine-tuning API: Introducing long-context training, conversation data support and more configuration options

What's this blog post about?

The Fine-tuning API has introduced new features, including long-context training, conversation data support, and more configuration options. These updates aim to make it easier for ML teams to customize open models for better performance on specific tasks. Long-context fine-tuning supports up to 32K context length for Llama 3.1 8B and 70B fine-tuning and inference, while native support for conversation and instruction data formats streamlines data preparation. Training quality has also improved without any changes to hyperparameters, inputs, or the cost of fine-tuning jobs. Validation dataset support lets users monitor the model's loss on unseen data during training. Quality-of-life enhancements include deeper Weights & Biases integration and automatic batch size selection.
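
As a rough illustration of the features summarized above, the sketch below shows what conversation-format training data and a fine-tuning job with a validation set might look like using the Together Python SDK. It is a minimal sketch, not the post's own code: the model name is illustrative, and parameter names such as validation_file, n_evals, and batch_size="max" are assumptions inferred from the described features rather than confirmed API signatures.

```python
# Minimal sketch: conversation-format data plus a fine-tuning job with a
# validation set, assuming the `together` Python SDK. Parameter names marked
# "assumed" are inferred from the post's feature list, not verified.
import json
from together import Together

# Conversation-format training data: one JSON object per line, each holding a
# list of role/content messages (the data format the post says is now supported).
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Summarize the quarterly report."},
            {"role": "assistant", "content": "Revenue grew 12% quarter over quarter..."},
        ]
    }
]
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

client = Together()  # reads TOGETHER_API_KEY from the environment

train_file = client.files.upload(file="train.jsonl")
val_file = client.files.upload(file="val.jsonl")  # held-out data for validation loss

job = client.fine_tuning.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct-Reference",  # illustrative model name
    training_file=train_file.id,
    validation_file=val_file.id,   # assumed: enables validation-loss tracking
    n_evals=10,                    # assumed: number of validation evaluations during training
    n_epochs=3,
    batch_size="max",              # assumed: automatic/maximum batch size selection
    wandb_api_key="...",           # enables the Weights & Biases integration
)
print(job.id, job.status)
```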

Company
Together AI

Date published
Nov. 25, 2024

Author(s)
Max Ryabinin, Artem Chumachenko, George Grigorev, Arsh Zahed, Gleb Vazhenin

Word count
1726

Language
English

Hacker News points
None found.


By Matt Makai. 2021-2024.