
Using LangSmith to Support Fine-tuning

What's this blog post about?

This post walks through fine-tuning and evaluating large language models (LLMs), using LangSmith for dataset management and evaluation. It covers both open-source LLMs on Colab and Hugging Face, as well as OpenAI's new fine-tuning service. The guide demonstrates fine-tuning LLaMA2-7b-chat and gpt-3.5-turbo for an extraction task using training data exported from LangSmith. It also offers guidance on when to fine-tune, how to do it efficiently, and how to evaluate the results. The results show that small open-source models fine-tuned on well-defined tasks can outperform much larger generalist models.
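As a minimal sketch of the export step described above: training examples pulled from a LangSmith dataset (e.g. via the LangSmith client's `list_examples`) can be converted into the JSONL chat format that OpenAI's fine-tuning service expects. The `input`/`output` field names and the system prompt below are assumptions for illustration, not the post's actual schema.

```python
import json

def to_openai_chat_record(inputs, outputs,
                          system_prompt="You are an extraction assistant."):
    """Convert one LangSmith example (inputs/outputs dicts) into an
    OpenAI chat fine-tuning record.

    The 'input' and 'output' keys are assumed dataset field names;
    adjust them to match your own dataset schema.
    """
    return {
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": inputs["input"]},
            {"role": "assistant", "content": outputs["output"]},
        ]
    }

def write_finetune_jsonl(examples, path):
    """Write (inputs, outputs) pairs as one JSON object per line --
    the file format OpenAI's fine-tuning upload expects."""
    with open(path, "w") as f:
        for inputs, outputs in examples:
            f.write(json.dumps(to_openai_chat_record(inputs, outputs)) + "\n")
```

The resulting file would then be uploaded to OpenAI's fine-tuning endpoint (or reformatted for an open-source trainer such as one used with LLaMA2-7b-chat).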

Company
LangChain

Date published
Aug. 23, 2023

Author(s)
LangChain

Word count
2018

Language
English

Hacker News points
None found.

