
Test Run Comparisons

What's this blog post about?

LangChain has introduced Test Run Comparisons to help developers build and evaluate large language model (LLM) applications. The new feature lets users compare multiple test runs side by side, making it easier to draw insights from the data. Users can score tests with LLM-assisted evaluation or other methods, and can manually explore individual datapoints for deeper analysis. By comparing different test runs on the same dataset, developers can better understand how their LLM applications perform on specific tasks. LangSmith, the platform that hosts this feature, is currently in private beta, with broader access and additional features expected over the coming weeks.

Company
LangChain

Date published
Oct. 17, 2023

Author(s)
LangChain

Word count
590

Language
English

Hacker News points
None found.


By Matt Makai. 2021-2024.