RAG vs. Finetuning: Enhancing LLMs with new knowledge

What's this blog post about?

The article discusses two approaches for enhancing Large Language Models (LLMs) with new knowledge: fine-tuning and Retrieval Augmented Generation (RAG). Fine-tuning continues training an already-trained LLM on additional data, allowing it to specialize in specific domains or tasks. RAG fuses Information Retrieval concepts with LLMs, letting a model draw on external documents instead of relying solely on its internal knowledge. The article weighs the advantages and disadvantages of both approaches and emphasizes that they are not mutually exclusive; researchers are still working out the right blend of these techniques for different use cases, since imparting LLMs with new knowledge remains a challenging task.
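To make the RAG side of the comparison concrete, here is a minimal sketch of the retrieve-then-generate flow the summary describes. It is not from the article: the toy corpus, bag-of-words retriever, and function names are all illustrative stand-ins for a real embedding model and vector store, and the assembled prompt would be sent to whatever LLM completion endpoint you use.

```python
from collections import Counter
import math

# A toy corpus standing in for the "external documents" RAG retrieves from.
DOCUMENTS = [
    "Deepgram provides speech-to-text APIs powered by deep learning.",
    "Retrieval Augmented Generation injects retrieved documents into an LLM prompt.",
    "Fine-tuning continues training a pretrained model on domain-specific data.",
]

def bag_of_words(text: str) -> Counter:
    """Lowercased word counts; a crude stand-in for a learned embedding."""
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[word] * b[word] for word in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query (the 'R' in RAG)."""
    q = bag_of_words(query)
    ranked = sorted(
        DOCUMENTS,
        key=lambda doc: cosine_similarity(q, bag_of_words(doc)),
        reverse=True,
    )
    return ranked[:k]

def build_rag_prompt(query: str) -> str:
    """Prepend retrieved context so the LLM answers from external knowledge."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

if __name__ == "__main__":
    print(build_rag_prompt("What is retrieval augmented generation?"))
```

In a production system the bag-of-words scorer would be replaced by dense embeddings and an approximate-nearest-neighbor index, but the shape is the same: retrieve relevant text, splice it into the prompt, and let the (possibly also fine-tuned) model generate from it.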

Company
Deepgram

Date published
Oct. 31, 2023

Author(s)
Brad Nikkel

Word count
2102

Language
English

Hacker News points
None found.


By Matt Makai. 2021-2024.