
Information Retrieval: Which LLM is best at looking things up?

What's this blog post about?

Researchers at Stanford tested five language models (BERT, BART, RoBERTa, GPT-2, and XLNet) to determine which is best at retrieving information. The models were evaluated on two tasks: Knowledge-Seeking Turn Detection and Knowledge Selection. On the first task, a fine-tuned version of BERT achieved an accuracy of 99.1%. On the second, RoBERTa performed best, scoring MRR@5 = 0.874, R@1 = 0.763, and R@5 = 0.929. These results suggest that RoBERTa is particularly effective at retrieving relevant information, making it a strong choice for building AI assistants focused on information retrieval and knowledge-grounded generation.
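For readers unfamiliar with the Knowledge Selection metrics quoted above, here is a minimal Python sketch (not from the original post) of how MRR@5, R@1, and R@5 are typically computed; the toy data and function names are illustrative assumptions, where each example pairs the gold knowledge snippet's id with the model's ranked candidate ids.

def mrr_at_k(examples, k=5):
    """Mean Reciprocal Rank, counting only hits within the top k."""
    total = 0.0
    for gold, ranked in examples:
        for rank, candidate in enumerate(ranked[:k], start=1):
            if candidate == gold:
                total += 1.0 / rank
                break
    return total / len(examples)

def recall_at_k(examples, k):
    """Fraction of examples whose gold snippet appears in the top k."""
    hits = sum(1 for gold, ranked in examples if gold in ranked[:k])
    return hits / len(examples)

# Toy data: (gold snippet id, model's ranked candidate ids).
examples = [
    (3, [3, 7, 1, 4, 9]),  # hit at rank 1 -> reciprocal rank 1.0
    (5, [2, 5, 8, 0, 6]),  # hit at rank 2 -> reciprocal rank 0.5
    (1, [9, 4, 7, 2, 0]),  # miss in top 5 -> contributes 0
]

print(f"MRR@5 = {mrr_at_k(examples, k=5):.3f}")   # 0.500
print(f"R@1   = {recall_at_k(examples, k=1):.3f}")  # 0.333
print(f"R@5   = {recall_at_k(examples, k=5):.3f}")  # 0.667

Under this reading, RoBERTa's R@1 = 0.763 means it ranked the correct knowledge snippet first about 76% of the time, and its MRR@5 of 0.874 means that when it did find the snippet within the top five, it usually placed it at or near the top.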

Company
Deepgram

Date published
Oct. 26, 2023

Author(s)
Jose Nicholas Francisco

Word count
1229

Language
English

Hacker News points
None found.

