
Back to Basics for RAG w/ Jo Bergum 

1.8K views

Adding context-sensitive information to LLM prompts through retrieval is a popular technique for boosting accuracy. This talk covers the fundamentals of information retrieval (IR) and the failure modes of vector embeddings for retrieval, and provides practical solutions to avoid them. Jo demonstrates how to set up simple but effective IR evaluations for your data, enabling faster exploration and a systematic approach to improving retrieval accuracy.
This is a talk from Mastering LLMs: A survey course on applied topics for Large Language Models.
More resources are available here:
bit.ly/applied-llms
Slides and transcript: parlance-labs.com/education/rag/jo.html
00:00: Introduction and Background
01:19: RAG and Labeling with Retrieval
03:31: Evaluating Information Retrieval Systems
05:54: Evaluating Document Relevance
08:22: Metrics for Retrieval System Performance
10:11: Reciprocal Rank and Industry Metrics
12:41: Using Large Language Models for Judging Relevance
14:32: Microsoft’s Research on LLMs for Evaluation
17:04: Representational Approaches for Efficient Retrieval
19:14: Sparse and Dense Representations
22:27: Importance of Chunking for High Precision Search
25:55: Comparison of Retrieval Models
27:53: Real World Retrieval: Beyond Text Similarity
29:10: Summary and Key Takeaways
31:07: Resources and Closing Remarks
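The metrics segments above (evaluating retrieval systems, reciprocal rank) can be sketched with a minimal mean reciprocal rank (MRR) computation. This is a hypothetical toy illustration, not code from the talk; the data structures (`results`, `qrels`) and function names are assumptions for the sketch.

```python
def reciprocal_rank(ranked_ids, relevant_ids):
    """Return 1/rank of the first relevant document, or 0.0 if none appears."""
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant_ids:
            return 1.0 / rank
    return 0.0

def mean_reciprocal_rank(results, qrels):
    """results: query -> ranked doc ids; qrels: query -> set of relevant doc ids."""
    rrs = [reciprocal_rank(results[q], qrels.get(q, set())) for q in results]
    return sum(rrs) / len(rrs) if rrs else 0.0

# Hypothetical toy data: two queries with hand-labeled relevance judgments.
results = {"q1": ["d3", "d1", "d2"], "q2": ["d5", "d4"]}
qrels = {"q1": {"d1"}, "q2": {"d9"}}
print(mean_reciprocal_rank(results, qrels))  # (1/2 + 0) / 2 = 0.25
```

As the talk notes, a small set of such query/judgment pairs is enough to start comparing retrieval configurations systematically instead of eyeballing results.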


Published: Jul 1, 2024

Comments: 1
@shameekm2146 (18 days ago)
Thank you for an informative session on RAG.