Community Paper Reading: RAFT - Adapting Language Model to Domain Specific RAG. 

Arize AI

We’re excited to host Sai Kolasani, researcher at UC Berkeley’s RISE Lab, to talk about his work on RAFT: Adapting Language Model to Domain Specific RAG. RAFT is a training recipe that improves an LLM’s ability to answer questions in an “open-book”, in-domain setting. Given a question and a set of retrieved documents, the model is trained to ignore documents that don’t help in answering the question (aka distractor documents). This, coupled with RAFT’s chain-of-thought-style responses, helps improve the model’s ability to reason. In domain-specific RAG, RAFT consistently improves the model’s performance across the PubMed, HotpotQA, and Gorilla datasets, presenting a post-training recipe for adapting pre-trained LLMs to in-domain RAG.
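To make the recipe concrete, here is a minimal sketch of how a single RAFT-style training example might be assembled: the question is paired with a context that mixes distractor documents with (usually, but not always) the golden document, so the model learns to pick out the relevant evidence. All names and parameters below (`build_raft_example`, `p_include_oracle`, the toy documents) are illustrative assumptions, not the paper's actual data pipeline.

```python
import random

def build_raft_example(question, oracle_doc, distractor_pool,
                       num_distractors=3, p_include_oracle=0.8, rng=None):
    """Assemble one RAFT-style training prompt (illustrative sketch).

    With probability p_include_oracle the oracle (golden) document is
    kept in the context; otherwise the context contains distractors
    only, which pushes the model to answer from what it has memorized
    rather than blindly trusting retrieved text.
    """
    rng = rng or random.Random()
    docs = rng.sample(distractor_pool, num_distractors)
    if rng.random() < p_include_oracle:
        docs.append(oracle_doc)
    rng.shuffle(docs)  # oracle position should carry no signal
    context = "\n\n".join(f"Document [{i+1}]: {d}" for i, d in enumerate(docs))
    return f"{context}\n\nQuestion: {question}\nAnswer:"

# Usage with toy documents (hypothetical content):
example = build_raft_example(
    "What does RAFT stand for?",
    "RAFT stands for Retrieval Augmented Fine-Tuning.",
    ["Cats are mammals.", "Paris is in France.",
     "Water boils at 100 C.", "The sun is a star."],
)
```

The target completion for such an example would be a chain-of-thought answer that quotes the relevant document before giving the final answer, matching the reasoning style described above.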

Category: Science

Published: 25 Jun 2024
