
Pinecone Workshop: LLM Size Doesn't Matter - Context Does 

Pinecone
This discussion is critical for every AI, Engineering, Analytics, and Product leader interested in deploying AI solutions. Pinecone and Prolego share the surprising insights from their independent research studies on optimizing LLM RAG applications.
Although state-of-the-art LLMs like GPT-4 perform best on general benchmarks, small and open-source LLMs perform just as well when given the right context. These results are critical for: (1) overcoming policy constraints preventing you from sending data to model providers, (2) reducing the costs and increasing the ROI of your RAG, and (3) giving you more control over your models and infrastructure.
Pinecone study: www.pinecone.i...
Prolego study: www.prolego.co...
Presentation slides: docs.google.co...
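A minimal sketch of the retrieval-augmented pattern the workshop argues for: rank candidate passages against the question and place only the best matches into the prompt, so that even a small model answers from the right context. The bag-of-words scoring, document set, and function names here are illustrative stand-ins; a production pipeline would use a dense embedding model and a vector database such as Pinecone for retrieval.

```python
import math
import re

def embed(text):
    # Toy bag-of-words "embedding"; real pipelines use dense embedding models.
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    return {t: tokens.count(t) for t in set(tokens)}

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    # Rank candidate passages by similarity to the query; keep the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, docs, k=2):
    # Assemble the grounded prompt a (possibly small) LLM would receive.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs, k))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

docs = [
    "Pinecone is a managed vector database for similarity search.",
    "RAG augments an LLM prompt with passages retrieved from an index.",
    "General benchmarks favor the largest state-of-the-art models.",
]
print(build_prompt("What is a vector database?", docs, k=1))
```

With the right passage in the prompt, the model's job shrinks from recalling facts to reading them, which is the mechanism behind the studies' finding that smaller and open-source models close the gap with GPT-4.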

Published: 26 Sep 2024

Comments: 3
@BTFranklin · 5 months ago
I was traveling when the stream happened, so I couldn't attend live. I very much appreciate that you posted this recording. Excellent information and analysis here.
@MatijaGrcic · 5 months ago
Great workshop!
@AlanDeikman · 5 months ago
I ended up with the flu, sorry, I was in no shape. Thanks for the recording.