
Augmented Language Models (LLM Bootcamp) 

The Full Stack
32K subscribers
37K views

Published: Oct 2, 2024

Comments: 17
@RohanKumar-vx5sb · a year ago
You're the best. This has been the single most useful and up-to-date analysis of LLM advancements.
@lukeliem9216 · a year ago
This talk is very informative about building LLM-based apps with proprietary datasets.
@fudanjx · a year ago
Quick Summary:

Introduction: Language models are powerful but lack knowledge of the world. We can augment them by providing relevant context and data.

Approaches:
- Retrieval: searching a corpus and providing relevant documents as context.
- Chains: using one language model to develop context for another.
- Tools: giving models access to APIs and external data.

Retrieval:
- The simplest approach is adding relevant facts to the context window.
- As the corpus scales, treat it as an information retrieval problem.
- Embeddings and vector databases can improve retrieval.

Chains:
- Use one language model to develop context for another.
- Chains can help encode complex reasoning and work around token limits.
- Libraries like LangChain provide examples of chain patterns.

Tools:
- Give models access to APIs and external data.
- Chains involve manually designing tool use.
- Plugins let the model decide when to use tools.

Key Takeaways:
- Start with rules and heuristics to provide context.
- As the knowledge base scales, think about information retrieval.
- Chains can help with complex reasoning and token limits.
- Tools give models access to external knowledge.

Conclusion: Augmenting language models with relevant context and data can significantly improve their capabilities. There are a variety of techniques to provide that augmentation, each with trade-offs around flexibility, reliability, and complexity.
@ayanghosh8226 · a year ago
Love the logical sequence of presenting the complications in advanced LLM applications. One of the best resources on the web, if one wants a solid mental map of how and when to augment LLMs.
@HarendraSingh-xw6hv · a year ago
💯
@loic7572 · a year ago
This is the best bootcamp I've ever watched. I only wish I had known about the YouTube channel before.
@l501l501l · a year ago
Second that!
@צחייעקובוביץ · a year ago
Great. I had to listen at 0.75 speed so as not to miss anything.
a year ago
Awesome! Thanks Josh for the presentation!
@deeplearningpartnership · a year ago
Cool
@saratbhargavachinni5544 · a year ago
Great talk! Thanks for sharing.
@jeromeeusebius · a year ago
Great resource for understanding RAG and the various ways to improve the reliability and accuracy of LLMs. Thanks for sharing.
@robertcormia7970 · a year ago
Another fantastic video (webinar) helping to build on foundational knowledge of LLMs. Clear explanations of chains, tools, APIs, and "process". Can't wait to watch the next one (LLMOps).
@za_daleko · a year ago
Thanks for this knowledge. Greetings from Poland.
@domlahaix · a year ago
Crocodile, Ball.... unless you're working for Lacoste 😀
@kennethcarvalho3684 · 11 months ago
Isn't this a search like Google?
@SavanVyas91 · a year ago
What's his name? Where can I find him?