Logits and Loss: Training and Fine-Tuning LLMs 

AI Makerspace
Join us as we unravel the essential role of cross-entropy loss in training and fine-tuning Large Language Models. Discover how this foundational loss function optimizes predictions, from standard fine-tuning methods like Low-Rank Adaptation (LoRA) to advanced techniques such as Direct Preference Optimization (DPO). Learn how cross-entropy loss helps make LLMs more effective for specific tasks and improves their performance. Don't miss this insightful session. Subscribe and watch now!
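To make the session's core idea concrete: cross-entropy loss is the negative log-probability the model assigns to the correct next token, computed from the raw logits via softmax. Below is a minimal, dependency-free sketch of that computation (the function name and example logits are illustrative, not from the event materials):

```python
import math

def cross_entropy(logits, target_index):
    """Cross-entropy loss for a single next-token prediction.

    logits: raw model scores, one per vocabulary token.
    target_index: index of the true next token.
    """
    # Numerically stable log-sum-exp: subtract the max logit first.
    m = max(logits)
    log_sum_exp = m + math.log(sum(math.exp(x - m) for x in logits))
    # Loss = -log softmax(logits)[target] = log_sum_exp - logit of target.
    return log_sum_exp - logits[target_index]

# A confident, correct prediction yields a small loss...
low_loss = cross_entropy([5.0, 0.1, -2.0], 0)
# ...while assigning high probability to the wrong token is heavily penalized.
high_loss = cross_entropy([5.0, 0.1, -2.0], 2)
```

Minimizing this quantity over every token position in the training data is what "training an LLM" means at the loss-function level; LoRA changes which parameters are updated, and DPO changes the objective, but both build on this same log-probability machinery.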
Event page: bit.ly/logitsloss
Have a question for a speaker? Drop them here:
app.sli.do/eve...
Speakers:
Dr. Greg, Co-Founder & CEO
/ gregloughane
The Wiz, Co-Founder & CTO
/ csalexiuk
Apply for our new AI Engineering Bootcamp on Maven today!
bit.ly/aie1
For team leaders, check out:
aimakerspace.i...
Join our community to start building, shipping, and sharing with us today!
/ discord
How'd we do? Share your feedback and suggestions for future events.
forms.gle/g4My...

Published: 5 Oct 2024

Comments: 1
@AI-Makerspace (3 months ago):
The Loss Function in LLMs - Cross Entropy: colab.research.google.com/drive/1VEk0mdGYKfPezWDJ4ErajNNlr4pq2Duk?usp=sharing
Event Slides: www.canva.com/design/DAGH7_m5E48/HlpvFnc2VbCsjNiiFGdYnQ/view?DAGH7_m5E48&