Join us as we unravel the essential role of cross-entropy loss in training and fine-tuning Large Language Models (LLMs). Discover how this foundational loss function optimizes predictions, from standard fine-tuning methods like Low-Rank Adaptation (LoRA) to advanced alignment techniques such as Direct Preference Optimization (DPO). Learn how cross-entropy loss makes LLMs more effective at specific tasks and improves their performance. Don't miss this insightful session. Subscribe and watch now!
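For a quick taste of the core idea covered in the session, here is a minimal toy sketch in PyTorch (illustrative only, not code from the event): cross-entropy loss measures how much probability the model assigns to the correct next token, and is the quantity minimized in pretraining, LoRA fine-tuning, and (inside the reward term) DPO.

import torch
import torch.nn.functional as F

# Toy example: a batch of 2 positions over a 5-token vocabulary.
# `logits` are the raw, unnormalized scores an LLM head would output;
# `targets` are the indices of the correct next tokens.
logits = torch.tensor([[2.0, 0.5, 0.3, -1.0, 0.1],
                       [0.2, 1.5, -0.3, 0.8, 0.0]])
targets = torch.tensor([0, 1])

# cross_entropy = softmax over logits + negative log-likelihood
# of the target token, averaged over positions.
loss = F.cross_entropy(logits, targets)

# Equivalent manual computation, for clarity:
log_probs = F.log_softmax(logits, dim=-1)
manual_loss = -log_probs[torch.arange(len(targets)), targets].mean()

print(loss.item(), manual_loss.item())  # identical values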
Event page: bit.ly/logitsloss
Have a question for a speaker? Drop it here:
app.sli.do/eve...
Speakers:
Dr. Greg, Co-Founder & CEO
/ gregloughane
The Wiz, Co-Founder & CTO
/ csalexiuk
Apply for our new AI Engineering Bootcamp on Maven today!
bit.ly/aie1
For team leaders, check out:
aimakerspace.i...
Join our community to start building, shipping, and sharing with us today!
/ discord
How'd we do? Share your feedback and suggestions for future events:
forms.gle/g4My...
5 Oct 2024