
Fine-Tuning Mistral-7B with LoRA (Low Rank Adaptation) 

AI Makerspace

GPT-4 Summary: Dive deep into the innovative world of fine-tuning language models with our comprehensive event, focusing on the groundbreaking Low-Rank Adaptation (LoRA) approach from Hugging Face's Parameter Efficient Fine-Tuning (PEFT) library. Discover how LoRA revolutionizes the industry by significantly reducing trainable parameters without sacrificing performance. Gain practical insights with a hands-on Python tutorial to adapt pre-trained LLMs for specific tasks. Whether you're a seasoned professional or just starting, this event will equip you with a deep understanding of efficient LLM fine-tuning. Join us live for an enlightening session on mastering PEFT and LoRA to transform your models!
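As a rough illustration of what the walkthrough covers, below is a minimal sketch of attaching a LoRA adapter to a pre-trained model with Hugging Face's PEFT library. The base model name, target modules, and hyperparameters are illustrative assumptions rather than the exact settings from the event notebook.

```python
# Minimal LoRA sketch with Hugging Face PEFT (hyperparameters are illustrative).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "mistralai/Mistral-7B-v0.1"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA freezes the base weights and injects small low-rank update matrices
# into selected projections, so only a tiny fraction of parameters is trained.
lora_config = LoraConfig(
    r=8,                                   # rank of the low-rank updates
    lora_alpha=16,                         # scaling factor for the updates
    target_modules=["q_proj", "v_proj"],   # assumed attention projections to adapt
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the reduction in trainable parameters
```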
Event page: lu.ma/llmswithlora
Have a question for a speaker? Drop them here:
app.sli.do/event/cbLiU8BM92Vi...
Speakers:
Dr. Greg Loughnane, Founder & CEO AI Makerspace.
/ greglough. .
Chris Alexiuk, CTO AI Makerspace.
/ csalexiuk
Join our community to start building, shipping, and sharing with us today!
/ discord
Apply for the LLM Ops Cohort on Maven today!
maven.com/aimakerspace/llmops
How'd we do? Share your feedback and suggestions for future events.
forms.gle/U8oeCWxiWLLg6g678

Science

Published: Jan 2, 2024

Comments: 11
@AI-Makerspace 5 months ago
Google Colab notebook: colab.research.google.com/drive/1d0JH7heSuEgVVWv5T4xE3dwS-BihuMGY?usp=sharing Slides: www.canva.com/design/DAF41q231ZI/IasOGgw4g5D1JSa0h-_AXA/edit?DAF41q231ZI&
@csmac3144a 5 months ago
Greg you have dialed in your delivery *perfectly* now for the CTO-ish crowd. Extremely impressive. Pacing is perfect. Required background to get something out of this is perfect: fat part of the curve for IT management / C-level. Great stuff. Keep it up!
@AI-Makerspace 5 months ago
Awesome shoutout @csmac3144a! I'll do my best to follow-up with the encore performance as we go deeper next week! RSVP link for our Quantization event coming later today!
@MannyBernabe 4 months ago
Excellent. Thanks!
@hemanthkumar-tj4hs 5 months ago
Great video, thank you!
@hugoediaz 5 months ago
Thanks!
@AI-Makerspace 5 months ago
Thank you @hugoediaz!!
@DonBranson1 2 months ago
Great video and much appreciated notebook. In the TrainingArguments section of the notebook, is it possible to set a parameter to stop training after a certain loss metric is reached (to avoid overfitting, etc.)?
@AI-Makerspace 2 months ago
Yes! You can use callbacks to achieve this functionality! huggingface.co/docs/transformers/en/main_classes/callback#transformers.EarlyStoppingCallback
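For anyone following along in the notebook, here is a rough sketch of that idea: a custom TrainerCallback (a hypothetical helper, not a transformers built-in) that halts training once eval loss falls below a chosen threshold, alongside the built-in patience-based EarlyStoppingCallback linked above. The threshold value is an illustrative assumption.

```python
# Sketch: stop training once eval loss drops below a chosen threshold.
# Requires an eval dataset and an evaluation strategy set in TrainingArguments.
from transformers import TrainerCallback

class StopAtLossCallback(TrainerCallback):  # hypothetical helper, not a transformers built-in
    def __init__(self, target_eval_loss=1.0):  # illustrative threshold
        self.target_eval_loss = target_eval_loss

    def on_evaluate(self, args, state, control, metrics=None, **kwargs):
        # Trainer passes the latest eval metrics here; flipping this flag halts training.
        if metrics and metrics.get("eval_loss", float("inf")) <= self.target_eval_loss:
            control.should_training_stop = True
        return control

# trainer = Trainer(..., callbacks=[StopAtLossCallback(target_eval_loss=1.0)])
# Or the built-in patience-based variant from the link above:
# from transformers import EarlyStoppingCallback
# trainer = Trainer(..., callbacks=[EarlyStoppingCallback(early_stopping_patience=3)])
```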
@yamansrivastava1729 5 months ago
I have a very specific question. Can you do a video on fine-tuning a quantized Mixtral 8x7B Instruct v0.1 GGUF (Q4_K_M) in a locally created environment on my own device? The Transformers library doesn't work on GGUF files. How can we do it?
@AI-Makerspace 5 months ago
This is indeed a very specific question! We'll be looking to expand more on quantization strategies in the coming months, which will include some information on how to work with GGUF!