
Fine-Tuning Llama 3 on a Custom Dataset: Training LLM for a RAG Q&A Use Case on a Single GPU 

Venelin Valkov
Subscribe · 26K subscribers
12K views

Published: 27 Aug 2024

Comments: 15
@venelin_valkov · a month ago
Full-text tutorial (requires MLExpert Pro): www.mlexpert.io/bootcamp/fine-tuning-llama-3-llm-for-rag What performance did you get with your fine-tuned model?
@karthikb.s.k.4486 · a month ago
How can I buy a monthly subscription? Please share a link; the one posted is for yearly billing, and I need monthly.
@TayyabAhmad007 · a month ago
I trained it with 2 epochs and the result was amazing! Nice explanation btw!!
@francycharuto · a month ago
Could you share your use case?
@fardinahmadpor1225 · a month ago
Thank you! I have watched several videos on Llama fine-tuning, all with lots of differences, and you are the best, especially about the dataset and how you formatted it.
@MecchaKakkoi · a month ago
Great stuff as usual. Very useful info!
@MrQuicast · 15 days ago
I'm trying to fine-tune the LLaMA 3.1 8B model without quantization, but when I use the pipeline with the unquantized model, I encounter this error: Trying to set a tensor of shape torch.Size([128256, 4096]) in 'weight' (which has shape torch.Size([128264, 4096])), this looks incorrect. Do you know why this is happening? Maybe I'm using the wrong pad token. Thanks in advance.
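A likely cause of the error above (a hedged guess, not confirmed in the video): adding new special tokens, such as a pad token, grows the tokenizer's vocabulary, so the checkpoint's embedding matrix (128256 rows for Llama 3) no longer matches the model's embedding layer (here 128264 rows, eight tokens larger). The sketch below reproduces the mismatch with small stand-in `torch.nn.Embedding` layers; the sizes and the `loads_cleanly` helper are illustrative only.

```python
import torch.nn as nn


def loads_cleanly(ckpt_vocab: int, model_vocab: int, hidden: int = 8) -> bool:
    """Return True if a checkpoint embedding fits the model's embedding layer."""
    ckpt = nn.Embedding(ckpt_vocab, hidden)   # what was saved
    model = nn.Embedding(model_vocab, hidden) # what you built
    try:
        model.load_state_dict(ckpt.state_dict())
        return True
    except RuntimeError:  # "size mismatch for weight ..."
        return False


# The situation in the comment: checkpoint has 128256 rows,
# the model expects 128264 (8 added tokens) -> load fails.
print(loads_cleanly(128256, 128264))  # False
print(loads_cleanly(128256, 128256))  # True
```

Two common fixes (again, assumptions rather than the video's method): reuse an existing special token as pad instead of adding one (`tokenizer.pad_token = tokenizer.eos_token`), or, if you did add tokens, call `model.resize_token_embeddings(len(tokenizer))` before loading any saved weights or adapters.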
@bassamry · 11 days ago
Can you share the notebook?
@allaalzoy2010a · 19 days ago
If we apply the same approach to a dataset in another language, such as French or Arabic, will the approach change? Assume the same column structure and names as you showed in the video.
@karthikeyakollu6622 · a month ago
I'm looking for this ❣️
@mohammedaitkheri6200 · 25 days ago
Can we follow the same steps to fine-tune another LLM like Mistral AI?
@cchristoff · a month ago
Will the base model be trainable with question-answer pairs in Bulgarian?
@TheRottweiler_Gemii · a month ago
Can we fine-tune a 2-bit model?
@DataFinSightAI-e1c · 15 days ago
Amazing video