
Generative AI Fine Tuning LLM Models Crash Course 

Krish Naik
975K subscribers
29K views

This video is a crash course on how fine-tuning of LLMs can be performed using QLoRA, LoRA, and quantization, with Llama 2, Gradient, and the Google Gemma model. It covers both the theoretical intuition and the practical steps needed to understand how fine-tuning works.
Timestamps:
00:00:00 Introduction
00:01:20 Quantization Intuition
00:33:44 LoRA and QLoRA In-depth Intuition
00:56:07 Fine-tuning with Llama 2
01:20:16 1-bit LLM In-depth Intuition
01:37:14 Fine-tuning with Google Gemma Models
01:59:26 Building LLM Pipelines with No Code
02:20:14 Fine-tuning with Your Own Custom Data
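The quantization section above boils down to mapping float weights onto a small integer grid. A minimal from-scratch sketch of the absmax (symmetric) int8 scheme, for intuition only — real libraries such as bitsandbytes use more elaborate schemes like NF4:

```python
# Absmax (symmetric) int8 quantization: scale by the largest absolute weight
# so the extreme value maps to +/-127, then round to integers.

def absmax_quantize(weights):
    """Quantize a list of floats to int8 values using the absolute-maximum scale."""
    scale = 127.0 / max(abs(w) for w in weights)
    q = [round(w * scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the quantized integers."""
    return [v / scale for v in q]

weights = [0.5, -1.2, 0.03, 2.4]
q, scale = absmax_quantize(weights)
approx = dequantize(q, scale)
# q holds small integers in [-127, 127]; approx is close to the original weights
```

The rounding step is where precision is lost: the fewer bits (int8 → int4 → 1-bit), the coarser the grid and the larger the reconstruction error, which is the trade-off the course's quantization section walks through.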
Code Github: github.com/krishnaik06/Finetu...
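The LoRA idea from the "LoRA and QLoRA" section can be sketched in a few lines of numpy: the pretrained weight matrix stays frozen, and only two small low-rank factors are trained. The names (W, A, B, r, alpha) follow the LoRA paper's notation; the dimensions are toy values for illustration, not the course's actual training code.

```python
import numpy as np

d, k, r, alpha = 8, 8, 2, 4           # full dims, LoRA rank, scaling factor
rng = np.random.default_rng(0)

W = rng.normal(size=(d, k))           # frozen pretrained weights
B = np.zeros((d, r))                  # LoRA init: B = 0, so the update starts at zero
A = rng.normal(size=(r, k))

def effective_weights(W, B, A, alpha, r):
    """Frozen weights plus the scaled low-rank update (can be merged at inference)."""
    return W + (alpha / r) * (B @ A)

# Before any training, B @ A is zero and the model is unchanged:
assert np.allclose(effective_weights(W, B, A, alpha, r), W)

# Trainable parameters drop from d*k (full fine-tune) to r*(d+k) (LoRA):
full_params = d * k                   # 64
lora_params = r * (d + k)             # 32
```

QLoRA combines this with quantization: the frozen W is stored in 4-bit form while the small A and B factors are trained in higher precision, which is what makes fine-tuning feasible on a single consumer GPU.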
-------------------------------------------------------------------------------------------------
Support me by joining membership so that I can keep uploading videos like this
/ @krishnaik06
-----------------------------------------------------------------------------------
►Generative AI On AWS: • Starting Generative AI...
►Fresh Langchain Playlist: • Fresh And Updated Lang...
►LLM Fine Tuning Playlist: • Steps By Step Tutorial...
►AWS Bedrock Playlist: • Generative AI In AWS-A...
►LlamaIndex Playlist: • Announcing LlamaIndex ...
►Google Gemini Playlist: • Google Is On Another L...
►Langchain Playlist: • Amazing Langchain Seri...
►Data Science Projects:
• Now you Can Crack Any ...
►Learn In One Tutorials
Statistics in 6 hours: • Complete Statistics Fo...
Machine Learning In 6 Hours: • Complete Machine Learn...
Deep Learning 5 hours : • Deep Learning Indepth ...
►Learn In a Week Playlist
Statistics: • Live Day 1- Introducti...
Machine Learning : • Announcing 7 Days Live...
Deep Learning: • 5 Days Live Deep Learn...
NLP : • Announcing NLP Live co...
---------------------------------------------------------------------------------------------------
My Recording Gear
Laptop: amzn.to/4886inY
Office Desk : amzn.to/48nAWcO
Camera: amzn.to/3vcEIHS
Writing Pad: amzn.to/3OuXq41
Monitor: amzn.to/3vcEIHS
Audio Accessories: amzn.to/48nbgxD
Audio Mic: amzn.to/48nbgxD

Published: 26 Jun 2024

Comments: 53
@yogeshmagar452 · 1 month ago
Krish Naik respect button ❤
@BabaAndBaby11 · 1 month ago
Thank you very much Krish for uploading this.
@prekshamishra9750 · 1 month ago
Krish... yet again!! I was just looking for your fine-tuning video here and you uploaded this. I can't thank you enough, really 👍😀
@sohampatil8681 · 1 month ago
Can we connect, brother? I am new to generative AI and want to learn the basics.
@senthilkumarradhakrishnan744 · 1 month ago
Just getting your video at the right time!! Kudos, brother
@Mohama589 · 1 month ago
Full respect bro, from Morocco, MA.
@deekshitht786 · 8 hours ago
You are awesome ❤
@lalaniwerake881 · 28 days ago
Amazing content, big fan of yours :) Much love from Hawaii
@anuradhabalasubramanian9845 · 12 days ago
Awesome presentation, Krish!!!! You are a superstar!!!
@rutujasurve4172 · 1 month ago
Brilliant, brilliant 🙌
@tejasahirrao1103 · 1 month ago
Thank you Krish
@shalabhchaturvedi6290 · 1 month ago
Big salute!
@abhishekvarma4449 · 28 days ago
Thanks Krish, it's very helpful
@shakilkhan4306 · 1 month ago
Thanks man!
@souvikchandra · 25 days ago
Thank you so much for such a comprehensive tutorial. I really love your teaching style. Could you also recommend some books on LLM fine-tuning?
@virkumardevtale9671 · 1 month ago
Thank you very much sir 🎉🎉🎉
@AshokKumar-mg1wx · 1 month ago
Krish bro ❤
@foysalmamun5106 · 1 month ago
Hi @krishnaik06, thank you again for another crash course. May I know which tools/software you are using for the presentation?
@sadiazaman7903 · 1 month ago
Thank you for an amazing course, as always. Can we please get these notes as well? They are really good for quick revision.
@tejasahirrao1103 · 1 month ago
Please make a complete playlist on securing a job in the field of AI.
@Jeganbaskaran · 1 month ago
Krish, most fine-tuning is done with existing datasets from HF. However, converting a dataset into the required format is challenging for any domain dataset. How can we train on our own data to fine-tune the model so that accuracy will be even better? Any thoughts?
@EkNidhi · 1 month ago
We want more videos on fine-tuning projects.
@yashshukla7025 · 1 month ago
Can you make a good video on how to decide hyperparameters when training GPT-3.5?
@arunkrishna1036 · 1 month ago
After the fine-tuning process in this video, isn't it the same old model that is used to test the queries? We should have tested the queries with the "new_model", shouldn't we?
@AntonyPraveenkumar · 1 month ago
Hi Krish, the video is really good and easy to understand, but I have one question: how did you choose the right dataset, and why? Why did you choose the format_func function to format the dataset into that particular format? If you have any tutorial or blog, please share the link.
@rotimiakanni8436 · 1 month ago
Hi Krish. What device do you use to write on, like a board?
@rebhuroy3713 · 1 month ago
Hi Krish, I have seen the entire video. I am confused by two things. Sometimes you said it's possible to train with my own data (own data meaning a URL, PDFs, simple text, etc.), but when you trained the LLM you gave the inputs in a certain format, like "### question: answer". Now, if I want to fine-tune my LLM in a real-life scenario, I don't have my data in that instruction format; in that case, what should I do? It's not easy to transform raw text into that format. Is fine-tuning only possible with a specific format, or can I train on raw text? I know a process where I convert my text into chunks and then pass them to the LLM. These points are really confusing; can you clarify them?
@maximusrayvego2673 · 18 hours ago
Hey, could you tell me the prerequisites for following this crash course? It would be greatly beneficial!!
@rishiraj2548 · 1 month ago
🙏💯💯
@sibims653 · 1 month ago
What documentation did you refer to in this video?
@anupampandey1235 · 1 month ago
Hello Krish sir, thank you for the amazing lecture. Can you please share the notes of the session?
@rakeshpanigrahy7000 · 1 month ago
Hi sir, I have tried your Llama fine-tuning notebook on Colab with the free T4 GPU, but it is throwing an OOM error. Could you please guide me?
@adityavipradas3252 · 1 month ago
RAG or fine-tuning? How should one decide?
@nitinjain4519 · 1 month ago
Can anyone suggest how to analyze audio for soft skills in speech using Python and LLM models?
@SheiphanJoseph · 1 month ago
Please also provide the sources: the research papers/blogs you might have referred to for this video.
@miltonvai5816 · 20 days ago
Hello Sir, could you create a tutorial on fine-tuning vision-language models like LLaVA or multimodal LLMs like IDEFICS for Visual Question Answering on datasets like VQA-RAD, including evaluation metrics? Please make a full step-by-step tutorial. Thanks in advance!
@hassanahmad1483 · 1 month ago
How do you deploy these? I have seen deployment of custom LLM models; how is this done?
@JunaidKhan80121 · 1 month ago
Can anybody tell me how to fine-tune an LLM for multiple tasks?
@rahulmanocha4533 · 1 month ago
If I would like to join the data science community group, where can I get access? Please let me know.
@nitishbyahut25 · 1 month ago
Prerequisites?
@stalinjohn721 · 21 days ago
How to fine-tune and quantize the Phi-3 mini model?
@abhaypatel2585 · 23 days ago
Actually sir, I can't run this step: !pip install -q datasets and !huggingface-cli login. Because of this, the dataset can't be loaded and I'm getting errors in the other steps. Is there any solution for this?
@deepaksingh-qd7xm · 1 month ago
I don't know why, but I feel that training a whole model from scratch would be much easier for me than fine-tuning one.
@kartik1409 · 24 days ago
Yes, training a model from scratch on your dataset might look better and more optimal, but the energy used in training from scratch is too high, so fine-tuning a pretrained model is considered a better option than training a model for specific data every time.
@salihedneer8975 · 1 month ago
Prerequisites?
@charithk2160 · 1 month ago
Hey Krish, can you by any chance share the notes used in the video? That would be really helpful. Thanks!!
@avdsuresh · 1 month ago
Hello Krishna sir, please make a playlist for GenAI and LangChain.
@krishnaik06 · 1 month ago
Already made, please check.
@avdsuresh · 1 month ago
@@krishnaik06 Thank you for replying to me.
@shivam_shahi · 1 month ago
There is only one heart, sir, how many times will you win it?
@navjeetkaur3014 · 14 days ago
@krishnaik06 WANDB_DISABLED disables Weights & Biases logging for the current model run.
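The flag mentioned above can be sketched in one line; this assumes the course notebook uses the Hugging Face transformers Trainer, whose W&B integration checks this environment variable (passing report_to="none" in TrainingArguments is the other common way):

```python
# Disable Weights & Biases logging for a Hugging Face Trainer run.
# The variable must be set before the Trainer is created.
import os

os.environ["WANDB_DISABLED"] = "true"
```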
@RensLive · 1 month ago
I understand this video just like your hair: sometimes nothing, sometimes something ❤🫠