
Llama - EXPLAINED! 

CodeEmporium
126K subscribers
31K views

Published: 29 Aug 2024

Comments: 66
@CodeEmporium 1 year ago
Would you like to see more videos on Llama? Let me know. Have a wonderful day :)
@paisanareeprasertkul1950 1 year ago
Yes, definitely. One of the best explanations of the topic!
@manusrivastava2047 1 year ago
Great video, I love how well-structured and informative your videos are. Would love to see how to use word embeddings from Llama 2 or another language model for transfer learning. Thanks and keep up the good work!
@ozne_2358 1 year ago
Yes, please. More details on the code, how the parameters are initialized from the parameter file and used in the various stages.
@scitechtalktv9742 11 months ago
I am struggling to get Llama 2 to work reliably with Dutch, so that you can pose questions in Dutch and have Llama 2 answer in Dutch. (This is likely because Llama 2 is trained on data that contains very little Dutch.) I have had some success using special prompts, but sometimes it unexpectedly switches back to English. What technique(s) can I use to solve this? My use case: I have Dutch texts that I want to query in Dutch by means of Retrieval Augmented Generation (RAG), using a Llama 2 LLM, and get answers in correct Dutch.
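One common approach, shown here as a minimal sketch (not from the video; the Dutch system-prompt wording and the `retrieved_chunks` placeholder are assumptions), is to pin the output language in the system prompt of the Llama 2 chat template and to phrase the final instruction itself in Dutch:

```python
# Minimal sketch: force Dutch answers in a Llama 2 RAG prompt.
# The system prompt text and retrieved_chunks are hypothetical placeholders.
SYSTEM_NL = (
    "Je bent een behulpzame assistent. "
    "Antwoord ALTIJD in het Nederlands, ook als de context Engels bevat."
)

def build_prompt(question: str, retrieved_chunks: list[str]) -> str:
    context = "\n\n".join(retrieved_chunks)  # passages returned by the RAG retriever
    user_turn = (
        f"Context:\n{context}\n\n"
        f"Vraag: {question}\n"
        "Antwoord in het Nederlands:"
    )
    # Llama 2 chat format: [INST] <<SYS>> system <</SYS>> user [/INST]
    return f"<s>[INST] <<SYS>>\n{SYSTEM_NL}\n<</SYS>>\n\n{user_turn} [/INST]"

print(build_prompt("Wat is het hoofdonderwerp van dit document?", ["..."]))
```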
@user-yi8vs7lb7d 11 months ago
I'm waiting for the video!
@aurkom 1 year ago
Would love a deep dive into stuff like LoRA and quantization (the bitsandbytes library) as well. Perhaps doing it from scratch in PyTorch!
@CodeEmporium 1 year ago
Perfect. I have coded out the transformer from scratch using PyTorch. Maybe I’ll think of a similar series for llama :)
@jeswer9 9 months ago
Yes please, more deep dives into the code! Super valuable video because of that part.
@pipinstallyp 1 year ago
Hey, thanks a lot for your videos. Your video on transformer attention ("Attention Is All You Need") helped me build an intuition back before transformers were really cool. It's lovely to see your video on Llama, as I actively get to fine-tune Llama on a day-to-day basis :) Much love.
@CodeEmporium 1 year ago
Super happy to hear! Thanks so much for watching :)
@share4713 1 year ago
The more videos I watch, the more I understand a subject. This is probably because I can now see the subject from different angles and perspectives. I now have a better intuition for transformer architectures and can code them from scratch, thank you.
@abhijitnayak1639 11 months ago
Thank you for such an insightful video. Would definitely love a deep-dive video on the architecture and code of Llama 2. Could you please also do an implementation of BERT or RoBERTa fine-tuning (with the training process optimized via DeepSpeed)? Thanks again!!
@dan1ar 1 year ago
Great video! Looking forward to a deep dive into the Llama code.
@CodeEmporium 1 year ago
Sure thing. I have slated it on my TODOs :) Thank you for watching
@dollarscholar2956 1 year ago
Clear, informative, well presented. Great video!
@CodeEmporium 1 year ago
Thanks so much for commenting :)
@YashVerma-ii8lx 7 months ago
Thank you so much for explaining brother! Would be really great if you could give a code walkthrough video as well!
@dikshyakasaju7541 1 year ago
Very informative!! Would be sick if you could dive deeper.
@CodeEmporium 1 year ago
Yes! Thanks for watching! Will think about it as a future video / series.
@prasadraavi390 8 months ago
Beautifully explained. Thank you. Yes, I want to know more about its architecture too.
@gopalakrishna9651 7 months ago
Yes, please: a deep dive into the architecture and a code walkthrough if possible. Thanks a lot for the video. May God's blessings be with you.
@dinoscheidt 1 year ago
Commenting for the algorithm. Very well explained. You have a talent!
@CodeEmporium 1 year ago
Much appreciated! Thank you!
@steel-r_ua 5 months ago
Thanks for the great video and a GREAT way of presenting data and showing the code!
@naevan1 9 months ago
Amazing work, man. One of my favourite deep learning creators!
@DaTruAndi 1 year ago
I think you didn't describe RLHF fully. What you described was more SFT; you seemingly skipped mentioning the reward model explicitly. Maybe you meant it implicitly, but it could help to clarify this part of the reinforcement learning.
@rogermenezes 1 year ago
He has a very good series called "ChatGPT Explained" where he goes into a detailed explanation of RLHF: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-_MPJ3CyDokU.html
@CodeEmporium 1 year ago
Yea, that's true. I mentioned this as "humans determining what is a better answer" when I probably should have said "humans determine the better answer to train the reward model(s), and this in turn is used with the original fine-tuned model to further fine-tune it. And this happens via some proximal policy optimization" ~ or maybe something along these lines. Thanks for pointing it out. I'll clarify this in some follow-up videos in the near future too.
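For readers following along, here is a minimal sketch of the reward-model step being described (not the video's code; `reward_model` is a hypothetical module that returns one scalar score per tokenized sequence). Human preference pairs train the scorer with a pairwise ranking loss, and PPO later uses that scorer as its reward signal:

```python
import torch
import torch.nn.functional as F

# Sketch of the reward-model objective in RLHF. Humans label which of two
# responses to the same prompt is better; the pairwise loss pushes the score
# of the chosen response above the score of the rejected one.
def reward_loss(reward_model, chosen_ids, rejected_ids):
    r_chosen = reward_model(chosen_ids)      # shape: (batch,)
    r_rejected = reward_model(rejected_ids)  # shape: (batch,)
    # -log sigmoid(r_chosen - r_rejected): minimized when chosen outscores rejected
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# During PPO fine-tuning, the frozen reward model scores the policy's generations,
# and that score (plus a KL penalty toward the SFT model) drives the update.
```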
@abzs5811 6 months ago
@CodeEmporium lost me fam
@alexandertakele7528 7 months ago
Thank you so much
@prasadraavi390 8 months ago
Beautifully explained. Thank you.
@popamaji 1 year ago
I have not implemented the code for a decoder-only model, so I have 3 questions: 1. Does it use the triangular mask? I have heard from two sources that it does, but I don't get it: since we only feed inputs and not outputs (unlike the original transformer), how does a triangular mask on the input data make sense? 2. Why is it called `decoder only`? The architecture seems much closer to the encoder part of the original transformer model than to its decoder part, especially when the mask seems no different from the original encoder's. 3. Is it autoregressive, or can it still act as an autoencoder and produce the outputs in one pass?
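On questions 1 and 3, a minimal sketch (not from the video) may help: during training the whole token sequence is fed in at once, and the triangular (causal) mask stops each position from attending to later positions, so every position learns to predict the next token; at inference this makes generation autoregressive rather than one-pass:

```python
import torch

# Causal/triangular mask in a decoder-only model. The "input" is the full
# sequence; masking the upper triangle means position i only sees positions <= i.
seq_len = 5
scores = torch.randn(seq_len, seq_len)                        # raw attention scores Q @ K^T
causal_mask = torch.tril(torch.ones(seq_len, seq_len)).bool() # lower-triangular = allowed
scores = scores.masked_fill(~causal_mask, float("-inf"))      # block future positions
attn = torch.softmax(scores, dim=-1)                          # upper triangle gets weight 0
print(attn)
```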
@popamaji 1 year ago
Please make a video about how the generative aspect and reinforcement learning are used in language models.
@spydeyftw 11 months ago
Good explanation with proper understanding!
@shreyojitdas9333 18 days ago
Please, we need a deep dive, sir!
@andresg297 8 months ago
Excellent explanation. Thank you
@adarshsaurabh7871 1 year ago
Can you please help me? I have multiple doubts. Since all of these models are LLMs that generate the next word based on the previous words, can I fine-tune them on any type of data? For example, I would like to make a model that can write poems and shayari for me, so can I train them for this task? Also, since Llama doesn't have an encoder, isn't that a disadvantage? And can you please make a video on the encoder and decoder and their specific details? Please 🤓🤓
@ajaytaneja111 1 year ago
Hi Ajay, would love to hear your insights on PEFT - the theoretical aspects, of course. I have seen a lot of videos on PEFT and done some reading too. The theoretical aspects are not well explained.
@CodeEmporium 1 year ago
Ajay! Yea, for sure. I am interested in learning more about this too. I'll read more and make some content on this soon :)
@ruksharalam173 11 months ago
It'd be great if you could please dig deeper into the Llama code and architecture.
@jiaxingyu8300 11 months ago
Nice explanation!
@lakshman587 5 days ago
Please, a detailed explanation!!
@xinyaoyin2238 1 day ago
It is just a nerfed but faster transformer.
@NicholasRenotte 1 year ago
1.8k and closing in my boi!!!!
@CodeEmporium 1 year ago
Ma guy. I will join the ranks of the 6 digit sub counts
@tunkskabulungana46 4 months ago
You said Llama is an 8-language model; which languages are they? 😮
@naevan1 9 months ago
Would you be interested in making a guide on fine-tuning Llama 2, or do you think it is oversaturated?
@younessamih3188 1 year ago
Very helpful! That would be great...
@CodeEmporium 1 year ago
Thanks so much! I’ll think of a deep dive as a future video / series
@StrangeMemes52 1 year ago
Wow, amazing video 😁. So how is a language model fine-tuned after training? I mean, how does this fine-tuning work?
@CodeEmporium 1 year ago
Fine-tuning is done depending on the specific task you want. In Llama Chat's and ChatGPT's case, we want fine-tuning for question answering. So we feed the model a bunch of question + answer pairs and the model parameters are "fine-tuned". Hope this helps.
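As a rough illustration of that idea, here is a minimal supervised fine-tuning sketch (not the video's code; `model` is a hypothetical decoder-only network returning logits, and the tensors are assumed to be already tokenized). The question and answer are concatenated, and the next-token cross-entropy loss is computed only over the answer tokens:

```python
import torch
import torch.nn.functional as F

# Sketch of supervised fine-tuning on a single question + answer pair.
# Prompt tokens are masked out of the loss so the model learns to produce
# the answer rather than to reproduce the question.
def sft_loss(model, prompt_ids, answer_ids):
    input_ids = torch.cat([prompt_ids, answer_ids], dim=-1)   # shape: (1, T)
    logits = model(input_ids)                                  # shape: (1, T, vocab)
    shift_logits = logits[:, :-1, :]                           # predict token t+1 from <= t
    shift_labels = input_ids[:, 1:].clone()
    shift_labels[:, : prompt_ids.size(1) - 1] = -100           # ignore prompt positions
    return F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
        ignore_index=-100,
    )
```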
@popamaji 1 year ago
Is this a decoder in simplified form, or an encoder with a decoder mask?
@azai.online 11 months ago
I do like Llama 2 and found it easy to use. I am using it in my own multi-application platform and it's great.
@ajaytaneja111 1 year ago
Hi Ajay, I have been reading the Llama 2 research paper. They talk a lot about safety during pre-training, as you might have seen. Do you think they score over GPT in this aspect?
@CodeEmporium 1 year ago
Yea. That 77-page Llama 2 dissertation definitely makes the claim that it is safer. They have sections and infographics dedicated to showing this as well. That said, I would need to check how much of this safety is incorporated in the pre-training as well. I didn't think there would be much in this phase. But I haven't read the entire dissertation, so I may be wrong.
@ajaytaneja111 1 year ago
Hi Ajay, I suppose they do grouped-query attention and not multi-head attention.
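For context, grouped-query attention (used in the larger Llama 2 models) keeps all the query heads but shares a smaller set of key/value heads across groups, which shrinks the KV cache. A minimal shapes-only sketch (an illustration, not Meta's or the video's code):

```python
import torch

# Grouped-query attention: 8 query heads share 2 key/value heads.
batch, seq, n_q_heads, n_kv_heads, head_dim = 2, 16, 8, 2, 64
q = torch.randn(batch, n_q_heads, seq, head_dim)
k = torch.randn(batch, n_kv_heads, seq, head_dim)
v = torch.randn(batch, n_kv_heads, seq, head_dim)

# Repeat each KV head so every group of query heads sees its shared K/V
group = n_q_heads // n_kv_heads                 # 4 query heads per KV head
k = k.repeat_interleave(group, dim=1)           # -> (batch, n_q_heads, seq, head_dim)
v = v.repeat_interleave(group, dim=1)

attn = torch.softmax(q @ k.transpose(-2, -1) / head_dim ** 0.5, dim=-1) @ v
print(attn.shape)                               # torch.Size([2, 8, 16, 64])
```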
@CodeEmporium 1 year ago
I'll need to check the fine-grained details. Thanks for the heads up. If so, I'll address this in that future video.
@ajaytaneja111 1 year ago
Thanks for the response, Ajay. As always, great video.