
What is LoRA? Low-Rank Adaptation for finetuning LLMs EXPLAINED 

AI Coffee Break with Letitia
49K subscribers
43K views

Published: 26 Sep 2024

Comments: 72
@MikeTon · 8 months ago
Insightful: especially the comparison of LoRA to prefix tuning and adapters at the end!
@AICoffeeBreak · 8 months ago
Thank you! Glad you liked it.
@soulfuljourney22 · 3 months ago
The concept of the rank of a matrix, taught in such an effective way.
@AICoffeeBreak · 3 months ago
Cheers!
@wholenutsanddonuts5741 · 1 year ago
I’ve been using LoRAs for a while now but didn’t have a great understanding of how they work. Thank you for the explainer!
@wholenutsanddonuts5741 · 1 year ago
I assume this works the same for diffusion models like Stable Diffusion?
@AICoffeeBreak · 1 year ago
For any neural network. You just need to figure out, based on your application, which matrices to reduce and which not.
@wholenutsanddonuts5741 · 1 year ago
@AICoffeeBreak So super easy then! 😂 Seriously though, that’s awesome to know!
@michelcusteau3184 · 8 months ago
By far the clearest explanation on YouTube.
@AICoffeeBreak · 8 months ago
Thank you very much for the visit and for leaving this heartwarming comment!
@elinetshaaf75 · 7 months ago
True!
@SoulessGinge · 7 months ago
Very clear and straightforward. The explanation of matrix rank was especially helpful. Thank you for the video.
@AICoffeeBreak · 7 months ago
Thank you for the visit! Hope to see you again soon!
@rockapedra1130 · 1 year ago
Perfect. This is exactly what I wanted to know. "Bite-sized" is right!
@minkijung3 · 6 months ago
Thanks, Letitia. Your explanation was very clear and helpful for understanding the paper.
@AICoffeeBreak · 6 months ago
I'm so glad it's helpful to you!
@deviprasadkhatua · 8 months ago
Excellent explanation. Thanks!
@AICoffeeBreak · 8 months ago
Glad you enjoyed it!
@Lanc840930 · 1 year ago
Very comprehensive explanation! Thank you.
@Lanc840930 · 1 year ago
Thanks a lot. And I have a question about "linear dependence": is this mentioned in the original paper?
@AICoffeeBreak · 1 year ago
The paper talks about the rank of a matrix, so about linear dependence between rows / columns.
@Lanc840930 · 1 year ago
Oh, I see! Thank you 😊
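A minimal NumPy sketch of the rank / linear-dependence point from this exchange: rows that are linear combinations of other rows add nothing to a matrix's rank, which is what makes a low-rank factorization so compact.

```python
import numpy as np

# Rows 2 and 3 are just multiples of row 1, so only one row is linearly independent.
W = np.array([
    [1.0, 2.0, 3.0],
    [2.0, 4.0, 6.0],  # 2 * row 1
    [3.0, 6.0, 9.0],  # 3 * row 1
])
print(np.linalg.matrix_rank(W))  # -> 1

# A random matrix of the same size almost surely has full rank.
print(np.linalg.matrix_rank(np.random.randn(3, 3)))  # -> 3
```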
@darrensapalo · 2 months ago
Great explanation. Thank you!
@m.rr.c.1570 · 7 months ago
Thank you for clearing up the concepts behind LoRA.
@AICoffeeBreak · 7 months ago
@ambivalentrecord · 1 year ago
Great explanation, Letitia!
@AICoffeeBreak · 1 year ago
Glad you think so! 😄
@keshavsingh489 · 1 year ago
Such a simple explanation, thank you so much!!
@karndeepsingh · 1 year ago
Thanks again for an amazing video. I would also like to request a detailed video on Flash Attention. Thanks!
@AICoffeeBreak · 1 year ago
Noted. It's on The List. Thanks! 😄
@kindoblue · 1 year ago
Loved the explanation. Thanks!
@outliier · 1 year ago
What a great topic!
@DerPylz · 1 year ago
Yay, thanks!
@jarj5313 · 4 months ago
Thanks, that was a great explanation!
@amelieschreiber6502 · 1 year ago
LoRA is awesome! It also helps with overfitting in protein language models. Cool video!
@AnthonyGarland · 1 year ago
Thanks!
@AICoffeeBreak · 1 year ago
Wow, thanks a lot! 😁
@varun-h9e · 8 months ago
First of all, thanks for the amazing video. Could you also make a video about QLoRA?
@butterkaffee910 · 1 year ago
I love LoRA ❤ even for ViTs.
@deepak_kori · 9 months ago
You are just amazing >>> so beautiful, so elegant, just wow 😇😇
@bdennyw1 · 1 year ago
Fantastic video as always. QLoRA is even better if you are GPU-poor like me.
@dr.mikeybee · 1 year ago
If we knew which abstractions were handled layer by layer, we could make sure that the individual layers were trained to completely learn those abstractions. Let's hope Max Tegmark's work on introspection gets us there.
@alirezafarzaneh2539 · 4 months ago
Thanks for the simple and educational video! If I'm not mistaken, prefix tuning is pretty much the same as embedding vectors in diffusion models! How cool is that? 😀
@Micetticat · 1 year ago
LoRA: how can it be so simple? 🤯
@AICoffeeBreak · 1 year ago
It kind of tells us that fine-tuning all the parameters in an LM is overkill.
@terjeoseberg990 · 9 months ago
I thought this was about long-range, wide-band radio communication.
@AICoffeeBreak · 9 months ago
🤣🤣
@ArunkumarMTamil · 4 months ago
How does LoRA fine-tuning track the weight changes through the two decomposition matrices? How is ΔW determined?
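For reference, a sketch of the standard LoRA formulation the question refers to: ΔW is never learned as a full matrix; it is parameterized as the product BA, and A and B are what gradient descent actually updates while W_0 stays frozen.

```latex
h = W_0 x + \Delta W\, x = W_0 x + B A x,
\qquad
W_0 \in \mathbb{R}^{d \times k},\quad
B \in \mathbb{R}^{d \times r},\quad
A \in \mathbb{R}^{r \times k},\quad
r \ll \min(d, k)
```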
@floriankowarsch8682 · 1 year ago
As always, amazing content! 😌 It's perfect for refreshing knowledge & learning something new. What I find interesting about LoRA is how strongly it actually regularizes fine-tuning: is it possible to overfit when using a very small matrix in LoRA? Can LoRA also harm optimization?
@TheRyulord · 1 year ago
Still possible to overfit, but more resistant to overfitting compared to a full finetune. All the work I've seen on LoRAs says it's just as good as a full finetune in terms of task performance, as long as your rank is high enough for the task. What's interesting is that the necessary rank is usually quite low (around 2) even for relatively big models (LLaMA 7B) and reasonably complex tasks. At least that's the case for language modelling; it might be different for other domains.
@AIShipped · 1 year ago
Why use full weight matrices to start with if you can use the LoRA representation? Assuming you gain space, the only downside I can think of is the additional compute to reconstruct the weight matrix, but that should be smaller than the gain from speeding up backpropagation.
@AICoffeeBreak · 11 months ago
Thanks for this question. You do not actually start with the weight matrices: you learn A and B directly, and from them you reconstruct the delta W matrix. Sorry this was not clear enough in the video.
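A minimal PyTorch-style sketch of what that looks like (illustrative names, not code from the video): the pretrained weight is frozen, A and B are the only trainable parameters, and their product plays the role of ΔW without ever being stored as a full matrix during training.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a trainable low-rank update: y = W0 x + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # freeze W0 (and its bias)
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)  # random init
        self.B = nn.Parameter(torch.zeros(base.out_features, r))        # zero init, so BA = 0 at the start
        self.scale = alpha / r

    def forward(self, x):
        # Apply A then B instead of ever materializing the full d x k matrix B @ A.
        return self.base(x) + self.scale * ((x @ self.A.T) @ self.B.T)

layer = LoRALinear(nn.Linear(1024, 1024), r=8)
out = layer(torch.randn(4, 1024))  # only A and B receive gradients during fine-tuning
```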
@alislounge · 4 months ago
Which one is the most and which the least compute-efficient: adapters, prefix tuning, or LoRA?
@onomatopeia891 · 7 months ago
Thanks! But how do we determine the correct rank? Is it just trial and error with the value of r?
@AICoffeeBreak · 7 months ago
Exactly. At least so far. Maybe some theoretical understanding will come up in time.
@kunalnikam9112 · 5 months ago
In LoRA, W_updated = W_0 + BA, where B and A are the low-rank decomposition matrices. I wanted to ask what the parameters of B and A represent: are they both parameters of the pretrained model, are they both parameters learned on the target dataset, or does one (B) represent pretrained-model parameters and the other (A) target-dataset parameters? Please answer as soon as possible.
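For what it's worth, in the standard LoRA setup neither A nor B comes from the pretrained model: W_0 holds the frozen pretrained weights, while A and B are new matrices initialized from scratch and learned on the target data. A back-of-the-envelope sketch of the resulting parameter savings (illustrative numbers):

```python
# W0: frozen pretrained weight; A, B: new trainable matrices learned on the target task.
d, k, r = 4096, 4096, 8        # e.g. one attention projection in a 7B-class model, rank 8
full_finetune_params = d * k   # 16,777,216 trainable parameters for full fine-tuning
lora_params = r * (d + k)      # 65,536 trainable parameters with LoRA
print(full_finetune_params // lora_params)  # -> 256x fewer trainable parameters
```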
@yacinegaci2831 · 10 months ago
Great explanation, thanks for the video! I have a lingering question about LoRA: is it necessary to approximate the low-rank matrices of the difference weights (the ΔW in the video), or can we reduce the size of the original weight matrices instead? If I understood the video correctly, at the end of LoRA training I have the full parameters of the original model + the difference weights (in reduced size). My question is: why can't I learn low-rank matrices for the original weights as well?
@AICoffeeBreak · 9 months ago
Hi, in principle you can, even though I would expect you could lose some model performance. The idea of fine-tuning with LoRA is that the small fine-tuning updates should have low rank. BUT there is work using LoRA for pretraining, called ReLoRA. Here is the paper 👉 arxiv.org/pdf/2307.05695.pdf There is also this discussion on Reddit: 👉 www.reddit.com/r/MachineLearning/comments/13upogz/d_lora_weight_merge_every_n_step_for_pretraining/
@yacinegaci2831 · 7 months ago
@AICoffeeBreak Oh, that's amazing. Thanks for the answer, for the links, and for your great videos :)
@mkamp · 1 year ago
Absolutely awesome explanation. I would also like to get your take on LoRA vs (IA)³. It seems that people still prefer LoRA over (IA)³ even though the latter has slightly higher performance?
@ryanhewitt9902 · 11 months ago
Aren't we effectively using the same kind of trick when we train the transformer encoder / self-attention block? Assuming row vectors, we can use the form W_v⋅v.T⋅k⋅W_k.T⋅W_q⋅q.T. Ignoring the *application* of attention and focusing on its calculation, we get the form k⋅W_k.T⋅W_q⋅q.T. Since W_k and W_q are projection matrices from the embedding length to dimension D_k, we have the same sort of low-rank decomposition, where D_k corresponds to the "r" in your video. Is that right?
@mesochild · 5 months ago
What do I have to learn to understand this? Help, please.
@davidromero1373 · 11 months ago
Hi, a question: can we use LoRA just to reduce the size of a model and run inference, or do we always have to train it?
@AICoffeeBreak · 11 months ago
LoRA just reduces the number of trainable parameters for fine-tuning. The number of parameters of the original model stays the same.
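A small NumPy sketch of the practical consequence: after fine-tuning, the low-rank update can be merged into the original weight, so inference uses a single matrix of the original size. LoRA neither shrinks the model nor adds inference-time parameters.

```python
import numpy as np

d, k, r = 1024, 1024, 8
W0 = np.random.randn(d, k)           # frozen pretrained weight
B = np.random.randn(d, r) * 0.01     # learned low-rank factors (illustrative values)
A = np.random.randn(r, k) * 0.01

W_merged = W0 + B @ A                # same shape as W0
x = np.random.randn(k)
assert np.allclose(W_merged @ x, W0 @ x + B @ (A @ x))  # merged weight gives the same output
```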
@dineth9d · 11 months ago
Thanks!
@AICoffeeBreak · 11 months ago
Welcome! :)
@pranav_tushar_sg · 9 months ago
Thanks!
@AICoffeeBreak · 9 months ago
You're welcome!
@thecodest2498 · 7 months ago
Thank you sooooo much for this video. I started reading the paper and was quite terrified by it, so I thought I should watch a YouTube video. I watched one and fell asleep halfway through. When I woke up I stumbled across your video, your coffee woke me up, and now I get LoRA. Thanks for your efforts.
@AICoffeeBreak · 7 months ago
Wow, this warms my coffee heart, thanks!