
LambdaNetworks: Modeling long-range Interactions without Attention (Paper Explained) 

Yannic Kilcher
261K subscribers
48K views

Published: 28 Aug 2024

Comments: 80
@lucidraisin · 3 years ago
Lol, was not expecting a shoutout at the end :D Thanks for another great video!
@linminhtoo · 3 years ago
Good job! I really like the use of einsum.
@playfuladventurer · 3 years ago
Thanks for the code! Have you tried reproducing some of the results?
@nikitadiaconunichita · 3 years ago
"Attention mechanism extremely briefly, extremely briefly" 🤣😅🤣😅 I guess it's for the people watching your videos for the first time. Love your content.
@charlesfoster6326 · 3 years ago
In a nutshell, my understanding is that the Lambda Layer works using a similar rearranging trick as in "Transformers are RNNs". Instead of doing attention over positions (i.e. NxN), it ends up doing attention over features (i.e. DxK). That's why it isn't O(N^2).
@charlesfoster6326 · 3 years ago
This is also why you need to change the positional encoding strategy to use a separate path. Otherwise it will be difficult for the network to properly route info based on positional information.
@3145mimosa · 3 years ago
This is an excellent insight. Thank you!
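To make the rearranging trick above concrete, here is a minimal single-head sketch of a lambda layer in PyTorch einsum notation. The shapes and names are my own assumptions, loosely following the paper and lucidrains' lambda-networks code rather than reproducing either exactly: the content lambda is a small k x v matrix built from the context, and the positional path uses its own learned embeddings E, so no n x n attention map is ever materialized.

```python
import torch

def lambda_layer(x, E, Wq, Wk, Wv):
    # x: (b, n, d) inputs, used as their own context; E: (n, n, k) learned position embeddings
    q = torch.einsum('bnd,dk->bnk', x, Wq)             # queries
    k = torch.einsum('bmd,dk->bmk', x, Wk)             # keys
    v = torch.einsum('bmd,dv->bmv', x, Wv)             # values
    k = k.softmax(dim=1)                               # normalize keys over context positions m
    lam_c = torch.einsum('bmk,bmv->bkv', k, v)         # content lambda: (b, k, v), independent of n
    lam_p = torch.einsum('nmk,bmv->bnkv', E, v)        # position lambdas: one (k, v) matrix per query position
    return torch.einsum('bnk,bkv->bnv', q, lam_c) + torch.einsum('bnk,bnkv->bnv', q, lam_p)

# usage sketch with assumed sizes: d=32, k=16, v=64, n=m=49
# x = torch.randn(2, 49, 32); E = torch.randn(49, 49, 16)
# out = lambda_layer(x, E, torch.randn(32, 16), torch.randn(32, 16), torch.randn(32, 64))  # (2, 49, 64)
```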
@01FNG · 3 years ago
It feels like the stage is set for a more general theory that can unify all of these ideas into one.
@AvastarBin · 3 years ago
So much! It really feels like we're going around in circles, but we're approaching the right answer!
@scottmiller2591 · 3 years ago
Thanks Yannic, you made my day.
@maxkleinebrahm2174 · 3 years ago
Great channel!!! It would be nice to see a review paper/video comparing all the longformers, sparse transformers, linear transformers, linformers, reformers, performers, lambdanetworks, ...
@luckyshadowtux · 3 years ago
That would be great
@yuangwang7772 · 3 years ago
I was just thinking about the next Transformer Yannic would cover, and here it comes!
@FREELEARNING · 3 years ago
Interesting explanation. Looking forward to Yannic Kilcher V2, i.e. a code explainer: maybe you could add 10 to 15 min at the end to explain the original paper's code. That would be very helpful. Thank you for the great effort.
@linminhtoo · 3 years ago
This would be super helpful, especially to budding researchers!
@hitomihilbert5359 · 3 years ago
I think the most important thing is in Appendix C: "LambdaNetworks can alternatively be viewed as an extension of HyperNetworks (Ha et al., 2016) that dynamically compute their computations based on the inputs contexts." It may be much easier to understand the paper from this perspective XD
@Navhkrin · 3 years ago
Another day, another great paper explanation.
@drhilm · 3 years ago
I was in the middle of asking Yannic to do this paper, and before I even finished - voilà!
@Cl0udn1n3 · 1 year ago
“This is Quick Ben’s game, O Elder. The bones are in his sweaty hands and they have been for some time. Now, if at his table you’ll find the Worm of Autumn, and the once Lord of Death, and Shadowthrone and Cotillion, not to mention the past players Anomander Rake and Dessembrae, and who knows who else, well - did you really believe a few thousand damned Nah’ruk could take him down? The thing about Adaephon Delat’s game is this: he cheats.” To give the turtles some prime reading material...
@nakshatrasingh8204 · 3 years ago
Could you do a video on the Performer, a linear attention-based transformer variant with a Fast Attention Via positive Orthogonal Random features approach (FAVOR+)?
@nicolascarrara4890 · 3 years ago
Please keep up the nice work!
@weizhu2230 · 3 years ago
This is pretty similar to diff pooling in GNNs, where we just get an indicator matrix through some black-box transformation.
@anuragmalyala4863 · 3 years ago
Noob question: which app are you using for the paper annotations?
@YannicKilcher · 3 years ago
OneNote
@kajalsinha2468 · 3 years ago
I am here 40 seconds after upload
@that_guy4690 · 3 years ago
27:36 I guess there should be a matrix of shape k x v there, instead of a scalar
@drukeri2 · 3 years ago
23:30 - The authors fixed this mistake, probably thanks to you Yannic (:
@elipersky1591 · 3 years ago
I know you said to ignore it, but what does the intra-depth hyperparameter actually mean?
@user-ks5tx6wk5j · 3 years ago
I think intra-depth is the intermediate dimension in the query, perhaps understood as the weight of each key relative to the value as it acts on the information of the context element.
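As far as I can tell from the paper (treat the shapes below as assumptions), the intra-depth |u| gives the keys and values an extra dimension, and the lambdas are formed by summing over both the context positions and that extra dimension, so the resulting lambda is still a k x v matrix:

```python
import torch

m, k_dim, v_dim, u = 49, 16, 64, 4                 # context size, key/value depths, intra-depth (assumed)
keys   = torch.randn(m, u, k_dim).softmax(dim=0)   # keys normalized over context positions
values = torch.randn(m, u, v_dim)

# content lambda: both m and u are summed out, leaving a (k, v) matrix as in the u=1 case
lam_c = torch.einsum('muk,muv->kv', keys, values)
print(lam_c.shape)  # torch.Size([16, 64])
```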
@gabby.suwichaya · 3 years ago
Hi, I am quite new to the basic transformer, and there seem to be many new transformers recently. Could anyone please share the fundamental video on attention? I am interested to see where it begins...
@adizhol · 3 years ago
So the lambda function is basically like the Embedding matrix E for text sequences? They learn embeddings of patches/pixels?
@YannicKilcher · 3 years ago
In a way, yes
@granttao7504 · 3 years ago
Sir, you are wrong. Lambda is not the direct result of the matrix multiplication of K and V transpose: for each element you get a k x d (in your notation) matrix from multiplying the transpose of each row of K with each row of V, and by adding the m matrices together you get the lambda.
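For what it's worth, the two descriptions agree: summing the per-position outer products of key and value rows is exactly the matrix product of the (normalized) keys, transposed, with the values. A quick check with assumed toy shapes:

```python
import torch

m, k, v = 5, 4, 3
K = torch.randn(m, k).softmax(dim=0)   # normalized keys, one row per context element
V = torch.randn(m, v)                  # values, one row per context element

lam_outer  = sum(torch.outer(K[i], V[i]) for i in range(m))  # sum of m rank-1 (k x v) matrices
lam_matmul = K.t() @ V                                       # single (k x m) @ (m x v) product

print(torch.allclose(lam_outer, lam_matmul))  # True: the two views coincide
```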
@TheNuttyNetterAlexLamb · 3 years ago
Why do you keep going on about the "double-blind reviewing" thing? In ML right now, double-blind reviewing gives the author an opt-in ability to protect their identity. Moreover, the author is not clearly listed on the page, so a reviewer won't know it by default. Reviewers and readers still have the option to search for the name of the paper and find it if it's on arXiv, where authors have the right to post it. I think this system is a good compromise, since it gives a pretty good amount of anonymity, especially for those who want the anonymity, and it doesn't restrict the authors much.
@pierregutierrez4332 · 3 years ago
I may be mistaken, but the speed-accuracy chart they show is unclear. Are we talking about inference speed (I'm really unsure, the paper is not clear)? If so, how come baseline ResNet and ResNet+SE seem better than EfficientNet (all appear on the top left of the curve, which contradicts the EffNet paper)? Could it be because of the use of a bag of tricks during training (e.g. data augmentation)? In that case, the performance cannot be claimed to come from the architecture used. It also seems to contradict Table 6, where we see the amount of FLOPs is only marginally reduced for comparable accuracy. As a result, the 4.5x speedup they claim seems a bit misleading.
@YannicKilcher · 3 years ago
True, I don't know either
@yaaank6725 · 3 years ago
A bunch of ideas jumped into my head for applying this in vision, since the dimension problem can be solved. Time to hoard up papers, bois!
@CristianGarcia · 3 years ago
This architecture has the downside of a fixed sequence length due to the learned positional embeddings; being independent of the sequence length is a nice property of MHA in the original Transformer.
@Neural_Causality · 3 years ago
Just in case: MHA = Multi-Headed Attention
@Atlantis357 · 3 years ago
As long as the pictures are the same resolution, that shouldn't be a big problem, right?
@CristianGarcia · 3 years ago
@@Atlantis357 Yeah, for images it shouldn't matter much since you can always resize. But for text you don't have that luxury.
@PrasenjeetRoyMPAI · 3 years ago
@@CristianGarcia We can do padding in the case of text. Please correct me if I am wrong.
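A minimal illustration of why the positional path ties the layer to a fixed length (names and shapes here are assumptions, not the paper's exact parameterization): the position lambdas use a learned embedding tensor indexed by (query position, context position), so its size is fixed at training time; longer inputs have no matching entries and must be resized (images) or padded/truncated (text).

```python
import torch
import torch.nn as nn

n, k = 64, 16                               # trained sequence length and key depth (assumed)
E = nn.Parameter(torch.randn(n, n, k))      # one k-dim embedding per (query, context) position pair

x_ok     = torch.randn(1, 64, 32)           # matches E, fine
x_longer = torch.randn(1, 128, 32)          # 128 positions have no entries in E, so the shapes no longer line up
```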
@austinmw89 · 3 years ago
Hey, what about the difference between local attention and deformable convolutions (DCN)?
@SystemsMedicine · 3 years ago
Hi Yannic. Loading a large RAM with a 40Kx40K matrix is certainly possible, even on modern pseudo-home PCs. I know it sounds ridiculous, but consider... a 4-stick DDR4 RAM kit of 512 gigabytes is about US$3500. A Supermicro motherboard may have 16 DDR4 memory slots. This means that for about US$14000, one may have 2 terabytes of RAM in a "home" computer. This is certainly enough to contain the matrices in question, and fast enough to do the operations. Some very special programming may be required, but if the task were very easy, it would have been done a long time ago. (Or wait 2 or 3 years, and the price will drop in half.) If you don't want to pay so much for memory, you might page the matrices in and out of disk memory. A 16 TB Red drive is about US$350. If you RAID 4 of these together, you will have something like 64 terabytes of disk space to work with. Your giant multiplies will take time, but so what? If you have access to a modern supercomputer, you may just be able to write something like Matlab code to do the job, though I don't know if Matlab is set up for large matrices, or if you will have to get someone to recompile the Matlab kernel (a tough thing). In any event, it might be worth attempting to analyze some images with this direct, very brute-force method. Sometimes brute force is fun. And the sheer exercise of it might be valuable for the right student or researcher. Btw, a Fourier transform, or a Laplace transform, or some such thing, applied to critical parts of the algorithm may make the whole thing a LOT more tractable; although you would have to work for a while to show this. Cheers. Also: cool channel, dude.
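A rough back-of-the-envelope check on those numbers (illustrative values only, not from the video or the paper): a single dense 40K x 40K float32 matrix is only a few gigabytes; it is the batch size, heads, layers and the activations kept for backprop that push training toward the terabyte range discussed above.

```python
# Back-of-the-envelope memory estimate for dense n x n attention matrices
n = 40_000
bytes_per_float = 4                         # float32
one_matrix_gb = n * n * bytes_per_float / 1e9
print(f"one matrix: {one_matrix_gb:.1f} GB")            # ~6.4 GB

batch, heads, layers = 8, 8, 12             # illustrative multipliers, not exact
backprop_factor = 2                         # activations kept for the backward pass (rough)
total_tb = one_matrix_gb * batch * heads * layers * backprop_factor / 1e3
print(f"rough training footprint: {total_tb:.1f} TB")   # ~9.8 TB
```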
@AllTheFishAreDead · 3 years ago
Seems a lot like it's replacing a pooling layer with a learnable function
@lone0017 · 3 years ago
Hey, I would like to know what app you are using to annotate the PDF. Thank you.
@deiviuds · 3 years ago
Which software do you use to annotate these PDFs?
@YannicKilcher · 3 years ago
OneNote
@moustafa_shomer · 3 years ago
Did you make a video about EfficientNets? I can't seem to find it
@YannicKilcher · 3 years ago
Not yet
@gangmuklim9308 · 3 years ago
Hello, Yannic. Thank you for your great review videos! I am currently doing a Master's degree and want to apply for a PhD at ETH Zurich, once everything goes fine. However, I have almost no information about what PhD life is like at ETH. Would you mind if I asked some questions about the PhD programme at ETH via email? (Actually, I couldn't find your email address on the web.) Thanks.
@YannicKilcher · 3 years ago
You can DM me on Twitter / LinkedIn
@cycman98 · 3 years ago
What is this program that you're using to annotate PDFs?
@YannicKilcher · 3 years ago
OneNote
@cycman98 · 3 years ago
@@YannicKilcher thx
@sohaibattaiki9579 · 3 years ago
Hi, thank you for the great videos. I have a question: what is the name of the software you are using to read and annotate the paper?
@tresuvesdobles · 3 years ago
It is OneNote, most likely
@Neural_Causality · 3 years ago
He did a video on the tools (including all the software) he uses, and another on how he reads papers; you can find those on his channel.
@sohaibattaiki9579 · 3 years ago
Thank you for your responses. @daniel, do you know what the name of the video is, please?
@Neural_Causality · 3 years ago
@@sohaibattaiki9579 ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-H3Bhlan0mE0.html&ab_channel=YannicKilcher ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-Uumd2zOOz60.html&ab_channel=YannicKilcher
@herp_derpingson · 3 years ago
33:40 If the keys are "fixed", doesn't that make it equivalent to a convolutional kernel? Quite a large number of papers try to get rid of the quadratic attention, but I strongly believe that there is some no-free-lunch effect going on. You actually need the bandwidth of a quadratic attention so that enough information can be backpropagated.
@veedrac · 3 years ago
Performers claim to be a provably accurate approximation, so idk about that.
@charlesfoster6326 · 3 years ago
Fair, but if the pattern you're looking for is relatively low frequency (i.e. big), bandwidth may not be a problem, since you already need to throw out high-frequency details.
@YannicKilcher · 3 years ago
Yeah, that's a reasonable claim: maybe not the convolution we know, but kind of.
@BrainSlugs83 · 2 years ago
RE: "You can't just wait longer and have more memory..." Uhhh... this is exactly what swap files were invented for. The issue is that the software wants the entire transformer network in memory all at once, which is just a silly limitation of the software.
@MrJaggy123 · 3 years ago
TL;DR: I didn't completely hate this paper because I didn't completely understand it 😉
@MrJaggy123 · 3 years ago
Before assuming I'm throwing shade, go to 01:48 in the video 😛
@VaclavKosar · 3 years ago
Here are the simplified equations from the paper: vaclavkosar.com/ml/Lamda-Networks-Transform-Self-Attention
@Lazauya · 2 years ago
Why do so many papers not have these nice diagrams for how all their variables interact with each other, like you outlined here? It feels almost intentionally obtuse.
@JurekOK · 3 years ago
nice :-)
@rgarthwood3881 · 3 years ago
Thanks for the videos. Yannic, can you play this back to yourself on high volume? You'll notice that your swallowing is **extremely** loud. This is because your mic is likely right next to your mouth - maybe back up a foot or two? Again, love your posts, but they're really hard to listen to sometimes.
@dropoutjeep4193 · 3 years ago
Can I join your Discord?
@YannicKilcher · 3 years ago
Sure, link in the description
@yoloswaggins2161 · 3 years ago
No offense to the first plot, but who cares about latency in training?