
Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention (AI Paper Explained) 

Yannic Kilcher
261K subscribers
17K views

Published: 28 Aug 2024

Comments: 70
@andreassyren329 · 3 years ago
If nothing else, the contribution to model naming is a clear increment to SOTA.
@jonatan01i · 3 years ago
Nyströmer clearly is.
@andreassyren329 · 3 years ago
@@jonatan01i I will agree with that.
@VikasSingh-jv7fn · 3 years ago
Hello Yannic, your comment about the order of operations is correct. It is one of those things where you set out to check how poorly it performs and find out that it can work empirically (at least in limited settings). The lemma is not practically useful; it merely checks that, if/when everything is idealized, the procedure does not lead to nonsensical conclusions. The choice of F early on in the paper was to avoid a conflict with D (D and d were both used) and E (the ones matrix).
@tchlux · 3 years ago
What about the A^+ comment, was that actually a typo in the paper? ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-m-zrcmRd7E4.html
@zhanpengzeng8592 · 3 years ago
@@tchlux Yes, that is a typo. We somehow left out the pseudo inverse sign.
@xiongyunyang9643 · 3 years ago
@@tchlux This is a typo. We will update it. Thanks for the catch.
@JoelHough · 3 years ago
I have seen this exact type of lemma in many discussions about approximations. It did not seem out of place to me. It's nice to know that in the limit your approximation will agree with the ground truth which is certainly not the case in all approximation methods.
@xiongyunyang9643 · 3 years ago
Thanks for making this great video. Nice catch for the typo. We will update the draft soon.
@herp_derpingson · 3 years ago
0:42 Nyan-storm-former!
3:30 Time for my weekly Transformer explanation :)
27:00 That was a really sweet and easy-to-understand explanation.
35:00 I wonder if we can have a DNN just predict a landmark tensor.
@mdmishfaqahmed8356 · 3 years ago
The pronunciation of those author names was clutch :D
@RobEnglebright · 3 years ago
top work pronouncing the authors
@lucidraisin · 3 years ago
Lol, unexpectedly mentioned 😅 thanks for the video!
@jamiekawabata7101 · 3 years ago
If I lift box 1 onto shelf 1 and box 1 onto shelf 2 and box 2 onto shelf 1, then I can predict the effort in lifting box 2 onto shelf 2. Great analogy, thank you.
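The box/shelf analogy is essentially the Nyström method: sample a few "landmark" rows and columns of a large similarity matrix and reconstruct everything else from them. Below is a minimal sketch of that idea (my own illustration, not the paper's code; the Gaussian toy data, matrix sizes, and uniform landmark choice are assumptions for the demo).

```python
# Nyström reconstruction of a large similarity matrix from a few landmarks.
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 256, 16, 32                        # sequence length, feature dim, number of landmarks
X = rng.normal(size=(n, d))

S = np.exp(X @ X.T / np.sqrt(d))             # full (unnormalized) similarity matrix, n x n
idx = rng.choice(n, size=m, replace=False)   # landmark indices: the "boxes/shelves" we did try

C = S[:, idx]                                # all rows vs. landmark columns, n x m
W = S[np.ix_(idx, idx)]                      # landmark rows vs. landmark columns, m x m

S_hat = C @ np.linalg.pinv(W) @ C.T          # predict the untried combinations

rel_err = np.linalg.norm(S - S_hat) / np.linalg.norm(S)
print(f"relative reconstruction error: {rel_err:.3f}")
```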
@mathematicalninja2756 · 3 years ago
What I hear: nice transformer
@G12GilbertProduction · 3 years ago
Sweet Thursday with another sweet kind of paper. Buon appetito, Yannic! :)
@Anirudh-cf3oc · 2 years ago
Very nice explanation sir, thank you!!
@user-qu2oz2ut2h · 3 years ago
that's a nice trömformer
@poesaste · 3 years ago
Incredible breakdown, subbed!
@mizupof · 3 years ago
By the power of Yannic, I rename you!
@benjaminho351 · 3 years ago
Nobody:
Yannic: uöu 1:07
@NilabhraRoyChowdhury · 3 years ago
I bet you wish you could: from previous_videos import SelfAttention every time you make a video related to transformers
@scarcommander5517 · 3 years ago
We like this transformer!
@MausamJain · 3 years ago
How did you import the PDF into OneNote with such good quality? The printout option generally inserts very poor quality images of the pages.
@YannicKilcher · 3 years ago
It's definitely poor for me, too, it's right on the edge of being useful.
@chaitanyaparmar888 · 2 years ago
Love this video!
@osiris42 · 3 years ago
Does it even matter that the softmax doesn't commute, if the softmax is just a heuristic / hack in the first place? Or is there something inherently special about softmax in the transformer architecture?
@tchlux · 3 years ago
I don't know if I'd call it "special", but I like to think of it geometrically. When you use a softmax, you make it so that the layer immediately after the softmax only has to model a "surface" that lives on the inner wedge of the unit cube (points with 1-norm equal to 1).
@mathematicalninja2756 · 3 years ago
@@tchlux that is a good perspective
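To see the geometric point above concretely, here is a tiny sketch (my own illustration, assuming nothing beyond NumPy): every softmax row is non-negative and sums to one, so the attention weights lie on the probability simplex, the "inner wedge" of the unit cube.

```python
# Softmax rows land on the probability simplex: non-negative, 1-norm exactly 1.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

scores = np.random.randn(4, 6)                # arbitrary attention logits
attn = softmax(scores, axis=-1)

print(np.all(attn >= 0))                      # True: non-negative entries
print(np.allclose(attn.sum(axis=-1), 1.0))    # True: each row sums to 1
```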
@otaviodzb1 · 3 years ago
One thing that I still couldn't understand is how backprop works in a transformer. Does someone have a good reference or video that explains it?
@pg1337ful · 3 years ago
seems like you have fundamental gaps in ML.
@YannicKilcher · 3 years ago
It works like in any other neural network, by applying the chain rule to all involved operations
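As a concrete illustration of that answer, here is a minimal sketch using PyTorch autograd (my own example, not code from the video or the paper; the toy projection matrices Wq, Wk, Wv are assumptions): the chain rule is applied automatically through the softmax, scaling, and matrix multiplications of one self-attention computation.

```python
# Backprop through a single self-attention computation via autograd.
import torch

torch.manual_seed(0)
n, d = 5, 8                                   # sequence length, model dimension
x = torch.randn(n, d)                         # toy token embeddings

Wq = torch.randn(d, d, requires_grad=True)    # made-up learnable projections
Wk = torch.randn(d, d, requires_grad=True)
Wv = torch.randn(d, d, requires_grad=True)

q, k, v = x @ Wq, x @ Wk, x @ Wv
attn = torch.softmax(q @ k.T / d ** 0.5, dim=-1)
out = attn @ v

loss = out.sum()                              # any scalar loss works for the demo
loss.backward()                               # chain rule through every operation above

print(Wq.grad.shape, Wk.grad.shape, Wv.grad.shape)   # gradients for all parameters
```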
@KennedDansker · 3 years ago
It is F because it is forward attention, right? (Then it would fit with B being backward.) It is not entirely right (A contains part of the forward attention), but I think that is the intention.
@JamesAwokeKnowing · 3 years ago
The name was designed to sound like 'the nice transformer'. So leave the name as is.
@JamesAwokeKnowing · 3 years ago
So is that like a softmax over time, where it's kind of valid because over many iterations it's pulling random samples? Well, I hope a better way is found.
@BorrWick · 3 years ago
Didn't this come out like yesterday??
@ingusmant · 3 years ago
And?
@BorrWick · 3 years ago
@@ingusmant Just amazed by the speed Yannic can read, understand and produce these videos :o
@YannicKilcher · 3 years ago
You're right, it's already old now... ;)
@jonatan01i · 3 years ago
I struggle to believe that it actually is named Nyströmformer. I'll call it Nyströmer, as suggested, and as it should be.
@ZedaZ80 · 3 years ago
I have no idea what most of this means, but the lemma was funny
@kicckicc · 3 years ago
Just FYI, I tried to implement this the day before yesterday, but got NaN. I checked the code and realized that formula (14) isn't accurate, and also that Z_0 = A_S / (||A_S||_1 ||A_S||_∞) should be Z_0 = A_S^T / (||A_S||_1 ||A_S||_∞).
@xiongyunyang9643 · 3 years ago
Do you mean the NaN is from your own implementation or from ours? The accuracy of the pseudoinverse approximation using formula (14) depends on the number of iterations. Z_0 is A_S^T / (||A_S||_1 ||A_S||_∞). We will fix the typo in our update.
@kicckicc · 3 years ago
@@xiongyunyang9643 Thanks for the reply. After I used the corrected formula (14) and the corrected Z_0, the NaN is gone. Just FYI, formula (16) is also inaccurate, but that one is easy to notice.
@xiongyunyang9643 · 3 years ago
@@kicckicc Cool. Formula (16), which is similar to local average pooling, is used to compute the landmarks efficiently.
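For readers following this exchange, here is a minimal sketch of an iterative pseudoinverse with the corrected initialization Z_0 = A_S^T / (||A_S||_1 ||A_S||_∞) discussed above. It is my own illustration, not the authors' code: it uses the classic Newton–Schulz update as a simpler stand-in for the paper's higher-order formula (14), and the well-conditioned test matrix is an arbitrary assumption.

```python
# Iterative (Newton-Schulz) pseudoinverse with Z0 = A^T / (||A||_1 * ||A||_inf).
import numpy as np

def iterative_pinv(A, n_iter=20):
    a1 = np.abs(A).sum(axis=0).max()          # ||A||_1  : maximum column sum
    ainf = np.abs(A).sum(axis=1).max()        # ||A||_inf: maximum row sum
    Z = A.T / (a1 * ainf)                     # corrected initialization (note the transpose)
    I = np.eye(A.shape[0])
    for _ in range(n_iter):
        Z = Z @ (2 * I - A @ Z)               # Newton-Schulz update step
    return Z

rng = np.random.default_rng(0)
B = rng.normal(size=(32, 32))
A = B @ B.T + 32 * np.eye(32)                 # well-conditioned toy landmark matrix

Z = iterative_pinv(A)
print(np.linalg.norm(Z - np.linalg.pinv(A)))  # ~0 after enough iterations
```

With the untransposed initialization Z_0 = A_S / (||A_S||_1 ||A_S||_∞), the iteration is not guaranteed to converge for general matrices, which is consistent with the NaN reported above.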
@muhammadsaadmansoor7777 · 3 years ago
I was not expecting this until a month later. But where do the keys, queries and values come from?
@IRWBRW964 · 3 years ago
They are learned.
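To make "they are learned" concrete, here is a tiny sketch (my own illustration, not from the video): queries, keys and values are linear projections of the same input sequence, and the projection weights are ordinary trainable parameters updated by backprop.

```python
# Q, K, V come from learned linear projections of the input embeddings.
import torch
import torch.nn as nn

d_model, seq_len = 64, 10
to_q = nn.Linear(d_model, d_model)    # W_Q, learned during training
to_k = nn.Linear(d_model, d_model)    # W_K
to_v = nn.Linear(d_model, d_model)    # W_V

x = torch.randn(1, seq_len, d_model)  # token embeddings (batch, sequence, dim)
q, k, v = to_q(x), to_k(x), to_v(x)   # projections fed into the attention computation
print(q.shape, k.shape, v.shape)
```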
@pratik245 · 2 years ago
Have you ever heard the name Michelle Srivastav? More probably you would hear Peter Chakraborty. If you can tell me the reason, you would know a lot about caste- and region-based targeting in India.
@pratik245 · 2 years ago
So, nobody hates India when they are born, but as you keep growing you see these divisions between people, majoritarianism, government repression, targeting of the intellectual class, poverty, corruption, and then you start seeing trends in these concepts, all in the name of highly preached American democracy and capitalism... But surely everything is a joke, even misery.. Right, guys?
@Xrey56Cheyz · 3 years ago
To be honest, I expected the Performer to be the ImageNet moment for transformers, but it seems there is still a long way to go and random Fourier features are not the best way to do the thing. Somewhat sad cause Performer's idea looked so cool and well grounded :(
@redjammie8342 · 3 years ago
Big leaps come through simple ideas like ReLU, convolution, drop-out, residual connections, self-attention... The moment an idea becomes too convoluted, it is less likely to be game changing.
@charlesfoster6326 · 3 years ago
What are you waiting for? If anything, the transformer revolution seems like it's come with even more force and speed than ImageNet.
@ahmadmoussa3771 · 3 years ago
*The NICEtrömer*
@NextFuckingLevel · 3 years ago
Indeed
@visionscaper · 3 years ago
Hi there!
@YannicKilcher · 3 years ago
hi!
@weizhu2230 · 3 years ago
OK, I vote down on this work; I think "Asymmetric Non-local Neural Networks for Semantic Segmentation" is a better one.
@lennartvandergoten6592 · 3 years ago
Greetings to my old ETH buddy Yannic, please pass my regards on to Jonas :-)
@yaaank6725 · 3 years ago
In the last Twitter chart, it's quite surprising that Performer has the worst performance among the efficient transformers. Is this also verified on other tasks?
@yaaank6725 · 3 years ago
Or by other people, maybe...
@xiongyunyang9643 · 3 years ago
We have released the scores on individual LRA tasks. It will be interesting to see how Performer works for other tasks beyond LRA tasks.
@Ronnypetson · 3 years ago
Noice
@CandidDate · 3 years ago
I'd bet a million dollars that AGI, when discovered, uses frequencies of waves rather than any matrices.
@kimchi_taco · 3 years ago
Mathematically ugly, but somehow it works well. I don't feel good that both Nyströmformer and Performer rely on random sampling.
@xiongyunyang9643 · 3 years ago
No, Nyströmformer does not rely on random sampling.