
LSTM is dead. Long Live Transformers! 

Seattle Applied Deep Learning
11K subscribers
527K views

Leo Dirac (@leopd) talks about how LSTM models for Natural Language Processing (NLP) have been practically replaced by transformer-based models. He gives basic background on NLP and a brief history of supervised learning techniques on documents, from bag-of-words through vanilla RNNs and LSTMs. Then there's a technical deep dive into how transformers work, with multi-headed self-attention and positional encoding. Includes sample code for applying these ideas to real-world projects.

Science

Published: 25 Jul 2024

Comments: 296
@vamseesriharsha2312 3 years ago
Good to see Adam Driver working on transformers 😁
@richardosuala9739 4 years ago
Thank you for this concise and well-rounded talk! The pseudocode example was awesome!
@FernandoWittmann 4 years ago
That's one of the best deep-learning-related presentations I've seen in a while! It not only introduced transformers but also gave an overview of other NLP strategies, activation functions, and best practices when using optimizers. Thank you!!
@ahmadmoussa3771 4 years ago
I second this! The talk was such a joy to listen to
@aashnavaid6918 4 years ago
in about 30 minutes!!!!
@jbnunn 1 year ago
Agree -- I've watched half a dozen videos on transformers in the past 2 days, I wish I'd started with Leo's.
@8chronos 3 years ago
The best presentation/explanation of the topic I have seen so far. Thanks a lot :)
@_RMSG_ 1 year ago
I love this presentation. It doesn't assume that the audience knows far more than is necessary, goes through explanations of the relevant parts of Transformers, notes shortcomings, etc. Best slideshow I've seen this year, and it's from over 3 years ago.
@briancase6180 2 years ago
Thanks for this! It gets to the heart of the matter quickly and in an easy to grasp way. Excellent.
@BartoszBielecki 1 year ago
The world deserves more lectures like this one. I don't need examples of how to tune a U-Net, but rather an overview of this huge research space and the ideas underneath each group.
@monikathornton8790 4 years ago
Great talk. It's always thrilling to see someone who actually knows what they're supposedly presenting.
@lmao4982 1 year ago
This is like 90% of what I remember from my NLP course with all the uncertainty cleared up, thanks!
@JagdeepSandhuSJC 3 years ago
Leo is an excellent professor. He explains difficult concepts in an easy-to-understand way.
@ismaila3347 4 years ago
This finally made it clear to me why RNNs were introduced! Thanks for sharing.
@evennot 4 years ago
I was trying to use a similar super-low-frequency sine trick for audio sample classification (to give the network more clues about attack/sustain/release positioning). I never knew that one could use several of those in different phases. Such a simple and beautiful trick. The presentation is awesome.
@ehza 2 years ago
This is beautiful. Clear and concise!
@Johnathanaa7 4 years ago
Best transformer presentation I’ve seen hands down. Nice job!
@jaypark7417 4 years ago
Thank you for sharing it. Really helpful!!
@ooio78 4 years ago
Wonderful and educational, value to those who need it!
@sarab9644 3 years ago
Excellent presentation! Perfect!
@literallyjustsomegirl 4 years ago
Such a useful talk! TYSM 🤗
@ajitkirpekar4251 3 years ago
It's hard to overstate just how much this topic has(is) transformed the industry. As others have said, understanding it is not easy, because there are a bunch of components that don't seem to align with one another, and overall the architecture is such a departure from the most traditional things you are taught. I myself have wrangled with it for a while and it's still difficult to fully grasp. Like any hard problem, you have to bang your head against it for a while before it clicks.
@JorgetePanete 2 years ago
"has(is)"??
@asnaeb2 4 years ago
More vids please; this was really informative about what the actual SOTA is.
@DavidWhite679 4 years ago
This helped me a ton to understand the basics. Thanks!
@amortalbeing 3 years ago
This was fantastic. Really well presented.
@Scranny 4 years ago
12:56 The review of the pseudocode of the attention mechanism was what finally helped me understand it (specifically the meaning of the Q, K, V vectors), which is what other videos were lacking. In the second outer for loop, I still don't fully understand why it loops over the length of the input sequence. The output can be of a different length, no? Maybe this is an error. Also, I think he didn't mention the masking of the remaining output at each step so the model doesn't "cheat".
@Splish_Splash 1 year ago
For every word we compute its query, key, and value vectors, so we need to loop through the sequence.
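For reference, here is a rough NumPy sketch of the loop being discussed; it is a reconstruction of the idea rather than the slide's exact pseudocode, and names like W_q, W_k, W_v are illustrative:

```python
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """Single-head self-attention over X of shape (seq_len, d_model)."""
    d_k = W_k.shape[1]
    Q, K, V = X @ W_q, X @ W_k, X @ W_v          # one query/key/value vector per input position
    out = np.zeros((len(X), V.shape[1]))
    for i in range(len(X)):                      # one output vector per input position
        scores = np.array([Q[i] @ K[j] / np.sqrt(d_k) for j in range(len(X))])
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                 # softmax over all positions j
        out[i] = weights @ V                     # weighted average of the value vectors
    return out
```

Note that the output has the same length as the input: self-attention produces one updated vector per input position. Variable-length generation and the causal "no cheating" mask belong to the decoder, which this sketch leaves out.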
@dhariri 3 years ago
Excellent talk. Thank you @leopd!
@shivapriyakatta4885 4 years ago
One of the best talks on deep learning! Thank you.
@dgabri3le 3 years ago
Thanks! Really good compare-and-contrast.
@driziiD 1 year ago
Very impressive presentation. Thank you.
@MrDudugeda2 3 years ago
This is easily the best NLP talk I've heard this year.
@jung-akim9157 4 years ago
This is one of the clearest and most informative presentations about NLP models and their comparison. Thank you so much.
@bgundogdu16 4 years ago
Great presentation!
@a_sun5941 2 years ago
Great Presentation!
@dalissonfigueiredo 4 years ago
What a great explanation, thank you.
@georgejo7905 4 years ago
Interesting; this looks a lot like my signals class: how to implement various filters on a DSP.
@matthewarnold3922 4 years ago
Excellent talk. Kudos!
@leromerom 4 years ago
Clear, precise, fluid. Thank you!
@SanataniAryavrat 4 years ago
Wow... that was a quick summary of all the NN research of the past many decades...
@thebanjak2433 4 years ago
Well done and thank you
@timharris72 4 years ago
This is hands down the best presentation on LSTMs and Transformers I have ever seen. The speaker is really good. He knows his stuff.
4 years ago
Amazing presentation
@zeeshanashraf4502 1 year ago
Great presentation.
@sainissunil 2 years ago
This talk is awesome!
@thomaskwok8389 4 years ago
Clear and concise👍
@BcomingHIM 4 years ago
All I want is his level of humility and knowledge.
@pazmiki77 4 years ago
Don't just want it, make it happen then. You could literally do this.
@pi5549 3 years ago
Find the humility to get your head down and acquire the knowledge. Let the universe do the rest.
@Achrononmaster 4 years ago
You folks need to look into asymptotics and Padé approximant methods, or, for functions of many variables as ANNs are, the generalized Canterbury approximants. There is not yet a rigorous development in information-theoretic terms, but Padé summations (essentially continued-fraction representations) are known to yield rapid convergence to the correct limits for divergent Taylor series in non-converging regions of the complex plane. What this boils down to is that you only need a fairly small number of iterations to get very accurate results if you only require approximations. To my knowledge this sort of method is not being used in deep learning, but it has been used by physicists in perturbation theory. I think you will find it extremely powerful in deep learning. Padé (or Canterbury) summation methods, when generalized, are a way of extracting information from incomplete data. So if you use a neural net to get the first few approximants, and assume they are modelling an analytically continued function, then you have a series (the node activation summation) you can Padé-sum to extract more information than you'd be able to otherwise.
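For the curious, a Padé approximant can be built from Taylor coefficients with scipy; this toy example is purely illustrative and not a deep learning recipe:

```python
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of exp(x): 1, 1, 1/2!, 1/3!, ...
taylor = [1.0, 1.0, 1.0 / 2, 1.0 / 6, 1.0 / 24, 1.0 / 120]
p, q = pade(taylor, 2)            # rational approximant with a degree-2 denominator
x = 2.0
print(np.exp(x))                  # 7.389...
print(p(x) / q(x))                # the Padé approximant, close to exp(x) from only 6 coefficients
```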
@MoltarTheGreat 3 years ago
Amazing video, I feel like I actually have a more concrete grasp on how transformers work now. The only thing I didn't understand was the Positional Encoding but that's because I'm unfamiliar with signal processing.
@tastyw0rm 1 year ago
This was more than meets the eye
@ChrisHalden007 1 year ago
Great video. Thanks
@ThingEngineer 3 years ago
This is by far the best video. Ever.
@FrancescoCapuano-ll1md 1 year ago
This is outstanding!
@cliffrosen5180 1 year ago
Wonderfully clear and precise presentation. One thing that tripped me up, though, is the formula at 4 minutes in: H_{i+1} = A(H_i, x_i). It seems this should rather be H_{i+1} = A(H_i, x_{i+1}), which might be more intuitively written as H_i = A(H_{i-1}, x_i).
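A tiny loop makes the indexing concrete (a sketch, with A standing in for whatever the recurrent cell computes):

```python
def run_rnn(A, xs, h0):
    """Unroll a recurrent cell A over inputs xs[0..T-1]: h_i = A(h_{i-1}, x_i)."""
    h = h0
    states = []
    for x in xs:              # the i-th input produces the i-th hidden state
        h = A(h, x)
        states.append(h)
    return states

# e.g. run_rnn(lambda h, x: 0.5 * h + x, [1.0, 2.0, 3.0], h0=0.0) -> [1.0, 2.5, 4.25]
```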
@ProfessionalTycoons 4 years ago
RIP LSTM 2019, she/he/it/they would be remembered by....
@mohammaduzair608 4 years ago
Not everyone will get this
@dineshnagumothu5792 4 years ago
Still, LSTM works better with long texts. It has its own use cases.
@mateuszanuszewski69 4 years ago
@@dineshnagumothu5792 you obviously didn't get it. it is "DEAD", lol. RIP LSTM.
@gauravkantrod1205 3 years ago
Amazing talk. It would be of great help if you could post a link to the documents.
@aj-kl7de 4 years ago
Great stuff.
@swe_fun 1 year ago
This was amazing.
@favlcas 4 years ago
Great presentation. Thank you!
@anewmanvs 3 years ago
Very good presentation
@snehotoshbanerjee1938 3 years ago
Simply Wow!
@sanjivgautam9063 4 years ago
For anyone feeling overwhelmed: that is completely reasonable, as this video is just a 28-minute recap for experienced machine learning practitioners, and a lot of them are just spamming the top comments with "This is by far the best video", "Everything is clear with this single video", and so on.
@adamgm84 4 years ago
Sounds like it is my lucky day then, for me to jump from noob to semi-non-noob by gathering thinking patterns from more-advanced individuals. I will fill in the swiss cheese holes of crystallized intelligence later by extrapolating out from my current fluid intelligence level... or something like that. Sorry I'll see myself out.
@svily0 4 years ago
I was about to make a remark about the presenter speaking like a machine gun at the start. I can't even follow such a pace even in my native language, on a lazy Sunday afternoon with a drink in my hand. Who cares what you say if no one manages to understand it??? Easy, easy boy... slow down, no one cares how fast you can speak, what matters is what you are able to explain. (so the others understand it).
@user-zw5rp7xx4q 3 years ago
@@svily0 > "I can't even follow such a pace even in my native language"
Maybe that's the issue?
@svily0 3 years ago
@@user-zw5rp7xx4q Well, it could well be, but on the other hand I have a master's degree. It can't be just that. ;)
@Nathan0A 3 years ago
This is by far the best comment, Everything is clear after reading this single comment! Thank you all
@oleksiinag3150 3 years ago
He is incredible. One of the best presenters.
@thusi87 3 years ago
Great summary! I wonder if you have a collection of talks you've given on similar topics?
@randomcandy1000 1 year ago
This is awesome!!! Thank you.
@cafeinomano_ 1 year ago
Best Transformer explanation ever.
@BiranchiNarayanNayak 4 years ago
Very well explained... Love it.
@ramibishara5887 4 years ago
Where can I find the presentation doc of this talk, amigos? Thanks
@maciej2320 6 months ago
Four years ago! Shocking.
@BlockDesignz 4 years ago
This is brilliant.
@Lumcoin 3 years ago
(Sorry for the lack of technical terms.) I did not completely get how transformers work with regard to positional information: isn't X_in the information from the previous hidden layer? That is not enough for the network, because the input embeddings lack any temporal/positional information, right? But why not just add one new linear temporal value to the embeddings instead of many sine waves at different scales?
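For what it's worth, the standard sinusoidal positional encoding from "Attention Is All You Need" looks like the sketch below (a reconstruction, not the talk's code). The usual argument for many sine waves over one linear position value is that a single scalar grows without bound with sequence length, while the sin/cos pairs at different wavelengths stay bounded, give both coarse and fine position information, and make a fixed relative offset expressible as a linear function of the encoding.

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encoding, shape (seq_len, d_model); assumes d_model is even."""
    pos = np.arange(seq_len)[:, None]              # 0 .. seq_len-1
    i = np.arange(d_model // 2)[None, :]           # index of each sin/cos pair
    freq = 1.0 / (10000 ** (2 * i / d_model))      # geometric range of frequencies
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(pos * freq)               # even dimensions: sines
    pe[:, 1::2] = np.cos(pos * freq)               # odd dimensions: cosines
    return pe

# x_in = token_embeddings + positional_encoding(seq_len, d_model)
```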
@davr9724 3 years ago
Amazing!!!
@rubenstefanus2402 4 years ago
Great! Thanks
@kjpcs123 4 years ago
A great introduction to transformers.
@Davourflave 3 years ago
Very nice recap of transformers and what sets them apart from RNNs! Just one little remark: you are not doing things in N^2 for the transformer, since you fixed N to be at most some maximum sequence length. You can now set this N to be a much bigger number, as GPUs have been highly optimized to do the corresponding multiplications. However, for long sequence lengths, the quadratic nature of an all-to-all comparison is going to be an issue nonetheless.
@srikantachaitanya6561 3 years ago
Thank you...
@rp88imxoimxo27 3 years ago
Nice video, but I was forced to watch at 2x speed while trying not to fall asleep.
@mikiallen7733 4 years ago
Does multi-headed attention + positional encoding work equally well or better than a plain vanilla LSTM on numeric input (float or integer) vectors/tensors? Your input is highly appreciated.
@anoop5611 3 years ago
Not an expert here, but the way attention works is closely tied to the way nearby words are relevant to each other: for example, a pronoun and its relevant noun. Multi-headed attention would identify more such abstract relationships between words in a window. So if the numeric input sequence has a set of consistent relationships among all its members, then attention would help embed more relational info in the input data, so that processing it becomes easier when honouring this relational info.
@user-iw7ku6ml7j 1 year ago
Awesome!
@g3kc 2 years ago
Great talk!
@lukebitton3694 4 years ago
I've always wondered how standard ReLUs can provide non-trivial learning if they are essentially linear for positive values. I know that with standard linear activation functions any deep network can be reduced to a single-layer transformation. Is it the discontinuity at zero that stops this being the case for ReLU?
@lucast2212 4 years ago
Exactly. Think of it like this. A matrix-vector multiplication is a linear transformation. That means it rotates and scales its input vector. That is why you can write two of these operations as a single one (A_matrix * B_matrix * C_vec = D_matrix * C_vec), and also why you can add scalar multiplications in between (which is what a linear activation would do, and is just a scaling operation on the vector). But if you only scale some of the entries of the vector (ReLU), that does not work anymore. If you take a pen, rotating and scaling it preserves your pen, but if you want to scale only parts of it, you have to break it.
@lukebitton3694 4 years ago
@@lucast2212 Cheers! good explanation, thanks.
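A quick numerical check of the point above (illustrative only, random matrices standing in for layer weights): two stacked linear layers collapse into one linear layer, but inserting a ReLU between them produces a function no single matrix reproduces.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
B = rng.normal(size=(4, 4))
x = rng.normal(size=4)

two_linear_layers = A @ (B @ x)
one_linear_layer = (A @ B) @ x
print(np.allclose(two_linear_layers, one_linear_layer))   # True: the stack collapses

with_relu = A @ np.maximum(B @ x, 0.0)                     # ReLU zeroes some coordinates
print(np.allclose(with_relu, one_linear_layer))            # False in general: the nonlinearity survives
```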
@FernandoWittmann 4 years ago
I have a question: is it possible to use those SOTA models shared at the very end of the presentation as document embeddings? Something analogous to doc2vec. My intent is to transform documents into vectors that represent them well and would allow me to compare the similarity of different documents.
@LeoDirac 3 years ago
Absolutely yes.
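One common way to do that today is to pool a pretrained transformer into a fixed-size vector per document, e.g. with the sentence-transformers library; this is a generic sketch, not something from the talk, and the model name is just a popular example:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")      # example model; other sentence-embedding models work similarly
docs = ["Transformers have replaced LSTMs for most NLP tasks.",
        "Recurrent networks process tokens one at a time."]
vecs = model.encode(docs)                            # one fixed-size vector per document

a, b = vecs[0], vecs[1]
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(cosine)                                        # higher means more similar documents
```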
@rohitdhankar360 10 months ago
@10:30 - "Attention Is All You Need" - the multi-head attention mechanism
@SuilujChannel 4 years ago
Question regarding 26:27: if I plan on analysing time-series sensor data, should I stick to an LSTM, or is the transformer model a good choice for time-series data?
@isaacgroen3692 4 years ago
I could use an answer to this question as well
@akhileshrai4176 4 years ago
@@isaacgroen3692 Damn I have the same question
@abdulazeez7971 4 years ago
You need to use an LSTM for time series, because in transformers it's all about attention and positional information, which has to be learnt, whereas in time series it's all about trends and patterns, which requires the model to remember a complete sequence of data points.
@SuilujChannel 4 years ago
@@abdulazeez7971 Thanks for the info :)
@Jason-jk1zo 4 years ago
The primary advantages of the transformer are attention and positional encoding, which are quite useful for translation because grammar differences between languages can reorder the input and output words. But time-series sensor data is not reordered (comparing output with input)! An RNN such as an LSTM is a suitable choice for analysing such data.
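If you do go the LSTM route suggested in this thread, a minimal PyTorch skeleton for sequence-to-one prediction on sensor windows might look like this (all sizes are placeholders):

```python
import torch
import torch.nn as nn

class SensorLSTM(nn.Module):
    def __init__(self, n_features=17, hidden=64, n_outputs=1):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_outputs)

    def forward(self, x):                    # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])         # predict from the last time step's hidden state

model = SensorLSTM()
y = model(torch.randn(32, 100, 17))          # 32 windows, 100 time steps, 17 sensors
print(y.shape)                               # torch.Size([32, 1])
```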
@welcomeaioverlords 4 years ago
Well done--thanks!
@riesler3041 3 years ago
Presentation: perfect. Explanation: perfect. Me (every 10 mins): "but that belt tho... ehh, PERFECT!"
@VishalSingh-dl8oy 3 years ago
Thank you!
@bryancc2012 4 years ago
Good video!
@maloukemallouke9735 3 years ago
Thanks so much for the video. Can I ask if anyone knows where I can find a pre-trained model to identify numbers from 0 to 100 in an image? Not handwritten specifically, and they can be at any position in the image. Thanks in advance.
@BoersenCrashKurs 2 years ago
When I want to use transformers for time-series analysis while the dataset includes individual-specific effects, what do I do? In this case, would the only possibility be to match the batch size to the length of each individual's data? Right?
@LeoDirac 2 years ago
No, batch and time will be different tensor dimensions. If your dataset has 17 features, and the length is 100 time steps, then your input tensor might be 32x100x17 with a batch size of 32.
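Concretely, in plain PyTorch the shapes Leo describes look like this; the encoder layer is only there to show that batch and time remain separate dimensions, and the hyperparameters are placeholders:

```python
import torch
import torch.nn as nn

batch_size, time_steps, n_features = 32, 100, 17
x = torch.randn(batch_size, time_steps, n_features)    # batch and time are different tensor dimensions

layer = nn.TransformerEncoderLayer(d_model=n_features, nhead=1, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
print(encoder(x).shape)                                 # torch.Size([32, 100, 17])
```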
@juliawang3131 2 years ago
impressive!
@bruce-livealifewewillremem2663 3 years ago
Dude, can you share your PPT or PDF? Thanks in advance!
@xruan6582 4 years ago
20:00 If I multiply the output by a small scaling factor λ₁ (e.g. 0.01) before feeding it to the activation function, the sigmoid will be sensitive to the difference between, say, 5 and 50. Similarly, if I multiply the sigmoid output by another scaling factor λ₂ (e.g. 100), I can get activated outputs ranging between 0 and 100. Is that a better solution than ReLU, which has no cap at all?
@LeoDirac 4 years ago
The problem with that approach is that in the very middle of the range the sigmoid is almost entirely linear - for input near zero, the output is 0.5 + x/4. And neural networks need nonlinearity in the activation to achieve their expressiveness. Linear algebra tells us that if you have a series of linear layers they can always and exactly be compressed down to a single linear layer, which we know isn't a very powerful neural net.
@xruan6582 4 years ago
@@LeoDirac ReLU is linear from 0 to ∞
@LeoDirac 3 years ago
@@xruan6582 Right! That's the funny thing about ReLU - it either "does nothing" (leaves the input the same) or it "outputs nothing" (zero). But by sometimes doing one and sometimes doing the other, it is effectively making a logic decision for every neuron based on the input value, and that's enough computational power to build arbitrarily complex functions. If you want to follow the biological analogy, you can fairly accurately say that each neuron in a ReLU net is firing or not, depending on whether the weighted sum of its inputs exceeds some threshold (either zero, or the bias if your layer has bias). And then a cool thing about ReLU is that they can fire weakly or strongly.
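A small numeric illustration of both replies (nothing here is from the talk): scaling inputs down pushes the sigmoid into its near-linear region around zero, which is exactly where it stops adding useful nonlinearity, whereas ReLU's nonlinearity is the on/off decision at zero.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.linspace(-0.2, 0.2, 5)
print(np.max(np.abs(sigmoid(x) - (0.5 + x / 4))))    # ~2e-4: sigmoid is almost linear near 0

# With lambda1 = 0.01, inputs like 5 and 50 land in that near-linear region,
# so stacked layers behave almost like stacked linear layers.
print(sigmoid(0.01 * np.array([5.0, 50.0])))         # approximately [0.5125 0.6225]

relu = lambda z: np.maximum(z, 0.0)
print(relu(np.array([-3.0, 0.5, 2.0])))              # [0.  0.5 2. ]: pass-through or zero
```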
@joneskiller8 6 months ago
I need that belt.
@jaynsw 4 years ago
Great talk
@yashbhambhu6633 4 years ago
perfection
@mongojrttv 3 years ago
Was curious about machine learning and feel like I'm getting a lesson on how to speak in hieroglyphs.
@jeffg4686 5 months ago
Relevance is just how often a word appears in the input? Never mind; I looked it up. The answer is the similarity of tokens in the embedding space: ones with higher similarity get more relevance.
@Rhannmah 3 years ago
6:41 hahaha this is GODLIKE! The fact that Schmidhuber is on there makes the joke even better!
@user-du5mf6pr9v 3 years ago
Great! thx
@DeltonMyalil 6 months ago
This aged like fine wine.
@beire1569 1 year ago
ooooh I so want to see a documentary about this ==> @25:20
@stevelam5898 1 year ago
I had a tutorial a few hours ago on how to build an LSTM network using TF only; it left me feeling completely stupid. Thank you for showing there is a better way.
@DrummerBoyGames 3 years ago
Excellent vid. I'm wondering about a point made around 22:00 about SGD being "slow but gives great results." I was under the impression that SGD was generally considered pretty OK w/r/t speed, especially compared to full gradient descent? Maybe it's slow compared to Adam, I guess, or in this specific use case it's slow? Perhaps I'm wrong. Anyways, thanks for the vid!
@LeoDirac 3 years ago
I was really just comparing SGD vs Adam there. Adam is usually much faster than SGD to converge. SGD is the standard and so a lot of optimization research has tried to produce a faster optimizer. Full batch gradient descent is almost never practical in deep learning settings. That would require a "minibatch size" equal to your entire dataset, which would require vast amounts of GPU RAM unless your dataset is tiny. FWIW, full batch techniques can actually converge fairly quickly, but it's mostly studied for convex optimization problems, which neural networks are not. The "noise" introduced by the random samples in SGD is thought to be very important to help deal with the non-convexity of the NN loss surface.
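In PyTorch terms the trade-off discussed here is just a choice of optimizer; the hyperparameters below are common defaults rather than values from the talk, and the general pattern (Adam converges in fewer steps, well-tuned SGD with momentum often ends up with results at least as good) is a rule of thumb, not a guarantee.

```python
import torch

model = torch.nn.Linear(10, 1)          # placeholder model

# Adam: adaptive learning rates, usually fast to converge with little tuning.
opt_adam = torch.optim.Adam(model.parameters(), lr=1e-3)

# SGD with momentum: typically needs a tuned learning rate (and a decay schedule),
# converges more slowly, but is the "slow but gives great results" option discussed above.
opt_sgd = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
```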