
The math behind Attention: Keys, Queries, and Values matrices 

Serrano.Academy
146K subscribers
196K views

This is the second of a series of 3 videos where we demystify Transformer models and explain them with visuals and friendly examples.
Video 1: The attention mechanism at a high level • The Attention Mechanis...
Video 2: The attention mechanism with math (this one)
Video 3: Transformer models • What are Transformer M...
If you like this material, check out LLM University from Cohere!
llm.university
00:00 Introduction
01:18 Recap: Embeddings and Context
04:46 Similarity
11:09 Attention
20:46 The Keys and Queries Matrices
25:02 The Values Matrix
28:41 Self and Multi-head attention
33:54 Conclusion

Category: Science

Published: May 17, 2024

Comments: 283
@SerranoAcademy · 8 months ago
Hello all! In the video I made a comment about how the Key and Query matrices capture low and high level properties of the text. After reading some of your comments, I've realized that this is not true (or at least there's no clear reason for it to be true), and probably something I misunderstood while reading in different places in the literature and threads. Apologies for the error, and thank you to all who pointed it out! I've removed that part of the video.
@tantzer6113 · 8 months ago
No worries. It might help to pin this comment to the top. Thanks a lot for the video.
@chrisw4562 · 3 months ago
Thanks for the note. That comment actually sounds very reasonable to me. If I understand this right, keys and queries help to determine the context.
@Rish__01 · 8 months ago
This might be the best video on attention mechanisms on YouTube right now. I really liked the fact that you explained matrix multiplications with linear transformations. It brings a whole new level of understanding with respect to embedding space. Thanks a lot!!
@SerranoAcademy · 8 months ago
Thank you so much! I enjoy seeing things pictorially, especially matrices, and I'm glad that you do too!
@maethu · 5 months ago
This is really great, thanks a lot!
@JosueHuaman-oz4fk · 2 months ago
That is what many disseminators lack: explaining things with the mathematical foundations. I understand that it is difficult to do so. However, you did it, and in an amazing way. The way you explained the linear transformation was epic. Thank you.
@JTedam · 5 months ago
I have watched more than 10 videos trying to wrap my head around the paper "Attention Is All You Need". This video is by far the best. I have been trying to assess why it is so effective at explaining such a complex concept, and why the concept is hard to understand in the first place. Serrano explains the concepts step by step, without making any assumptions. It helps a great deal. He also uses diagrams, showing animations along the way as he explains. As for the architecture, there are so many layers condensed into it. It has obviously evolved over the years, with multiple concepts interlaced into the attention mechanism, so it is important to break it down into its parts and take each one at a time - positional encoding, tokenization, embedding, feed forward, normalization, neural networks, the math behind it, vectors, query-key-values, etc. Each of these needs explaining, or perhaps a video of its own, before putting them together. I am not quite there yet, but this has improved my understanding a great deal. Serrano, keep up your approach. I would like to see you cover other areas such as Transformers with human feedback, the new Qstar architecture, etc. You break it down so well.
@SerranoAcademy · 5 months ago
Thank you for such a thorough analysis! I do enjoy making the videos a lot, so I'm glad you find them useful. And thank you for the suggestions! Definitely RLHF and QStar are topics I'm interested in, so hopefully soon there'll be videos of those!
@blahblahsaurus2458 · 2 months ago
Did you also try reading the original Attention is All you Need paper, and if so, what was your experience? Was there too much jargon and math to understand?
@visahonkanen7291 · 1 month ago
Agree, an excellent video.
@JTedam · 1 month ago
@blahblahsaurus2458 Too much jargon, obviously intended for those already familiar with the concepts. The diagram appears upside down and is not intuitive at all. Nobody has attempted to redraw the architecture diagram in the paper. It follows no particular convention at all.
@fcx1439 · 3 months ago
This is definitely the best-explained video on the attention model; the original paper sucks because there is no intuition at all, just simple words and crazy math equations where I can't tell what's going on.
@user-tl3ix3xf3j · 8 months ago
This is unequivocally the best introduction to Transformers and Attention Mechanisms on the entire internet. Luis Serrano has guided me all the way from Machine Learning to Deep Learning and onto Large Language Models, maximizing the entropy of my AI thinking, allowing for limitless possibilities.
@JonMasters · 1 month ago
💯 agree. Everything else is utter BS by comparison. I’ve never tipped someone $10 for a video before this one ❤
@puwanatsangkhapreecha7847 · 1 day ago
Best video explaining what the query, key, and value matrices are! You saved my day.
@computersciencelearningina7382 · 2 months ago
This is the best description of Keys, Query, and Values I have ever seen across the internet. Thank you.
@23232323rdurian · 8 months ago
you explain very well Luis. Thank you. It's HARD to explain complicated topics in a way people can easily understand. You do it very well.
@SerranoAcademy · 8 months ago
Thank you! :)
@rohitchan007 · 6 months ago
Please continue making videos. You're the best teacher on this planet.
@WhatsAI · 8 months ago
The best explanation I've seen so far! Really cool to see how much closer the field is getting to understanding those models instead of being so abstract thanks to people like you, Luis! :)
@__redacted__ · 5 months ago
I really like how you're using these concrete examples and combining them with visuals. These really help build an intuition on what's actually happening. It's definitely a lot easier for people to consume than struggling with reading academic papers, constantly looking things up, and feeling frustrated and unsure. Please keep creating content like this!
@channel8048 · 8 months ago
Just the Keys and Queries section is worth the watch! I have been scratching my head on this for an entire month!
@SerranoAcademy · 8 months ago
Thank you! :)
@guitarcrax127 · 8 months ago
Amazing video. Pushed forward my understanding of attention by quite a few steps and helped me build an intuition for what's happening under the hood. Eagerly waiting for the next one.
@SeyyedMohammadLoghmanDastgheyb · 8 months ago
This is the best video that I have seen about the concept of attention! (I have seen more than 10 videos but none of them was like this.) Thank you so much! I am waiting for the next videos that you have promised! You are doing a great job!
@Chill_Magma · 8 months ago
Honestly you are the best content creator for learning Machine learning and Deep learning in a visual and intuitive way
@lijunzhang2788 · 8 months ago
Great explanation. I was waiting for this after your first video on the attention mechanism! You are so talented at explaining things in easily understandable ways! Thank you for the effort put into this, and keep up the great work!
@aravind_selvam · 8 months ago
This video is, without a doubt, the best video on transformers and attention that I have ever seen.
@ChujiOlinze · 8 months ago
Thanks for sharing your knowledge freely. I have been waiting patiently. You add a different perspective that we appreciate. Looking forward to the 3rd video. Thank you!
@SerranoAcademy · 8 months ago
Thank you! So glad you like the videos!
@andresfeliperiostamayo7307 · 15 days ago
The best explanation I've seen of Transformers. Thank you!
@joelegger2570 · 6 months ago
These are the best videos I have seen so far for understanding how Transformers / LLMs work. Thank you. I really like math, but it is good that you keep the math simple so one doesn't lose the overview. You really have a talent for explaining complex things in a simple way. Greetings from Switzerland.
@ganapathysubramaniam · 6 months ago
Absolutely the best set of videos explaining the most discussed topic. Thank you!!
@etienneboutet7193 · 8 months ago
Great video as always ! Thank you so much for this quality content.
@TheMotorJokers · 8 months ago
Thank you, really good job on the visualization! They make the process really understandable.
@chiboreache · 8 months ago
very nice and easy explanation, thanks!
@user-zq8bd7iz4e · 8 months ago
The best explanation I've ever seen of the attention mechanism, amazing.
@alexrypun · 6 months ago
Finally! This is the best of the tons of videos/articles I've seen/read. Thank you for your work!
@dekasthiti · 1 month ago
This really is one of the best videos explaining the purpose of K, Q, V. The illustrations provide a window into the math behind the concepts.
@brainxyz · 8 months ago
Amazing explanation. Thanks a lot for your efforts.
@snehotoshbanerjee1938 · 6 months ago
One of the best videos on attention. Such a complex subject taught in a simple manner. Thank you!
@MrMacaroonable · 5 months ago
This is absolutely the best video that clearly illustrates and explains why we need V, K, Q in attention. Bravo!
@joshuaohara7704 · 7 months ago
Amazing video! Took my intuition to the next level.
@RoyBassTube · 16 days ago
Thanks! This is one of the best explanations of Q, K & V I've heard!
@awinashjha · 8 months ago
This is probably "the best video" on this topic.
@januaymagori4642 · 8 months ago
Today I have understood the attention mechanism better than ever before.
@devmum2008 · 2 months ago
This is a great video, with clarity on Keys, Queries, and Values. Thank you!
@shuang7877 · 7 days ago
A professor here - preparing for my course and trying to find an easier way to talk about these ideas. I learned a lot! Thank you!
@johnschut164 · 5 months ago
Your explanations are truly great! You have even understood that you sometimes have to ‘lie’ first to be able to explain things better. My sincere compliments! 👊
@lengooi6125 · 4 months ago
Simply the best explanation on this subject. Crystal clear. Thank you.
@deniz517 · 8 months ago
The best video I have ever watched about this!
@kranthikumar4397 · 2 months ago
This is one of the best videos on attention and Q, K, V so far. Thank you for the detailed explanation.
@leilanifrost771 · 2 months ago
Math is not my strong suit, but you made these mathematical concepts so clear with all the visual animations and your concise descriptions. Thank you so much for the hard work and making this content freely accessible to us!
@redmond2582 · 5 months ago
Amazing explanation of very difficult concepts. The best explanation I have found on the topic so far.
@sreelakshminarayanan.m6609 · 1 month ago
Best video for getting a clear understanding of transformers.
@Chill_Magma · 8 months ago
Excellent explanation.
@sheiphanshaijan1249 · 8 months ago
Brilliant Explanation.
@SerranoAcademy · 8 months ago
Thank you! :)
@danielmoore4311 · 6 months ago
Excellent job! Please continue making videos that break down the math.
@antraprakash2562 · 4 months ago
This is one of the best videos I've come across for understanding embeddings and attention. Looking forward to more such explanations that can simplify such complex mechanisms in the AI world. Thanks for your efforts.
@MrSikesben · 4 months ago
This is truly the best video explaining each stage of a transformer, thanks man
@EkShunya · 8 months ago
Thank you it was a superb explanation 🤩
@bzaruk · 6 months ago
MAN! I have no words! Your channel is priceless! thank you for everything!!!
@vasanthakumarg4538 · 4 months ago
This is the best video I have seen explaining the attention mechanism. Keep up the good work!
@brandonheaton6197 · 8 months ago
Amazing explanation. I am a professional pedagogue and this is stellar work
@chrisw4562 · 3 months ago
Thank you for the great tutorial. This is the clearest explanation I have found so far.
@rollingstone1784 · 27 days ago
@SerranoAcademy If you want to arrive at the same notation as in the mentioned paper, Q times K_transpose, then the orange is the query and the phone is the key here. Then you calculate q times Q times K_transpose times key_transpose (as mentioned in the paper). Remark: the paper uses "sequences", described as "row vectors". However, one usually uses column vectors. Using row vectors, the linear transformation is a left multiplication a times A, and the dot product is written as a times b_transpose. Using column vectors, the linear transformation is A times a, and the dot product is written as a_transpose times b. This, in my opinion, is the standard notation, e.g. writing Ax = b and not xA = b.
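A compact restatement of the two conventions contrasted above (an editorial sketch, not from the video): with row vectors, the paper's convention, a word x is transformed as x' = xW, the similarity of a query and a key is q k^T, and the full score table is QK^T. With column vectors, the transformation is x' = Wx, the similarity is q^T k, and with queries and keys stored as columns the score table is Q^T K. Both conventions produce the same table of query-key dot products, just laid out transposed.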
@naveensubramanian4876 · 8 months ago
Are these slides available somewhere for reference? It will be a great help. Thanks
@user-ff7fu3ky1v · 7 months ago
Great explanation. I just really needed the third video. Hope you will post it soon.
@ThinAirElon · 7 months ago
This is great! In the next video can you please include why we need sine and cosine functions for positional encoding? What's the intuition behind it? If we add this vector to the embedding vector, what happens?
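For readers with the same question: this video doesn't cover it, but the sinusoidal encoding from the original "Attention Is All You Need" paper is

    PE(pos, 2i)   = sin(pos / 10000^(2i / d_model))
    PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))

and it is added elementwise to the embedding vector, so every position gets a unique, smoothly varying pattern that the attention dot products can pick up on.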
@0xSingletOnly · 4 months ago
I'm going to try implement self-attention and multi-head attention myself, thanks so much for doing this guide!
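For anyone attempting the same exercise, here is a minimal NumPy sketch of single-head scaled dot-product self-attention (an illustrative sketch with random, untrained weight matrices, not code from the video):

    import numpy as np

    def softmax(z, axis=-1):
        z = z - z.max(axis=axis, keepdims=True)    # subtract max for numerical stability
        e = np.exp(z)
        return e / e.sum(axis=axis, keepdims=True)

    def self_attention(X, W_q, W_k, W_v):
        # X: (n_words, d_model) embeddings; W_q, W_k, W_v: (d_model, d_head)
        Q, K, V = X @ W_q, X @ W_k, X @ W_v
        scores = Q @ K.T / np.sqrt(K.shape[-1])    # scaled dot-product similarities
        weights = softmax(scores, axis=-1)         # each row sums to 1
        return weights @ V                         # weighted average of the value vectors

    rng = np.random.default_rng(0)
    n_words, d_model, d_head = 4, 8, 8
    X = rng.normal(size=(n_words, d_model))
    W_q, W_k, W_v = (rng.normal(size=(d_model, d_head)) for _ in range(3))
    out = self_attention(X, W_q, W_k, W_v)
    print(out.shape)   # (4, 8): one updated vector per word

Multi-head attention repeats this with several independent W_q, W_k, W_v triples and concatenates the results before a final linear projection.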
@fenggao5305 · 8 months ago
I can't wait for the third video any more - when will it appear?
@alnouralharin · 2 months ago
One of the best explanations I have ever watched
@o.k.4599 · 2 months ago
I haven't blinked my eyes for a sec. 👏🏼🙏🏼
@sadiaafrinpurba9179 · 8 months ago
Thank you for the explanation.
@OpenAITutor · 8 months ago
You are a master!
@BABA-oi2cl · 5 months ago
Thanks a lot for this. I always got terrified of the maths that might be there but the way you explained it all made it seem really easy ❤
@tankado_ndakota · 18 days ago
Amazing video - that's what I was looking for. I needed the mathematical background to understand what is happening behind the scenes. Thank you, sir!
@joehannes23 · 5 months ago
Great video - I finally understood all the concepts in their context.
@kylelau1329 · 5 months ago
I've watched over 10 Transformer architecture tutorial videos, and this one is so far the most intuitive way to understand it. Really good work! Natural language processing is a hard topic, and this tutorial kind of reveals the black box of large language models.
@MarkusEicher70 · 6 months ago
Hi Luis. Thank you for this video. I'm sure this is a very good way to explain this complex topic, but I just can't get it into my brain yet. I'm currently doing the Math for Machine Learning specialization on Coursera and brushing up my algebra and calculus skills, which are way too low. In any case, you got me involved in this, and now I will grind through it till I make it. I'm sure the pain will become less and the fog will lighten up. 😊
@panagiotiskyriakis795 · 3 months ago
Great and intuitive explanations! Well done!
@shannawallace7855 · 6 months ago
I had to read this research paper for my Intro to AI class, and it's obviously written for people who already have a lot of background knowledge in this field, so being a newbie I was so lost lol. Thanks for breaking it down and making it easy to understand!
@pavangupta6112 · 5 months ago
Very well explained. Got a bit closer to understanding attention models.
@Wise_Man_on_YouTube · 2 months ago
"This step is called softmax" . 😮😮😮 Today I understood why softmax is used. Such a beautiful function. And such a great way to demonstrate it.
@_ncduy_ · 2 months ago
This is the best video for people trying to understand basic knowledge about transformer, thank you so much ^^
@user-eg8mt4im1i · 5 months ago
Amazing video and explanations, thank you !!
@saintcodded2918 · 3 months ago
This is powerful yet so simple. Thanks
@knobbytrails577 · 5 months ago
Best video on this topic so far!
@PeterGodek2 · 5 months ago
Best video so far on this topic
@Ludwighaffen1 · 5 months ago
Great video series! Thank you! That helped a ton 🙂 One small remark: the concept of the "length" of a vector that you use here confused me. Here, I guess you take the point of view of a programmer: len(vector) outputs the number of dimensions of the vector. However, for a mathematician, the length of a vector is its norm, also called magnitude (the square root of x^2 + y^2).
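A two-line illustration of the distinction made above (illustrative values, NumPy assumed):

    import numpy as np
    v = np.array([3.0, 4.0])
    print(len(v))             # 2   -> number of dimensions (the "length" used in the video)
    print(np.linalg.norm(v))  # 5.0 -> Euclidean norm/magnitude, sqrt(3^2 + 4^2)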
@MrMehrd · 8 months ago
Fast forward watched, seems to be good , thx will watch
@aldotanca9430 · 6 months ago
Thanks, very useful. I love the way you explain things here and on Coursera.
@EkShunya · 8 months ago
The scaling factor in scaled dot-product attention can be understood as roughly the distance between points: in higher dimensions, the expected distance between two random points grows roughly like sqrt(dimensions).
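For reference, the scaling being discussed is the 1/sqrt(d_k) factor in the paper's scaled dot-product attention:

    Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V

where d_k is the dimension of the key vectors; dividing by sqrt(d_k) keeps the dot products from growing with the dimension before the softmax is applied.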
@ayoubelmhamdi7920 · 8 months ago
so great video
@deveshnandan323 · 2 months ago
Sir , You are a Blessing to New Learners like me , Thank You , Big Respect.❤
@davidking545 · 6 months ago
Thank you so much! the image at 24:29 made this whole concept click immediately.
@user-jz8hr5fo9e · 18 days ago
Great Explanation. Thank you so much
@SulkyRain · 4 months ago
Love the simplification you brought !!! super
@celilylmaz4426 · 5 months ago
This video has the best explanations of the QKV matrices and linear layers among the resources I've come across. I don't know why, but people seem uninterested in explaining what's really happening at each step, which leaves loads of vague points. Still, the video could've been further improved with more concrete examples and numbers. Thank you.
@EdouardCarvalho82 · 8 months ago
Such great content. You really have a gift for distilling complex concepts! Thank you. Slight typo though on the softmax (from 16:45): it is vector-valued, so the numerator must be calculated for each component... in the slides the fractions cancel out to 1...
@SerranoAcademy · 8 months ago
Thank you for the kind words, and for the observation! Yes you're right, I'll keep that in mind for the next videos.
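For readers following that exchange, the componentwise softmax being described is the standard one:

    softmax(z)_i = exp(z_i) / sum_j exp(z_j)

i.e. each score in a row is exponentiated and then divided by the sum of the exponentials in that row, so the row of attention weights sums to 1.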
@cooperwu38 · 2 months ago
Super clear ! Great video !!
@danherman212nyc · 2 months ago
I studied linear algebra during the day on Coursera and watch YouTube videos at night on state-of-the-art machine learning. I'm amazed by how fast you learn with Luis. I've learned everything I was curious about. Thank you!
@SerranoAcademy · 2 months ago
Thank you, it’s an honor to be part of your learning journey! :)
@poojapalod · 8 months ago
This is one of the best resources on the attention mechanism. The key, query, value explanation was just awesome. Just one question though: how do we know that the key matrix captures high-level concepts, the query matrix captures low-level concepts, and the value matrix is more focused on predicting the next word? Can you share code or a playbook which validates this?
@SerranoAcademy · 8 months ago
Thank you! That's a great question. I saw it in a thread somewhere, but I'm now doubting its validity. I may remove that part of the video. It may be better to look at K and Q as 'working together', rather than having separate functions. But if I find out more I'll post it here.
@rollingstone1784 · 27 days ago
@SerranoAcademy At 13:23, you show a matrix-vector multiplication with a column vector (rows of the table times columns of the vector) by right-multiplication. On the right side, maybe you could use, in addition to "is sent to", the icon "orange'" (orange prime). This would show the multiplication in a clearer way. Remark: you use a matrix-vector multiplication here (using a row of the matrix and the word as a column on the right of the matrix). If you use row vectors, then the word vector should be placed horizontally on the left of the matrix and, in the explanation, a column of the matrix has to be used. The result is then a row vector again (maybe a bit hard to sketch).
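A quick numerical check of the equivalence between the two conventions discussed in this comment (illustrative values, NumPy assumed):

    import numpy as np
    rng = np.random.default_rng(1)
    A = rng.normal(size=(3, 3))
    x_row = rng.normal(size=(1, 3))      # the word as a row vector
    row_result = x_row @ A               # row-vector convention: left-multiplication
    col_result = A.T @ x_row.T           # column-vector convention, with the transposed matrix
    print(np.allclose(row_result, col_result.T))  # True: same numbers, transposed layout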
@alit5690 · 8 months ago
Thanks for the first two videos; they are great! Where can we find the third video?
@SerranoAcademy · 8 months ago
Thank you, I’m so glad you like them! I’m building the third one, it’ll be out in 3-4 weeks
@alit5690 · 8 months ago
@SerranoAcademy Great! Thank you, we all look forward to it.
@DanteNoguez · 5 months ago
Amazing. Thanks a lot for this!
@wiktormigaszewski8684 · 4 months ago
Yep, a truly terrific video. Congrats!
@Hiyori___ · 3 months ago
God sent video. So incredibly well put
@BrikeshKumar987 · 4 months ago
Thank you so much!! I watched several videos and none could explain the concept so well.
@SerranoAcademy · 4 months ago
Thanks, I'm so glad you enjoyed it! Lemme know if you have suggestions for more topics to cover!
@bonadio60 · 2 months ago
As always, great content! Thanks
@Chill_Magma · 8 months ago
Hello Dr. Serrano. What is the reason for comparing the query of the i-th word with the keys of the other words, rather than comparing the query of the i-th word with the queries of the other words? I understood that queries extract fine-tuned embedded features and keys capture general concepts, but the different-weights aspect still confuses me. Is it because having two different weight matrices allows more variability in the linear transformations? Similar to the reason we use multiple gates, each with its own weights, to capture patterns about remembering and forgetting, correct?
@Chill_Magma · 8 months ago
I watched the video again. It turns out the linear transformations' job is to improve the embedding by moving relevant embeddings closer together and pushing non-relevant embeddings further apart. The different weights of the queries and keys, and the combination of them, allow room for more complex linear transformations.
@SerranoAcademy · 8 months ago
@Chill_Magma Yes, absolutely! The K and Q matrices work together to make this embedding as good as possible, which means separating words in order to move them around better.
@Chill_Magma · 8 months ago
@SerranoAcademy Thanks very much for your reply, Dr. Serrano.
@alieskandarian5258 · 4 months ago
It was fascinating to me; I searched a lot for a math-focused explanation and couldn't find one, so thanks for this. Please do more 😅 with more complex ones.