
Transformers explained | The architecture behind LLMs 

AI Coffee Break with Letitia
49K subscribers · 25K views

Published: 26 Sep 2024

Comments: 107
@YuraCCC 8 months ago
Thanks for the explanation. At 9:19: Shouldn't the order of multiplication be the opposite here? E.g. x1 (vector) * Wq (matrix) = q1 (vector). Otherwise I don't understand how we get the 1x3 dimensionality at the end.
@AICoffeeBreak 8 months ago
Oh, shoot, messed up the order in the animations there. You are right. Sorry, pinning your comment.
@YuraCCC 8 months ago
No problem, thanks for clarifying that, and thanks again for the great video! @AICoffeeBreak
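For readers double-checking the shapes discussed above, here is a minimal numpy sketch (the numbers are made up, only the shapes matter): with the row-vector convention, the token embedding multiplies the projection matrix from the left, which is exactly the corrected order.

```python
import numpy as np

# A 4-dimensional token embedding projected to a 3-dimensional query vector.
x1 = np.array([[1.0, 0.5, -2.0, 0.3]])   # row vector, shape (1, 4)
Wq = np.random.randn(4, 3)               # query projection, shape (4, 3)

q1 = x1 @ Wq                             # (1, 4) @ (4, 3) -> (1, 3)
print(q1.shape)                          # (1, 3)

# The other convention uses column vectors: then Wq has shape (3, 4) and
# q1 = Wq @ x1 with x1 of shape (4, 1), as @scifaipy9301 notes below.
```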
@scifaipy9301 3 months ago
The vectors should be column vectors.
@uw10isplaya 3 months ago
Had to go back and rewatch a section after I realized I'd been spacing out staring at the coffee bean's reactions.
@AICoffeeBreak 2 months ago
@Clammer999 4 months ago
Thanks so much for this video. I’ve gone through a number of videos on transformers and this is much easier to grasp and understand for a non-data scientist like myself.
@AICoffeeBreak 4 months ago
You're very welcome!
@heejuneAhn 3 months ago
BEST of BEST explanation: 1) visually, 2) intuitively, 3) with numerical examples. And your English is easier for foreigners to listen to than a native speaker's.
@MachineLearningStreetTalk 8 months ago
Epic as always 🤌
@AICoffeeBreak 8 months ago
Thanks, Tim!
@Thomas-gk42 8 months ago
Understood about 10%, but I like these videos and intuitively feel their usefulness.
@AICoffeeBreak 8 months ago
@mccartym86 7 months ago
I think I had at least 10 aha moments watching this, and I've watched many videos on these topics. Incredible job, thank you!
@AICoffeeBreak 7 months ago
Wow, thank You for this wonderful comment!
@DaveJ6515 8 months ago
You know how to explain things. This one is not easy: I can see the amount of work that went into this video, and it was a lot. I hope that your career takes you where you deserve.
@AICoffeeBreak 8 months ago
Thanks for watching and thanks for the kind words. All the best to you as well!
@xxlvulkann6743 5 months ago
This is a very well-made explanation. I hadn't known that the feedforward layers only received one token at a time. Thanks for clearing that up for me! 😁
@l.suurmeijer1382 8 months ago
Absolute banger of a video. Wish I had seen this when I was learning about transformers in uni last year :-)
@AICoffeeBreak 8 months ago
Haha, glad I could help. Even if a bit late.
@M4ciekP 8 months ago
How about a video explaining SSMs?
@AICoffeeBreak 8 months ago
✍️
@AICoffeeBreak 7 months ago
Psst: This will be the video coming up in a few days. It's in editing right now.
@M4ciekP 7 months ago
Yaay! @AICoffeeBreak
@phiphi3025 8 months ago
Thanks, you helped me so much in explaining Transformers to my PhD advisors.
@AICoffeeBreak 8 months ago
This is really funny. In what field are you doing your PhD? 😅
@tildarusso 8 months ago
As far as I am aware, word embeddings have changed from legacy static embeddings like Word2Vec/GloVe (as in the famous queen = woman + king - man metaphor) to BPE & unigram, and this change gave me quite a headache, as most papers do not mention any details of their "word embedding". Perhaps, Letitia, you can make a video to clarify this a bit for us.
@AICoffeeBreak 8 months ago
Great suggestion, thanks!
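To make the distinction raised in the comment above concrete, here is a toy sketch (the vocabulary and the helper toy_tokenize are purely illustrative stand-ins for real BPE/unigram tokenizers): subword tokenization decides how text is split into pieces, while the embedding table, whether static like Word2Vec/GloVe or trained jointly with the Transformer, maps each piece to a vector.

```python
import numpy as np

# Hypothetical subword vocabulary; real BPE/unigram vocabularies are learned
# from data and contain tens of thousands of pieces.
vocab = {"un": 0, "believ": 1, "able": 2}

def toy_tokenize(word):
    """Greedy longest-match split into known pieces (a stand-in for BPE)."""
    pieces, rest = [], word
    while rest:
        for end in range(len(rest), 0, -1):
            if rest[:end] in vocab:
                pieces.append(rest[:end])
                rest = rest[end:]
                break
        else:
            raise ValueError(f"cannot tokenize {rest!r}")
    return pieces

embedding_table = np.random.randn(len(vocab), 8)       # one 8-d vector per piece

pieces = toy_tokenize("unbelievable")                   # ['un', 'believ', 'able']
vectors = embedding_table[[vocab[p] for p in pieces]]   # shape (3, 8)
print(pieces, vectors.shape)
```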
@SamehSyedAjmal 8 months ago
Thank you for the video! Maybe an explanation on the Mamba Architecture next?
@AICoffeeBreak 7 months ago
The Mamba and SSM beans are roasting as we speak.
@rahulrajpvr7d 8 months ago
Tomorrow I have my thesis evaluation and I was thinking about watching that video again, but the YouTube algorithm suggested it without me searching for anything. Thank you, YouTube algo. 😅❤🔥
@AICoffeeBreak 8 months ago
It read your mind.
@cosmic_reef_17 8 months ago
Thank you very much for the very clear explanations and detailed analysis of the transformer architecture. You're truly the 3blue1brown of machine learning!
@AICoffeeBreak 8 months ago
@manuelafernandesblancorodr6366 7 months ago
What a wonderful video! Thank you so much for sharing it!
@AICoffeeBreak 7 months ago
Thank you too for this wonderful comment!
@DatNgo-uk4ft 8 months ago
Great Video!! Nice improvement over the original
@AICoffeeBreak 8 months ago
Glad you think so!
@jcneto25 8 months ago
Best didactic explanation of Transformers so far. Thank you for sharing it.
@AICoffeeBreak 7 months ago
Wow, thanks! Glad it's helpful.
@abhishek-tandon 8 months ago
One of the best videos on transformers that I have ever watched. Views 📈
@AICoffeeBreak 8 months ago
Do you have examples of others you liked?
@connor-shorten 8 months ago
Awesome! Epic Visuals!
@AICoffeeBreak 8 months ago
Thanks, Connor!
@dannown 8 months ago
Really appreciate this video.
@AICoffeeBreak 8 months ago
So glad!
@darylallen2485 5 months ago
Letitia, you're awesome and I look forward to learning more from you.
@xyphos915 8 months ago
Wow, this explanation on the difference between RNNs and Transformers at the end is what I was missing! I've always heard that Transformers are great because of parallelization but never really saw why until today, thank you! Great video!
@AICoffeeBreak 8 months ago
Oh, this makes me happy!
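A rough numpy sketch of the contrast praised above (shapes and values are made up): the RNN update has to walk through the sequence one position at a time, while self-attention touches all positions with a single matrix product, which is what makes it so easy to parallelize during training.

```python
import numpy as np

n, d = 5, 8                          # sequence length and hidden size (made up)
X = np.random.randn(n, d)            # one embedding per token
Wh, Wx = np.random.randn(d, d), np.random.randn(d, d)

# RNN: each hidden state depends on the previous one, so the loop over
# positions is inherently sequential.
h = np.zeros(d)
for t in range(n):
    h = np.tanh(h @ Wh + X[t] @ Wx)

# Self-attention: all pairwise scores come from one matrix product, so every
# position is processed at once (learned projections omitted for brevity).
scores = X @ X.T / np.sqrt(d)                                  # (n, n)
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
out = weights @ X                                              # (n, d), in parallel
```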
@muhammedaneesk.a4848 8 months ago
Thanks for the explanation 😊
@AICoffeeBreak 8 months ago
Thanks for watching!
@tomoki-v6o 8 months ago
Well explained, as you promised.
@AICoffeeBreak 8 months ago
@DerPylz 8 months ago
Wow, you've come a long way since your first transformer explained video!
@mumcarpet109 8 months ago
Your videos have helped visual learners like me so much, thank you.
@AICoffeeBreak 8 months ago
Happy to hear that!
@16876 8 months ago
What a thorough and much-anticipated overview, laid out so coherently. Thank you!
@AICoffeeBreak 8 months ago
Our pleasure! We should have done this video much earlier, considering that our old Transformer Explained is our most watched video to date. 😅
@paprikar 8 months ago
Here we go! Thank you for the content.
@HarishAkula-df8gs 5 months ago
Amazing explanation, Thank you! Just discovered your channel and I really like how the difficult topics are demystified.
@AICoffeeBreak 5 months ago
Thanks a lot!
@jonas4223 8 months ago
Today I had the problem that I needed to understand how Transformers work. I searched on YouTube and found your video 20 minutes after release. What perfect timing!
@AICoffeeBreak 8 months ago
What timing!
@bartlomiejkubica1781 7 months ago
Thank you! Finally, I start to get it...
@GarySuffield-w9p 8 months ago
Really well done and easy to follow, thank you
@AICoffeeBreak 8 months ago
Glad you enjoy it!
@volpir4672 8 months ago
That's great. I'm a little stuck on the special mask token... I'll keep digging. Good info; the video is a good explanation, and it allows for more experimentation instead of relying on open-source models whose components can look like a black box to noobs like me :)
@supanutsookkho2749 2 months ago
Great video and a good explanation. Thanks for your hard work on this amazing video!!
@AICoffeeBreak 2 months ago
Glad you liked it!
@Ben_D. 6 months ago
...ok. After binging some of your vids, I now need to go make coffee. 😆
@AICoffeeBreak 6 months ago
Please do!
@ArthasDKR 8 months ago
Excellent explanation. Thank you!
@AICoffeeBreak 8 months ago
@partywen 3 months ago
Super informative and helpful! Thanks a lot!
@AICoffeeBreak 3 months ago
Oh wow, thanks!
@zahrashah6567 5 months ago
What a wonderful explanation😍 Just discovered your channel and absolutely loving the explanations as well as visuals😘
@AICoffeeBreak 5 months ago
Thank you! Welcome!
@Jeshhhhhh 1 month ago
Oh my goddess in disguise, I thank you for saving me from depths of hell. Lots of love
@AICoffeeBreak 1 month ago
Glad to help. 😆
@pfever 7 months ago
Just discovered your channel and this is great! Thank you! :D
@AICoffeeBreak 7 months ago
Thank you! Hope to see you again soon in the comments.
@MuruganR-tg9yt 7 months ago
Thank you. Nice explanation 😊
@AICoffeeBreak 7 months ago
Thank You for your visit!
@zbynekba 8 months ago
❤ Letitia, thank you for great visualization and intuition.

For inspiration: In the original paper, the decoder utilizes the output of the encoder by running a cross-attention process. Why does GPT not use an encoder? As you've mentioned, the encoder is typically used for classification, while the decoder is for text generation. They are never used in combination. Why is this the case?

Missing intuition: Why does the cross-attention layer inside the decoder take the values from the ENCODER's output to create the enhanced embeddings (as a weighted mix)? Intuitively, I would use the values from the DECODER.
@AICoffeeBreak 8 months ago
Thanks for your thoughts! Encoders are sometimes used in combination with decoders, right? The most famous example is the T5 architecture.
@zbynekba 8 months ago
Thanks for your prompt reply. Hence, understanding the concept and intuition behind feeding the encoder output into the decoder is essential. I found only this one video on encoder-decoder cross-attention: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-Dqjq4Gxdhng.htmlsi=gtLzNxAU0pUGyLvk In it, Lennart emphasizes the observation that, based on the original equations, the enhanced embeddings are calculated as a weighted sum of ENCODER values. Inside a DECODER, I would rather expect the DECODER values to pass through. Letitia, I am sure you will resolve this mystery. 🍀
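A minimal numpy sketch of encoder-decoder cross-attention as described in the original Transformer paper (shapes and values here are made up): the queries come from the decoder, the keys and values from the encoder, so the weighted mix is indeed built from encoder values, while the residual connection is what carries the decoder's own representation forward.

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

d = 8                                      # made-up model size
enc_out = np.random.randn(6, d)            # encoder output: 6 source tokens
dec_x   = np.random.randn(3, d)            # decoder states: 3 target tokens
Wq, Wk, Wv = (np.random.randn(d, d) for _ in range(3))

Q = dec_x   @ Wq                           # queries from the DECODER
K = enc_out @ Wk                           # keys    from the ENCODER
V = enc_out @ Wv                           # values  from the ENCODER

mix = softmax(Q @ K.T / np.sqrt(d)) @ V    # (3, d): weighted mix of encoder values
out = dec_x + mix                          # residual keeps the decoder's own info
```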
@l3nn13 8 months ago
great video
@AICoffeeBreak 8 months ago
Thanks for the visit and for leaving the comment!
@LEQN 6 months ago
Awesome video :) thanks!
@AICoffeeBreak 6 months ago
Thank you for watching and for your wonderful comment!
@420_gunna 8 months ago
Awesome video, thank you! I love the idea of you revisiting older topics -- either as a 201 or as a re-introduction. "Attention combines the representation of input vector's value vectors, weighted by the importance score (computed by the query and key vectors)."
@AICoffeeBreak 8 months ago
Thanks for your appreciation!
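The takeaway quoted in the comment above can be written out for a single token; a toy example with three tokens and 3-dimensional queries, keys, and values (numbers invented purely for illustration):

```python
import numpy as np

q1 = np.array([1.0, 0.0, 1.0])             # query of token 1
K  = np.array([[1.0, 0.0, 1.0],
               [0.0, 1.0, 0.0],
               [1.0, 1.0, 0.0]])            # keys of tokens 1..3
V  = np.array([[1.0, 2.0, 3.0],
               [4.0, 5.0, 6.0],
               [7.0, 8.0, 9.0]])            # values of tokens 1..3

scores  = K @ q1 / np.sqrt(3)               # importance of each token for token 1
weights = np.exp(scores) / np.exp(scores).sum()   # softmax over the 3 tokens
z1      = weights @ V                       # weighted sum of the value vectors
```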
@kallamamran 8 months ago
Phew 😳
@ehudamitai 8 months ago
At 11:14, the weighted sum is the sum of 3 vectors of 3 elements each, but the result is a vector of 4 elements. Which, conveniently, is the same size as the input vector. Could there be a missing step there?
@AICoffeeBreak 8 months ago
Yes, there is a missing back transformation to 4 dimensions I skipped. :) Well spotted!
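That skipped step is typically an output projection; a minimal sketch under the assumption that the head works in 3 dimensions while the embeddings are 4-dimensional:

```python
import numpy as np

d_model, d_head = 4, 3
head_out = np.random.randn(1, d_head)     # weighted sum of the 3-d value vectors
Wo = np.random.randn(d_head, d_model)     # output projection (the skipped step)

z = head_out @ Wo                         # back to 4 dimensions
print(z.shape)                            # (1, 4) -- matches the input embedding size
```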
@benjamindilorenzo 7 months ago
What a great video. It could still expand more and really sum up every sub-part, connecting each one to a clear visualization or a clear step of what happens to the information at each time step and how its "transformation" progresses over time. So I think you could redo this video and really make it monkey-proof for folks like me. But beware: if you look, for example, at the StatQuest version, it's too slow and too repetitive, and it also doesn't really capture what goes on inside the Transformer once all the steps are stacked together. Great work!
@heejuneAhn 2 months ago
Thanks for your video. I have a question about the inference process. For example, when I have an input prompt of 2 tokens = {t1, t2}, we will get the output {o1, o2, o3}. We take only o3 and make a new input sequence {t1, t2, o3}. Then we will get another output {o'1, o'2, o'3, o'4}. Here are my questions: when we use causal masking for attention, is o1 = o'1, o2 = o'2, and so on? Another question: even though the mask guarantees causal attention, the matrix calculation is still performed, which means the computation is spent anyway. How can we reduce the computational cost in this case?
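There is no reply to this one in the thread, but as a hedged sketch of the standard answer (not from the video): with a causal mask, earlier positions cannot see the new token, so their outputs are unchanged, and implementations avoid recomputing them by caching keys and values and processing only the newest token's query at each step (KV caching). The helper decode_step below is purely illustrative:

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

d = 8                                       # toy hidden size
Wq, Wk, Wv = (np.random.randn(d, d) for _ in range(3))
K_cache, V_cache = [], []                   # grow by one row per generated token

def decode_step(x_new):
    """Attend from the newest token only; past keys/values come from the cache."""
    K_cache.append(x_new @ Wk)
    V_cache.append(x_new @ Wv)
    q = x_new @ Wq                          # query for the newest position only
    K, V = np.stack(K_cache), np.stack(V_cache)   # (t, d) each
    w = softmax(K @ q / np.sqrt(d))         # only past and current tokens exist, so it is causal
    return w @ V                            # output for the newest position

for x in np.random.randn(5, d):            # feed 5 tokens one at a time
    out = decode_step(x)
```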
@DaeOh 8 months ago
Everything makes sense except multiple attention heads. Each layer has only one set of Q, K, V, O matrices. But 8 attention heads per layer? I want to understand that.
@AICoffeeBreak 8 months ago
Think about it this way: in one layer, instead of having one head telling you what to pay attention to, you have 8. In other words, instead of one person shouting at you the things they want you to pay attention to, you have 8 people shouting simultaneously. This is beneficial because it has an ensembling effect (the effect of a voting parliament; think of Random Forests, which are an ensemble of Decision Trees). I don't know if this helps, but I thought I'd give explaining this another shot.
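A shape-level numpy sketch of the usual convention (the 512/8 split matches the original Transformer, but the exact numbers are illustrative): each of the 8 heads has its own smaller Q/K/V projections, the heads run independently, and their outputs are concatenated and mixed back to the model dimension.

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

n_tokens, d_model, n_heads = 5, 512, 8
d_head = d_model // n_heads                       # 64 dimensions per head

X  = np.random.randn(n_tokens, d_model)
Wq = np.random.randn(n_heads, d_model, d_head)    # one projection per head
Wk = np.random.randn(n_heads, d_model, d_head)
Wv = np.random.randn(n_heads, d_model, d_head)
Wo = np.random.randn(n_heads * d_head, d_model)

heads = []
for h in range(n_heads):                          # 8 independent "people shouting"
    Q, K, V = X @ Wq[h], X @ Wk[h], X @ Wv[h]
    heads.append(softmax(Q @ K.T / np.sqrt(d_head)) @ V)   # (n_tokens, d_head)

out = np.concatenate(heads, axis=-1) @ Wo         # concat heads, mix back to d_model
```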
@nmfhlbj 6 months ago
Hi! Can I ask how you get the dimension (d)? All I know is that a dimension can be read off square matrices, and the dot product in the attention formula is Q·K^T. If we're using 1x3 matrices, we'll get a 1x1 matrix, i.e. 1 dimension, so how do you get 3? Unless it's a 3x1 matrix beforehand, so we'll get a 3x3, i.e. 3-dimensional, matrix. Thank you!
@AICoffeeBreak 6 months ago
Hi, if you mean the mistake at 10:00, then the problem is that I have written matrix times vector when I should have written vector times matrix! (or I could have used column vectors instead of row vectors). Is this what you mean?
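To restate the shapes from this exchange in code (toy numbers): with row vectors, q·k^T is a 1x1 score, and the d in the scaling factor √d is the length of the key vector (3 here), not the shape of that result.

```python
import numpy as np

q = np.array([[1.0, 2.0, 0.5]])        # 1x3 row vector (query)
k = np.array([[0.5, 1.0, 1.0]])        # 1x3 row vector (key)

d = k.shape[-1]                        # d = 3, the dimensionality of the key vector
score = (q @ k.T) / np.sqrt(d)         # (1x3)(3x1) -> a 1x1 scalar score
print(d, score.shape)                  # 3 (1, 1)
```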
@josephvanname3377 8 months ago
I want to train a transformer that eats a row of matrices instead of just a row of vectors.
@ai-interview-questions 8 months ago
Thank you, Letitia!
@AICoffeeBreak 7 months ago
Our pleasure!
@davidespinosa1910 9 days ago
Time is quadratic, but memory is linear -- see the FlashAttention paper. But the number of parameters is constant -- that's the magic! Thanks for the excellent videos! 👍
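A back-of-envelope sketch of the scaling this comment describes (toy formulas for a single self-attention layer; the helper attention_costs and its constants are rough illustrations): the weight matrices depend only on the model width, while the attention-score work grows with the square of the sequence length, and FlashAttention avoids materializing the full n x n score matrix.

```python
d_model = 512                                  # illustrative model width

def attention_costs(n):
    params      = 4 * d_model * d_model        # Wq, Wk, Wv, Wo: independent of n
    score_flops = n * n * d_model              # Q @ K^T work: quadratic in n
    naive_mem   = n * n                        # full score matrix, if materialized
    return params, score_flops, naive_mem

for n in (128, 1024, 8192):
    print(n, attention_costs(n))               # params stay constant, the rest grows
```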