
Vision Transformer Basics 

Samuel Albanie
19K subscribers
19K views

An introduction to the use of transformers in computer vision.
Timestamps:
00:00 - Vision Transformer Basics
01:06 - Why Care about Neural Network Architectures?
02:40 - Attention is all you need
03:56 - What is a Transformer?
05:16 - ViT: Vision Transformer (Encoder-Only)
06:50 - Transformer Encoder
08:04 - Single-Head Attention
11:45 - Multi-Head Attention
13:36 - Multi-Layer Perceptron
14:45 - Residual Connections
16:31 - LayerNorm
18:14 - Position Embeddings
20:25 - Cross/Causal Attention
22:14 - Scaling Up
23:03 - Scaling Up Further
23:34 - What factors are enabling effective further scaling?
24:29 - The importance of scale
26:04 - Transformer scaling laws for natural language
27:00 - Transformer scaling laws for natural language (cont.)
27:54 - Scaling Vision Transformer
29:44 - Vision Transformer and Learned Locality
Topics: #computervision #ai #introduction
Notes:
This lecture was given as part of the 2022/2023 4F12 course at the University of Cambridge.
It is an update to a previous lecture, which can be found here: • Neural network archite...
Links:
Slides (pdf): samuelalbanie.com/files/diges...
References for papers mentioned in the video can be found at
samuelalbanie.com/digests/2023...
For related content:
- Twitter: / samuelalbanie
- personal webpage: samuelalbanie.com/
- YouTube: / @samuelalbanie1

Published: 19 Jun 2024

Comments: 37
@rldp, 5 months ago
This is one of the best explanations of not just ViT, but transformers in general that I have watched. Excellent video
@srinjoy.bhuiya, 8 days ago
One of the greatest explanations of the concepts of transformers for a computer vision researcher.
@whale27, 6 months ago
Unbelievable quality. Happy to be here before this channel blows up.
@SamuelAlbanie1, 6 months ago
Thanks!
@capsbr2100, 3 months ago
Goodness, what a remarkable video. This is by far the best explanation video I have watched about vision transformers.
@thetechnocrack, 4 months ago
This is one of the cleanest explanations of ViTs I have come across. Amazing work, Samuel! Inspiring.
@continuallearning8366, 6 months ago
Excellent video! Honored to be here before it goes viral 🙏🏾
@piclkesthedrummer6439, 25 days ago
This is by far one of the most accurate, yet understandable and intuitive explanations of such a hard concept; you did a better job of explaining it than the authors! Very impressive!
@jesusalpaca7170, 3 months ago
For a beginner like me, I would say, this is the introductory video we were waiting for :')
@PotatoKaboom, 6 months ago
I've held guest lectures on the inner workings of transformers myself, but I still learned a bunch from this! Everything after 22:15 was very exciting to watch, very well presented and easy to understand! Very well done, I subscribed for more :)
@user-iy6gq8yd3p, 6 months ago
Thank you for making this wonderful video. So clear! Please continue your awesome video work!
@thecheekychinaman6713, 3 months ago
I was studying up on Transformers and ViTs half a year ago, and recently checked back to find this (to my surprise). Great clear explanations, can tell CAML is in great hands!
@abhimanyuyadav2685, 6 months ago
Your weekly AI news was really useful. Please bring it back.
@MdAkmolMasud, 16 days ago
The best explanation of ViT..
@aminkarimi1068, 1 month ago
The best video to easily understand ViT
@user-fv5oj4qk1l, 6 months ago
🎯 Key Takeaways for quick navigation:
00:00 🧠 The Evolution of AI and Computer Vision
- General methods leveraging computation prove most effective in AI development.
- Evolution from handcrafted features to Convolutional Neural Networks (CNNs) and then to Transformers, showcasing a reduction in inductive biases and an increase in data-driven approaches.
01:09 🤖 Neural Network Architectures
- Importance of network architecture in building intelligent machines.
- Distinction between network architecture and network parameters, focusing on resource limitations and efficient design.
02:32 💡 Introduction to Transformers
- Transformers' dominance in AI, initially in Natural Language Processing (NLP) and then in Computer Vision.
- Discussion of why Transformers took time to transition from NLP to Computer Vision.
03:57 🌐 Understanding Transformers: Encoder and Decoder
- Explanation of the Transformer architecture with its encoder and decoder components.
- Different variants of Transformers: encoder-only, decoder-only, and encoder-decoder architectures.
05:33 🔍 Applying Transformers to Computer Vision
- Vision Transformers (ViT) process images by slicing them into patches, using position embeddings and Transformer encoders.
- The methodology of transforming images into a sequence of embeddings for the Transformer encoder.
07:08 🔗 Multi-Head Attention in Transformers
- Detailed explanation of the multi-head attention mechanism in Transformers.
- Role of queries, keys, and values in facilitating communication between different embeddings.
09:12 🧩 Transformer Encoder Blocks and Scaling
- The structure and function of Transformer encoder blocks, including multi-head attention and the MLP.
- Importance of residual connections and layer normalization in optimizing Transformer models.
11:05 🚀 Scaling and Hardware Influence in AI
- The impact of scaling and hardware advancements on Transformer model performance.
- Discussion of the exponential increase in computational resources for training large models.
13:50 🛠 MLP and Optimization in Transformers
- Role of the multi-layer perceptron (MLP) in the Transformer architecture for independent processing of embeddings.
- Importance of non-linearities like ReLU and GELU in Transformer models.
15:00 ⚙️ Residual Connections and Layer Normalization
- Implementation and significance of residual connections and layer normalization in Transformers.
- These components facilitate gradient flow and stable learning in deep network training.
17:05 🌐 Positional Embeddings in Transformers
- Explanation of positional embeddings in Transformers, necessary for maintaining spatial information in sequences.
- Different methods of implementing positional embeddings in Transformer models.
19:27 🔄 Cross Attention and Causal Attention in Transformers
- Discussion of cross attention and causal attention.
(Made with HARPA AI)
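As a rough illustration of the steps this summary describes (patch slicing, a linear patch projection, position embeddings, and scaled dot-product attention between queries, keys and values), here is a minimal NumPy sketch. The image size, patch size, embedding width D and random initialisations are assumptions for the example, not settings taken from the lecture.

```python
# Minimal sketch of a ViT-style front end: patchify -> project -> add positions -> attend.
import numpy as np

rng = np.random.default_rng(0)

def patchify(image, patch_size):
    """Slice an (H, W, C) image into non-overlapping flattened patches."""
    H, W, C = image.shape
    P = patch_size
    blocks = image.reshape(H // P, P, W // P, P, C).transpose(0, 2, 1, 3, 4)
    return blocks.reshape(-1, P * P * C)            # (num_patches, P*P*C)

def single_head_attention(X, W_q, W_k, W_v):
    """Scaled dot-product self-attention over a sequence of embeddings X of shape (N, D)."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v             # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # (N, N) pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # each output mixes all values

# Toy sizes (assumed for the example): 32x32 RGB image, 8x8 patches, D = 64.
image = rng.standard_normal((32, 32, 3))
tokens = patchify(image, patch_size=8)              # (16, 192) flattened patches
D = 64
W_embed = rng.standard_normal((tokens.shape[1], D)) * 0.02
tokens = tokens @ W_embed                           # linear patch projection -> (16, 64)
tokens = tokens + rng.standard_normal(tokens.shape) * 0.02  # stand-in for learned position embeddings

W_q, W_k, W_v = (rng.standard_normal((D, D)) * 0.02 for _ in range(3))
out = single_head_attention(tokens, W_q, W_k, W_v)
print(out.shape)                                    # (16, 64): one updated embedding per patch
```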
@EigenA, 3 months ago
Great work!
@sbdzdz, 6 months ago
Very well presented!
@gnorts_mr_alien, 1 month ago
man, what a video. thank you!
@minute_machine_learning5362, 1 month ago
great explanation
@rmmajor, 2 months ago
That is a masterpiece of a video! Many thanks for your work!
@amoghjain, 6 months ago
Thank you so very much for sharing your insights and intuition behind soooo many concepts.
@SamuelAlbanie1, 6 months ago
Glad it was helpful!
@soylentpink7845, 6 months ago
Very good video - contents & its presentation!
@zainbaloch5541, 2 months ago
Thank you so much!
@mattsong6875, 6 months ago
Thanks for such an informative and educational video.
@vil9386, 4 months ago
Wow, this video helped me a lot in understanding attention and ViT. Packed with all the logic needed to design a solution using the latest methods as of this day.
@tomrichter9021, 4 months ago
Great video
@shyb8079, 24 days ago
Thank you for your content.
@flamboyanta4993, 6 months ago
Excellent and clearly communicated. Thanks. A question about 20:05: when discussing positional embeddings, the legend of the waves says dim 4, ..., dim 7. Here, does dim refer to the length of the patch embedding D? As in, we'll get as many sine waves as there are D dims?
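For reference, a minimal NumPy sketch of standard sinusoidal position embeddings, assuming "dim" in such plots indexes the embedding dimension D (each dimension is one sinusoid of a different frequency, so there are D curves in total):

```python
# Standard sinusoidal position embeddings; column d of the result is the curve
# that would be labelled "dim d" in a plot like the one discussed above.
import numpy as np

def sinusoidal_position_embeddings(num_positions, D):
    """Return a (num_positions, D) matrix; even dims are sines, odd dims are cosines."""
    pos = np.arange(num_positions)[:, None]     # positions 0..N-1 as a column
    i = np.arange(D // 2)[None, :]              # frequency index per sin/cos pair
    angles = pos / (10000 ** (2 * i / D))       # lower dims vary fast, higher dims slowly
    pe = np.zeros((num_positions, D))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

pe = sinusoidal_position_embeddings(num_positions=50, D=8)
print(pe[:, 4:8].shape)  # the four curves that would be labelled "dim 4" ... "dim 7"
```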
@geomanisgod, 3 months ago
A+++ quality from other planets.
@flamboyanta4993, 6 months ago
Another question: at 30:00, discussing how early attention layers tend to focus on local features and deeper ones on more global features of the input, I didn't understand the significance of the x-axis (sorted attention head). Is this just a count of how many attention heads there are in the respective block? Which suggests that in the large-data regime, even early attention blocks with 14+ heads will also tend to observe the features globally? Is this correct? And thank you in advance!
@miraclemaxicl, 3 months ago
More Compute Is All You Need
@iez, 3 months ago
any ViTs that are open source?
@capsbr2100, 3 months ago
So for someone approaching this now, working on resource-constrained devices, both for training and inference, it makes more sense to just stick to CNNs?
@AKD-le2kb, 2 days ago
w