
Lecture 7: Convolutional Networks 

Michigan Online
52K views

Published: 24 Sep 2024

Comments: 29
@jh97jjjj
@jh97jjjj 1 year ago
Great lecture, for free. Thank you, University of Michigan and Professor Justin.
@temurochilov
@temurochilov 2 years ago
Thank you. I found answers to questions I had been looking for for a long time.
@faranakkarimpour3794
@faranakkarimpour3794 2 years ago
Thank you for the great course.
@tatianabellagio3107
@tatianabellagio3107 3 years ago
Amazing! PS: Although I am sorry for the guy with the coughing attack...
@kobic8
@kobic8 1 year ago
Yeah, it kind of made it hard to concentrate. This was 2019, right before COVID struck the world hahah 😷
@hasan0770816268
@hasan0770816268 3 years ago
33:10 stride, 53:00 batch normalization
@alokoraon1475
@alokoraon1475 7 months ago
I have this great package for my university course.❤
@jijie133
@jijie133 3 years ago
Great.
@rajivb9493
@rajivb9493 3 years ago
At 35:09, the expression for the output size of a strided convolution is (W - K + 2P)/S + 1. For W=7, K=3, P=(K-1)/2=1, and S=2 we get (7 - 3 + 2*1)/2 + 1 = 3 + 1 = 4, i.e. a 4x4 output... however, the slide shows the output as 3x3 instead of 4x4 in the right-hand corner... is that correct?
@bibiworm
@bibiworm 3 years ago
I have the same question.
@krishnatibrewal5546
@krishnatibrewal5546 3 years ago
They are two different situations: the slide's calculation is done without padding, whereas the formula is written considering padding (see the sketch after this thread).
@rajivb9493
@rajivb9493 3 years ago
@@krishnatibrewal5546 thanks a lot, yes, you're right.
@bibiworm
@bibiworm 3 years ago
@@krishnatibrewal5546 thanks.
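
For readers following this thread: a minimal sketch of the output-size formula from the slide, showing why the two answers differ (padding is the only change; the function name is mine):

```python
def conv_output_size(W, K, S=1, P=0):
    """Spatial output size of a convolution: (W - K + 2P) / S + 1."""
    return (W - K + 2 * P) // S + 1

# The slide's 3x3 result assumes no padding:
print(conv_output_size(W=7, K=3, S=2, P=0))  # 3
# With "same"-style padding P = (K - 1) // 2 = 1, the formula gives 4:
print(conv_output_size(W=7, K=3, S=2, P=1))  # 4
```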
@eurekad2070
@eurekad2070 3 years ago
Thank you for the excellent video! But I have a question here: at 1:05:42, after layer normalization, every sample in x has shape 1xD, while μ has shape Nx1. How do you perform the subtraction x - μ?
@yicheng1991
@yicheng1991 3 years ago
I wonder if gamma and beta having shape 1 x D is a typo, and whether it should be N x 1. If it is not a typo, the subtraction just uses the broadcasting mechanism, as in numpy.
@eurekad2070
@eurekad2070 3 years ago
@@yicheng1991 Broadcasting mechanism makes sense. Thank you.
@vaibhavdixit4377
@vaibhavdixit4377 1 month ago
Just finished watching the lecture. As I understand it, X (1 x C x H x W) is the shape of the input consumed at once by the algorithm, and the shapes of the computed means and standard deviations are given in terms of the batch size (N x 1 x 1 x 1), since each value uniquely corresponds to one input (1 x C x H x W). It is a late reply, but I am posting it in case someone else scrolls through with a similar question!
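
A small numpy sketch of the broadcasting question discussed in this thread (shapes follow the discussion above: x is N x D, the per-sample statistics are N x 1, and gamma/beta are 1 x D; the data values are made up for illustration):

```python
import numpy as np

N, D = 4, 5
x = np.random.randn(N, D)              # batch of N samples, D features each

# Layer norm: statistics are computed per sample, over the D features.
mu = x.mean(axis=1, keepdims=True)     # shape (N, 1)
var = x.var(axis=1, keepdims=True)     # shape (N, 1)

gamma = np.ones((1, D))                # learnable scale, shape (1, D)
beta = np.zeros((1, D))                # learnable shift, shape (1, D)

# (N, D) - (N, 1) broadcasts each sample's mean across its D features,
# and (1, D) * (N, D) broadcasts gamma/beta across the N samples.
y = gamma * (x - mu) / np.sqrt(var + 1e-5) + beta
print(y.shape)  # (N, D)
```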
@puranjitsingh1782
@puranjitsingh1782 2 years ago
Thanks for an excellent video, Justin!! I had a quick question: how do the conv filters change the 3D input into a 2D output?
@sharath_9246
@sharath_9246 2 years ago
When you take the dot product of a 3D image (e.g. 3*32*32) with a filter (3*5*5), you get a 2D feature map (28*28), because each dot product between the filter and an image patch produces a single number (see the sketch below).
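
To make the reply above concrete, here is a minimal (slow, loop-based) sketch of a single 3x5x5 filter sliding over a 3x32x32 input with stride 1 and no padding; each position yields one scalar, so the channel dimension disappears and the output is 28x28:

```python
import numpy as np

x = np.random.randn(3, 32, 32)   # input: 3 channels, 32x32 spatial
w = np.random.randn(3, 5, 5)     # one filter spanning all 3 channels

H_out = 32 - 5 + 1               # 28, from (W - K)/S + 1 with S=1, P=0
out = np.zeros((H_out, H_out))   # 2D: the channel axis is summed away

for i in range(H_out):
    for j in range(H_out):
        # Dot product of the filter with a 3x5x5 patch -> a single scalar.
        out[i, j] = np.sum(x[:, i:i+5, j:j+5] * w)

print(out.shape)  # (28, 28); a bank of F such filters gives (F, 28, 28)
```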
@intoeleven
@intoeleven 3 years ago
Why don't they use batch norm and layer norm together?
@bibiworm
@bibiworm 3 years ago
1:01:30 What did he mean by “fusing BN with an FC layer or Conv layer”?
@krishnatibrewal5546
@krishnatibrewal5546 3 years ago
You can have conv-pool-batchnorm-relu or fc-bn-relu; batch norm can be inserted between any layers of the network.
@bibiworm
@bibiworm 3 years ago
@@krishnatibrewal5546 thanks a lot!
@yahavx
@yahavx 1 year ago
Because both are linear operators, you can simply compose them after training (think of them as matrices A and B; at test time you compute C = A*B and use that in place of both).
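
A sketch of that fusion for a fully-connected layer followed by batch norm (variable names are mine; the algebra follows the standard test-time BN definition): since BN at test time is a fixed per-feature scale and shift, it folds into the layer's weights and bias.

```python
import numpy as np

def fuse_fc_bn(W, b, gamma, beta, running_mean, running_var, eps=1e-5):
    """Fold a test-time BatchNorm into the preceding FC layer y = W @ x + b.

    BN(y) = gamma * (y - mean) / sqrt(var + eps) + beta is itself affine,
    so the composition collapses to a single affine layer.
    """
    scale = gamma / np.sqrt(running_var + eps)  # per-output scale
    W_fused = scale[:, None] * W                # scale each output row of W
    b_fused = scale * (b - running_mean) + beta
    return W_fused, b_fused

# Check: the fused layer matches FC followed by BN on random data.
D_in, D_out = 6, 4
W, b = np.random.randn(D_out, D_in), np.random.randn(D_out)
gamma, beta = np.random.randn(D_out), np.random.randn(D_out)
mean, var = np.random.randn(D_out), np.random.rand(D_out) + 0.5

x = np.random.randn(D_in)
y_separate = gamma * ((W @ x + b) - mean) / np.sqrt(var + 1e-5) + beta
Wf, bf = fuse_fc_bn(W, b, gamma, beta, mean, var)
print(np.allclose(Wf @ x + bf, y_separate))  # True
```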
@rajivb9493
@rajivb9493 3 years ago
In batch normalization at test time, at 59:52, what averaging equations are used to average the mean and the standard deviation sigma? During the lecture some mention was made of an exponential moving average of the mean and sigma vectors... please advise.
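
What the lecture refers to is an exponential moving average of the per-batch statistics; a minimal sketch of the common update rule follows (the momentum value 0.9 is an assumption, not from the lecture, and frameworks differ in the exact constant):

```python
import numpy as np

D = 8
momentum = 0.9                    # assumed; e.g. 0.9 or 0.99 in practice
running_mean = np.zeros(D)
running_var = np.ones(D)

for step in range(100):           # during training
    batch = np.random.randn(32, D)          # one mini-batch, shape (N, D)
    mu = batch.mean(axis=0)                 # per-feature batch mean
    var = batch.var(axis=0)                 # per-feature batch variance
    running_mean = momentum * running_mean + (1 - momentum) * mu
    running_var = momentum * running_var + (1 - momentum) * var

# At test time the fixed running_mean / running_var replace the batch
# statistics, so each example's output no longer depends on its batch.
```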
@ibrexg
@ibrexg 10 months ago
Well done! Here is more explanation of normalization: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-sxEqtjLC0aM.html&ab_channel=NormalizedNerd
@magic4266
@magic4266 1 year ago
Sounds like someone was building Duplo the entire lecture.
@brendawilliams8062
@brendawilliams8062 1 year ago
Thomas the Tank Engine?
@park5605
@park5605 4 months ago
ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem . ahem ahem. ahe ahe he he HUUUJUMMMMMMMMMMMM