04.1 - Natural signals properties and the convolution 

Alfredo Canziani
Published: 28 Aug 2024

Comments: 66
@vaibhavsingh8715 · 3 years ago
This is by far the best lecture on signal properties. I am binging on these videos. Thank you so much, Prof. Canziani.
@alfcnz · 3 years ago
😍😍😍
@mahdiamrollahi8456 · 3 years ago
I almost got the answer to my question from the previous lecture... Thanks, Alfredo. That was a nice lecture, same as before.
@alfcnz · 3 years ago
😇😇😇
@undisclosedmusic4969 · 3 years ago
This lecture is absolutely beautiful
@alfcnz · 3 years ago
Yay! 😇😇😇
@jonathansum9084 · 3 years ago
I am pretty sure this whole online-learning effort is more successful than before. In just 4 hours, we have about 380 views! Keep it up if you can!
@alfcnz · 3 years ago
Haha, I'll try! 😁😁😁
@pauldev8967 · 3 years ago
I could pause whenever I got confused. It's really helpful to offer an online version of such a complicated course. Thank you, prof.
@alfcnz · 3 years ago
You're welcome 😊
@rabirajbanerjee3872 · 3 years ago
This is awesome!!!!!
@alfcnz · 3 years ago
Yay! 🥳🥳🥳
@mohamedrefaat197 · 1 year ago
Awesome lecture! I'm wondering why the convnet isn't sparse compared to the fully connected one? They have a very similar number of parameters.
@alfcnz · 1 year ago
What do you mean it’s not sparse? The convolution matrices are full of zeros. Check out this video: Matrix multiplication, signals, and convolutions ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-d2GixptaHjk.html
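For readers of this thread: here is a minimal sketch of that sparsity in PyTorch (my own illustration, not from the lecture; the sizes are arbitrary). A 1D convolution with a size-3 kernel, written out as multiplication by a banded matrix, makes it visible that most matrix entries are zero.

```python
import torch
import torch.nn.functional as F

# A size-3 kernel sliding over a length-7 signal, written as a matrix product.
x = torch.randn(7)               # input signal
w = torch.tensor([1., 2., 3.])   # kernel weights

# Build the 5 x 7 "convolution matrix": each row is the kernel, shifted right
# by one position. Out of 35 entries, only 15 are non-zero: the matrix is sparse.
A = torch.zeros(5, 7)
for i in range(5):
    A[i, i:i + 3] = w

y_matrix = A @ x
# PyTorch's conv1d (which computes cross-correlation) gives the same result.
y_conv = F.conv1d(x.view(1, 1, -1), w.view(1, 1, -1)).flatten()
print(torch.allclose(y_matrix, y_conv))   # True
```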
@aparnaa4121 · 2 years ago
Hey Alfredo, hope you are doing well. I'm binge-watching these awesome lectures, and even though I know the concepts a little, there is still so much I didn't know. Do you think the commoners can access the assignments that you give to students? Thanks!!
@alfcnz · 2 years ago
They are on the class website. Check the video description. 😉😉😉
@your_name96 · 3 years ago
Thank you for this!!!
@alfcnz · 3 years ago
You're most welcome ☺️
@buoyrina9669 · 2 years ago
32:00 locality, stationarity, and compositionality
@alfcnz · 2 years ago
💪🏻💪🏻💪🏻
@saurabhkumar-wj1nz · 3 years ago
Hello Alfredo, hope you are doing well. I really enjoy your content and appreciate the effort you put into making the presentations. If possible, it would be nice if you could make a video on how actual research is done in this field. Let's say I get some idea; do we then perform a rigorous mathematical formulation of the idea and try to come up with some mathematical justification for it? Or do we just implement the idea using deep-learning libraries? 🙂
@alfcnz · 3 years ago
Yeah, it goes the other way around. You play around with something. Usually nothing works. You try to understand the little that actually worked. You write down some math, describing what you did.
@woddenhorse · 2 years ago
27:16 First time any teacher has led an eye exercise for students with poor vision 🥺🥺
@alfcnz · 2 years ago
🤓🤓🤓
@dr.mikeybee · 3 years ago
Alf, I really enjoy these sessions, but I missed all the Zoom sessions. Were they recorded? Is it possible to watch them?
@alfcnz · 3 years ago
That's what you're currently watching. 😉😉😉
@dimitri30 · 2 months ago
Thank you for sharing. I have one question about the NNs on scrambled data. If I had to make a prediction, I would have said we'd get an accuracy of about 15%, not more, since the number of pixels can help determine which digit it is. So is that enough to reach an accuracy of 83-85%, or is there something else? I supposed that the fully-connected neural network would have duplicated the filters, but there is no change with the scrambled data.
@alfcnz · 2 months ago
I don’t understand the question. Try asking in your native language.
@dimitri30 · 2 months ago
@alfcnz Yes, of course; I think my French explanation was not clear either. I would have assumed that with scrambled data we would get an accuracy of around 15%, not more (which is above 10% because, by counting the number of pixels, the model can get an idea of which digit is most probable). I have trouble understanding how the model can achieve results as "good" as 85% on scrambled data. Does the model count the number of pixels and decide that way, or is there something else? I had assumed that, in reality, the dense model would work like a ConvNet by learning the same kernels multiple times; essentially, we would have weight redundancy to get something resembling a ConvNet. Is it because of the dense network's lack of parameters? If we had given it many more parameters, would it have come back to a ConvNet with weight redundancy to "simulate" the filter's movement? Thank you
@alfcnz · 2 months ago
There's a lot going on in this question. First, let's address the fully-connected model. The model does not care if you scramble the input or if you don't. If smartly initialised, the model will learn *the same* weights, but in a permuted order. That's why the model's performance is (basically) the same before and after permutation. Until here, are you following? Do you have any specific question on this first part of my answer?
@dimitri30 · 2 months ago
@alfcnz Thanks for your reply. I'm sorry for wasting your time; I just didn't pay enough attention to the fact that this is a DETERMINISTIC shuffle.
@alfcnz · 2 months ago
Oh, yes! It is! The point here was to show how convolutional nets should be used only when specific assumptions hold for the input data. 😊😊😊
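To make the weight-permutation argument above concrete, here is a minimal sketch in PyTorch (my own illustration; the layer sizes and names are arbitrary, not the course notebook). Scrambling the pixels with one fixed permutation and reordering the first layer's weight columns the same way gives exactly the same outputs, so a fully-connected net loses nothing on deterministically scrambled data.

```python
import torch

torch.manual_seed(0)
x = torch.randn(4, 28 * 28)        # a batch of flattened "images"
perm = torch.randperm(28 * 28)     # one fixed, deterministic scrambling

fc = torch.nn.Linear(28 * 28, 10)
y = fc(x)

# Scramble the pixels and reorder the weight columns identically:
# the layer computes exactly the same function as before.
fc_perm = torch.nn.Linear(28 * 28, 10)
with torch.no_grad():
    fc_perm.weight.copy_(fc.weight[:, perm])
    fc_perm.bias.copy_(fc.bias)
y_perm = fc_perm(x[:, perm])

print(torch.allclose(y, y_perm, atol=1e-6))   # True: same weights, permuted order
```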
@chrisj2841 · 3 years ago
Hi Alfredo, do you know of a course similar to yours but on RL? By similar I mean a free class with lectures, recitations AND code; I don't mean the elements that are only in this class, those are unique :) Thank you!
@alfcnz · 3 years ago
Perhaps Sergey's course? ru-vid.com/group/PL_iWQOsE6TfURIIhCrlt-wj9ByIVpbfGc
@ismailamir5014 · 3 years ago
Fantastic
@alfcnz · 3 years ago
❤️❤️❤️
@jishanahmed225 · 1 year ago
What did you mean by R^7? Is it like 7 feature variables?
@alfcnz · 1 year ago
You need to include the timestamp if you are talking about anything specific in this video 🙂
@jishanahmed225 · 1 year ago
@alfcnz It was at 48:09, in the discussion of kernels for 1D data.
@SanataniAryavrat · 3 years ago
Awesome....
@alfcnz · 3 years ago
🥰🥰🥰
@youtugeo · 2 years ago
A kernel "looks" at each channel with a different pattern, right? I mean, are the weights of the kernel different for each channel?
@alfcnz · 2 years ago
Not only at the channel, but also at a portion of the domain! If your input is C × H × W (an image), a single kernel will be C × h × w, h ≤ H, w ≤ W.
@youtugeo · 2 years ago
@alfcnz OK, so for a specific kernel, the weights Ci * h * w (where i is the number of channels) are different for each i
@alfcnz · 2 years ago
_C_ is the number of channels. I don't know what _i_ is supposed to be.
@youtugeo · 2 years ago
@alfcnz Sorry, what I should have said is that "i" is the index of a channel: C1 is the first channel, C2 the second one, etc. And for a specific kernel, C1 * w * h are different weights from C2 * w * h.
@alfcnz · 2 years ago
Nope. Again, _C_ is the number of channels. For a colour image, _C_ = 3. If you have _N_ kernels, then you end up having _N * C * h * w_ weights.
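To make those shapes concrete, a quick check in PyTorch (my own snippet; the sizes are arbitrary):

```python
import torch

C, N, h, w = 3, 16, 5, 5   # a colour image (C = 3) and 16 kernels of spatial size 5 x 5
conv = torch.nn.Conv2d(in_channels=C, out_channels=N, kernel_size=(h, w))

print(conv.weight.shape)    # torch.Size([16, 3, 5, 5]): each of the N kernels is C x h x w
print(conv.weight.numel())  # 1200 = N * C * h * w
```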
@bhavinmoriya9216 · 3 years ago
How do you get the drawings to correspond to your hand gestures? How do you do it?
@alfcnz · 3 years ago
Haha, using a Wacom tablet and After Effects. 😅😅😅
@bhavinmoriya9216 · 3 years ago
@alfcnz Seems awesome :) Would love to implement it in my lectures sometime :)
@alfcnz · 3 years ago
Cool! 🥳🥳🥳
@ChrisOffner · 3 years ago
6:33 Why don't you like the term _multilayer perceptron_?
@alfcnz · 3 years ago
Good point! A perceptron assumes a Heaviside step activation, which is clearly non-differentiable, so this name is frowned upon in Yann's lab. (I used to use that term myself.)
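A toy illustration of why that matters for backprop (my own sketch, not from the lecture): the step function is flat on both sides of the jump, so a finite-difference derivative is zero almost everywhere, while a sigmoid has a non-zero derivative everywhere.

```python
import torch

def step(x):                     # Heaviside step: the classic perceptron activation
    return (x > 0).float()

x = torch.tensor([-1.0, 0.5, 2.0])
eps = 1e-3

# Finite-difference "derivative" of the step: zero away from the jump.
grad_step = (step(x + eps) - step(x - eps)) / (2 * eps)
# Analytic derivative of the sigmoid: strictly positive everywhere.
s = torch.sigmoid(x)
grad_sigmoid = s * (1 - s)

print(grad_step)      # tensor([0., 0., 0.])
print(grad_sigmoid)   # tensor([0.1966, 0.2350, 0.1050])
```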
@xXxBladeStormxXx · 3 years ago
Does any lecture discuss batch norm, layer norm, etc.?
@alfcnz · 3 years ago
Uh… the next one. Yann talked about them last year as well.
@xXxBladeStormxXx · 3 years ago
@alfcnz All right, cool; I haven't started the next one yet. Thanks!
@alfcnz · 3 years ago
By next one, I mean L13, coming up next week.
@amritkumar-ge4tn · 3 years ago
awwwwwww
@alfcnz · 3 years ago
❤️❤️❤️