
Visualizing Convolutional Filters from a CNN 

deeplizard
126K views

Published: Aug 22, 2024

Comments: 107
@deeplizard 5 years ago
Check out the blog for this video here: deeplizard.com/learn/video/cNBBNAxC8l4
@debgandharghosh3981 1 year ago
The github repository in the blog isn't available anymore
@Arcaerus 1 year ago
I've been struggling with this subject because my prof can't explain things but you explain it so clearly!!!!!!!!! Thank you so much!
@inbb510 3 years ago
I was looking for this kind of video for ages. Every time I see tutorials on CNNs and building them through code, they never ever explain what sort of filters are being used in the architecture. This video cleared up my confusions on this matter. Thank you very much.
@MuhammadArnaldo 3 years ago
So far this channel is the best for learning machine learning. I hope you keep uploading more videos... about RNN, LSTM, segmentation, spiking NN... and more maybe.
@liammellor3270 5 years ago
Just wanted to say, I love your videos. They are very informative and explained extremely well. Please continue doing work in this area!!
@deeplizard 5 years ago
Thank you, Liam!
@tymothylim6550 3 years ago
Thank you very much for the video! I learnt quite a lot more from seeing the different complexities of the different conv layers!
@HAL9OOOTUBE 6 years ago
Just finished watching all 22 of these videos, they were super helpful! Just getting started with TF and was completely lost without understanding the vocabulary and concepts. Still lost but a little less so now :). Hopefully you decide to keep making these types of videos and I will recommend them to anyone looking to get into ML.
@deeplizard 6 years ago
Hey Javed, thanks for letting me know! I'm glad these videos were helpful for you. If you're interested, I also have a Keras playlist that goes through some basics of building the network, training, predicting, etc. Keras is built on top of Tensorflow and is a higher level neural network API. Either way, good luck on your ML journey! ru-vid.com/group/PLZbbT5o_s2xrwRnXk_yCPtnqqo4_u2YGL
@HAL9OOOTUBE 6 years ago
Yeah I'll probably be taking a look at those too this weekend even though I have to focus on TF for my current project, thanks!
@ygpsk3860 5 years ago
Thank you for these videos! Amazing series... been binge-watching your channel for the past few hours, and I feel like I learned more than I did in the previous few months.
@familywu3869 2 years ago
Same here
@ayush612 5 years ago
Awesome!!! In just 2 videos I feel I have got a deeper intuition of this whole thing! Thank you!
@comalab2387 6 years ago
Surprising how the inner mechanics of a neural network can sometimes be visualized in a humanly comprehensible way. Most of the time I only encounter chaos ^^ Cool demo!
@deeplizard 6 years ago
Totally agree, Coma Lab! And thank you!
@edobr3384 5 years ago
Thanks for the video! I was wondering how to explain what a filter in a CNN is to my students in an easy way, and I found your video :3 ty!
@qusayhamad7243 3 years ago
thank you very much for this clear and helpful explanation.
@alfadhelboudaia1935 3 years ago
I am so hyped to have discovered your channel. I hope you upload videos on GANs, VAEs, LSTM, NFs.
@CosmiaNebula 4 years ago
0:30 overview and link to full post
1:50 the Jupyter notebook
3:20 results
5:00 recall previous lesson
@sinamohamadi9580 3 years ago
absolutely awesome.
@justchill99902 5 years ago
You are right! It is very interesting. One of the best videos in this series!
@deepcodes 4 years ago
Great channel!!! Such an ease to learn these topics.
@davidmitchell9934 6 years ago
Great job on these videos! You're great at explaining the intuition of these analyses. Do you work in a data science field?
@deeplizard 6 years ago
Thank you, David! I'm glad you're liking the videos. My experience in data science comes from personal projects and research. 🤓
@MVTN 5 years ago
Thanks for the video, it was really helpful
@mavee_shah 4 years ago
Deeplizard: thanks for watching the video
Me: No, thank you for even existing and bringing this content to my life. You're a blessing for anyone to find, so thank you!
@nithinkvijayan2708 4 years ago
Your videos are so informative. Glad I found this channel. Thank you, you should have more subs.
@DanielBurrueco 6 years ago
Great video! There's something I don't get yet (I haven't gone through the code): a filter is usually small (3x3, 5x5, 7x7...), but those filters showing patterns seemed huge (compared to the ones I imagined). They must be no less than... 32x32. Are they really that big, or am I missing something?
@deeplizard 6 years ago
Thanks, Daniel! You're right, filters are usually small. In fact, the filters for each convolutional layer within the VGG16 network (used in this video) are all 3x3. The visuals we saw of the gray squares with patterns on them are not the filters themselves. Rather, we're passing the network a 128x128 plain gray image with some random noise with the objective of being able to visualize what sort of input would maximize the activation for any given filter. The outputs from this process are the transformed grey 128x128 images that would maximize the corresponding activations the most. Let me know if this helps clear things up.
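(A minimal sketch of the idea described in this reply, using the current TF2/Keras API rather than the exact code from the video; the layer name, filter index, step count, and step size below are just illustrative choices.)

import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import VGG16

# VGG16 without the classifier head accepts arbitrary input sizes.
model = VGG16(weights="imagenet", include_top=False)

layer_name = "block1_conv2"   # illustrative layer choice
filter_index = 0              # illustrative filter choice
feature_extractor = tf.keras.Model(
    inputs=model.inputs,
    outputs=model.get_layer(layer_name).output,
)

# Start from a plain gray 128x128 image with a little random noise.
img = tf.Variable(np.random.uniform(0.4, 0.6, (1, 128, 128, 3)).astype("float32"))

for _ in range(30):  # a few gradient-ascent steps
    with tf.GradientTape() as tape:
        activation = feature_extractor(img)
        # Loss = mean activation of the chosen filter; we want to maximize it.
        loss = tf.reduce_mean(activation[:, :, :, filter_index])
    grads = tape.gradient(loss, img)
    grads = tf.math.l2_normalize(grads)
    img.assign_add(10.0 * grads)  # update the *input image*, not the network weights

# img now approximates the kind of input that most strongly activates this filter.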
@DanielBurrueco 6 years ago
Hi, I didn't answer before because I didn't completely understand it. But I came across some nice code from the Keras team that does exactly what you said: it creates a loss function that maximizes the activation for the filter whose activation map we want to visualize. This is the code: github.com/keras-team/keras/blob/master/examples/conv_filter_visualization.py I'm not quite sure about how the K.gradients function works, but assuming it works, it's not difficult to visualize any filter of any layer. Amazing. Only after having played with it I can say I understand it. Thanks
@deeplizard 6 years ago
Yes, definitely took me some time to play with the code in this video myself before I fully got what exactly it was doing. Also, yes, that link is to the same code we used here, so you're on the right track! I'm glad you were able to develop an understanding for it. :)
@Biggzlar 4 years ago
It's weird: the video is so well done, but you neglect to mention the entire idea of the processing code, namely that we perform gradient ascent and, instead of updating our network with the gradients, we update the input image. Thus the image gets morphed into a matrix that maximizes the filter activation. Came here to learn this but had to read the blog post instead.
@noeltam75 6 years ago
Sorry, I am still not able to understand the last part, when you illustrate the dog face. Why are we not seeing the edges of the object in your demonstration? Instead we see only random patterns. I know you explained it in the video, but can you simplify what you mean again?
@deeplizard 6 years ago
Hey Noel - So, in this video, we were passing the network a plain gray image with some random noise with the objective of being able to visualize what sort of input would maximize the activation for any given filter. The images that we're visualizing from these filters are the transformed grey images that would maximize the corresponding activations the most. In the previous video of this playlist on CNNs, however, we did something a bit different. There, we looked at the patterns that a given filter was able to detect from _specific images_ (dog faces, etc.) that highly activated the filter. Let me know if this helps clear things up.
@sgrimm7346 2 years ago
Nice video....you should have more views. My question is, HOW do the filters learn which features to look for? Example, how does one filter learn vertical lines and another filter learn horizontal lines? And eventually, the higher order filters learning angles and textures? Thank you.
@gamma_v1 6 years ago
The previous 21 videos were very clear. But this one had a lot of gaps. For example the code explanation was very short. Great work though. Keep up the good work.
@deeplizard 6 years ago
Hey Gamma - Appreciate the feedback. My intention was to give a high-level overview of the code and focus more on how we can interpret the visualizations. Maybe I'll add a video to the Keras playlist (below) where we go over the specifics of this program. In that series, the focus is on all the code-level details :) ru-vid.com/group/PLZbbT5o_s2xrwRnXk_yCPtnqqo4_u2YGL
@Paul-lt7ij 6 years ago
Sweet voice :)
@mohamedmahdy969 5 years ago
Hi, how are you? You really did a great job. I watched your previous videos in this list, and I can say I got a good understanding because of your careful preparation of the videos and your simple presentation of the information. Yet with this video, I watched it twice and still have the feeling that there are a lot of missing parts for me.
1- I understand that, in this video, you are passing a gray input image with random noise and you are displaying to us (or visualizing) the input images that will give the most activation for the filters. Am I correct? Please correct me if I am wrong. (So, your input is gray images with random noise, and what we are seeing at the end of the video are the instances of the input images which most activated certain filters.)
2- If I am right about the first part, I don't understand what conv1-block1, conv2_block3, and so on are. Can you explain it in a comment? And why are we seeing many of the images, not only just the one that most activated the filter? Sorry if I asked the wrong question; maybe I misunderstood the whole video. In that case, can you simply correct me?
@deeplizard 5 years ago
Hey Mohamed - You are absolutely correct for number (1), and your questions are completely valid! For (2), "conv1-block1" means the first convolutional layer in the first group of convolutional layers. So, for example, a network may have five convolutional layers that are then followed by some dense layers, and then another group of five convolutional layers followed again by some more fully connected layers. We're calling these groups of convolutional layers "convolutional blocks." So, the first group would be block1, the second group would be block2. When we want to refer to the first conv layer in block1, we say "conv1-block1." Additionally, the 25 squares for one convolutional layer represent the 25 different filters contained within the layer. Each filter detects a different pattern. Does this all make sense?
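(A quick way to see this naming for yourself, assuming the Keras VGG16 application used in the video; weights=None skips the ImageNet download since only the layer names matter here.)

import tensorflow as tf
from tensorflow.keras.applications import VGG16

model = VGG16(weights=None, include_top=False)
for layer in model.layers:
    if isinstance(layer, tf.keras.layers.Conv2D):
        # Prints names like block1_conv1, block1_conv2, ..., block5_conv3,
        # with each layer's filter count and kernel size (always 3x3 in VGG16).
        print(layer.name, layer.filters, layer.kernel_size)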
@mohamedmahdy969 5 years ago
@@deeplizard Thanks a lot for your response and clarification. You are still dedicated and giving quick responses even 9 months after posting the video. Now I can say that there are sets of convolutional layers spread within the hidden layers. Such a set is called a block. So the first set of convolutional layers is block-1, and conv-3 is the third convolutional layer in that set, depending of course on which block we are talking about. Moreover, each convolutional layer is composed of a number of filters. Therefore, if we are talking about block-1 conv-2, what we are seeing are the instances of the input gray image with random noise that activated the filters in the 2nd convolutional layer in the 1st set (block) of convolutional layers. Thanks again. I hope I got it right.
@deeplizard 5 years ago
Yes, this is a completely accurate explanation! Great job!
@gaureesha9840 5 years ago
In this video, the layers, i.e. conv1-block1, conv2-block3, are just the filters (weights) that we get after each layer. We did not convolve these filters with anything; they are just the filters that we have learned. In the last part she says that when we now apply those filters, i.e. convolve them with dog pics, we get convolutions that actually look like dogs.
@pseudooduesp2805 6 years ago
Thanks for the video.
@deeplizard 6 years ago
Machine Learning / Deep Learning Tutorials for Programmers playlist: ru-vid.com/group/PLZbbT5o_s2xq7LwI2y8_QtvuXZedL6tQU
Keras Machine Learning / Deep Learning Tutorial playlist: ru-vid.com/group/PLZbbT5o_s2xrwRnXk_yCPtnqqo4_u2YGL
@ling6701 5 years ago
Link to previous video is here: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-YRhxdVk_sIs.html
@Kenspectacle 2 years ago
How do the convolutional network layers and blocks work exactly in the example in the video? Like, what does block5_conv2 refer to exactly?
@messapatingy 6 years ago
What were the images used to train this CNN? Wild guesses - Cells, Fabrics, Snakes.
@deeplizard 6 years ago
Hey Andre - The network was trained on images from the Imagenet library: image-net.org/explore
@saigopalpotturi2926 3 years ago
Could you please explain the code line by line for better understanding?
@thespam8385 4 years ago
{
  "question": "Gradient ascent differs from gradient descent in trying to _______________ loss in order to _______________",
  "choices": [
    "maximize / emphasize pattern detection of the filter",
    "minimize / increase accuracy",
    "maximize / identify overfitting",
    "minimize / isolate the activation function"
  ],
  "answer": "maximize / emphasize pattern detection of the filter",
  "creator": "Chris",
  "creationDate": "2019-12-14T01:42:45.651Z"
}
@deeplizard 4 years ago
Thanks, Chris! Just added your question to deeplizard.com
@paragjp 4 years ago
Why do we want to maximize our loss? I did not understand that very clearly. Secondly, once we have the maximum loss, how do we then reduce it back to a minimum? Can you please explain? Thanks
@johanneszwilling 6 years ago
😳 Where do the very first filters come from? Are they always only those four from the beginning, filtering for straight edges up, down, left, right?
@deeplizard 6 years ago
Hey Joe - Are you referring to the filters that are shown at 0:26 in the video? If so, those filters were pulled from the previous video in the playlist on CNNs. In that video, I go into more detail about filters. ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-YRhxdVk_sIs.html I just created these filters for illustration purposes to show what an "edge detector" filter would look like. In general though, all the filters throughout a network are randomly initialized, and the values will change during training. With this being said, networks have far more complex filters than the ones I showed at 0:26.
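(A tiny illustration of this point; the layer sizes here are made up. The filter values exist as soon as the layer is built, start out random, and can be inspected with get_weights(); training would then update them.)

import tensorflow as tf

conv = tf.keras.layers.Conv2D(4, (3, 3))  # 4 filters of size 3x3
model = tf.keras.Sequential([tf.keras.Input(shape=(28, 28, 1)), conv])

kernels, biases = conv.get_weights()
print(kernels.shape)  # (3, 3, 1, 4): four randomly initialized 3x3 filters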
@ismailelabbassi7150 2 years ago
i love you
@rishabjain9275 3 years ago
Hey, what is the difference between block3_conv2 and block2_conv2?
@tallwaters9708 6 years ago
Thanks for the video. Could you please clarify a bit more what the visualisation of the dog faces at the end was? Was that a deep layer's filter applied to the raw input image? Or something else?
@deeplizard 6 years ago
Hey TallWaters - So, in this video, we were passing the network a plain gray image with some random noise with the objective of being able to visualize what sort of input would maximize the activation for any given filter. The images that we're visualizing from these filters are the (transformed grey) images that would maximize the corresponding activations the most. In the previous video of this playlist on CNNs (ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-YRhxdVk_sIs.html), however, we did something a bit different. There, we looked at the patterns that a given filter was able to detect from specific images that highly activated the filter. That's what we were looking at with the dog faces example. Let me know if this helps clear things up.
@tallwaters9708 6 years ago
Oh so you're just taking the raw filter in deeper layers and running it over certain training/test images? But those filters would usually be for blocks with more dimensions right? I mean initially the filters would be like (5, 5, 3) where 3 represents RGB colours. But the later layers would have filters like (8, 8, 64) perhaps? Am I totally misunderstanding? :(
@SvSzYT 3 years ago
First, what is the difference between keras.layers.Conv2D(32, (3, 3)) and keras.layers.Conv2D(256, (3, 3))?
@shamimibneshahid706 4 years ago
Hi, I just don't understand how filters can be visualised. And also, why would you say that some filters are highly activated for some images? How can filters be activated? I mean, filters can only extract features from an input image or matrix. So what does it mean when you say they can be visualised or highly activated?
@messapatingy 6 years ago
I may have spoken too soon - having watched past that point - but even then, I'm not sure what I'm seeing, which can't be good, right?
@deeplizard 6 years ago
Hey Andre - So, in this video, we were passing the network a plain gray image with some random noise with the objective of being able to visualize what sort of input would maximize the activation for any given filter. The images that we're visualizing from these filters are the (transformed grey) images that would maximize the corresponding activations the most. In the previous video of this playlist on CNNs, however, we did something a bit different. There, we looked at the patterns that a given filter was able to detect from specific images that highly activated the filter. At 4:57 in this video, I attempt to make that point. Let me know if this helps clear things up.
@tomwu163 5 years ago
Could you please clarify how the loss function for maximizing the activation works? I don't understand what each gradient ascent step actually updates, since here our trained model already has fixed weights (which is what the loss function for the output of the model updates) and also a fixed input picture. So what can this loss function possibly be maximized on, in order to visualize what the filter is looking for?
@Sikuq 4 years ago
Great video. Thanks. I understand the general model of conv16,maxpool,conv32, maxpool ... flatten, dense , dense, non-linearity. But do all those filter values respectively get added into one image, or is it simply a single filter value given a success rating during training and then those total yes rated filter images use in essence human understanding "gestalt" to a pixel figure we call say 8. So the more layers and the more neurons in each layer give us more fine tuning control to a point? So a cat's two ears is a function of perhaps 100 filtered success rated images?
@deeplizard 4 years ago
Hey Christian - Yes, your latter explanation is correct :) The section called Output Channels And Feature Maps in the blog below may be helpful in this area as well. deeplizard.com/learn/video/k6ZF1TSniYk
@Sikuq 4 years ago
@@deeplizard Thank you so much for your answer and exciting reference. I need to learn more PyTorch, obviously. Every time I think I know something, you have another video I need to learn from. And you have a long list of videos, making Deeplizard the best learning source online, bar none. On your vlog you quit your jobs, but it looks like you have more than two full-time jobs now, lol.
@jeevithavk5084 5 years ago
How do I detect what kind of filter is used in a CNN? Also, how do I visualize this filter? Kindly help me.
@salman261996 4 years ago
Is there a way for me to extract the values of the filters and inspect them?
@sgt.mcgragon359 5 years ago
Hello, does the number of filters depend on the number of nodes?... like one filter per node in a conv layer?
@deeplizard 5 years ago
Yes. Only, the input and output are channels. This is why diagrams of CNNs look like the one at the top of the page here: deeplizard.com/learn/video/k6ZF1TSniYk
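(A short sketch of that point: the number of filters in a conv layer is the number of output channels it produces. The input size below is arbitrary.)

import tensorflow as tf

x = tf.random.normal((1, 64, 64, 3))       # one 64x64 RGB image
y = tf.keras.layers.Conv2D(32, (3, 3))(x)  # a conv layer with 32 filters
print(y.shape)                             # (1, 62, 62, 32): 32 output channels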
@tunkyi7162 5 years ago
Need help, please answer: I would like to know, if you pass a grey image (suppose it is one channel), then after convolving with the filters, why does the image appear colorful, like green and pink, in your video? It has something to do with the RGB channels. Please explain, thanks.
@Mia-vz6yt 4 years ago
Thanks for the video. But may we have the code that you used in the video?
@deeplizard 4 years ago
Hey Mia - The code is based on the blog referenced at the start of the video: blog.keras.io/how-convolutional-neural-networks-see-the-world.html
@akshatgarg6635 4 years ago
img_width img_hight not defined?
@mariaarbenina6551 3 years ago
Hi. I can't find the notebook you're using in this video on your website. I found deep-learning-fundamentals-deeplizard.ipynb, but it doesn't have the Visualizing Convolutional Filters from a CNN part. Where can I find it? Thank you.
@deeplizard 3 years ago
Hey Maria - The code used in this episode is from this original Keras blog: blog.keras.io/how-convolutional-neural-networks-see-the-world.html
As stated there, the author has since updated the blog, now at this link: keras.io/examples/vision/visualizing_what_convnets_learn/
The corresponding GitHub link and Jupyter Notebook for the updated code from the blog are below:
colab.research.google.com/github/keras-team/keras-io/blob/master/examples/vision/ipynb/visualizing_what_convnets_learn.ipynb
github.com/keras-team/keras-io/blob/master/examples/vision/visualizing_what_convnets_learn.py
@mariaarbenina6551 3 years ago
@@deeplizard Thanks! Great series, by the way, thank you for your work! I wish you were my uni professor.
@uaeustream2562 4 years ago
First, this is great work. I get an error when using: grads = K.gradients(loss, input_img)[0], and I am also not sure if I have to insert an image at (I am using it as it is): input_img = model.input. Can you help with running this?
@Blank027-r5p 4 years ago
How can I know the filter values?
@loaialamro9699 4 years ago
Great tutorial. Where can I find the filter images after running the code? I can't find them in the code folder. Thank you.
@deeplizard 4 years ago
At 2:45 in the episode, you can see the path to where the images should be saved. You should create a directory called conv_images inside the directory in which your code resides. The images should be saved in conv_images if you follow the same code shown at 2:45.
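(The save step itself is ordinary file writing; a small sketch under the assumption that you have a result image as a NumPy array. The conv_images directory name comes from the video; the filename pattern is only illustrative.)

import os
import numpy as np
from tensorflow.keras.preprocessing.image import save_img

os.makedirs("conv_images", exist_ok=True)          # create the folder if missing
result = np.random.uniform(0, 255, (128, 128, 3))  # stand-in for a generated image
save_img("conv_images/block1_conv2_filter_0.png", result)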
@BruceSchwartz007 5 years ago
What do you mean by the "first convolutional block of the first convolutional layer"?
@ashwinv8305 4 years ago
conv1-block1" means the first convolutional layer in the first group of convolutional layers. So, for example, a network may have five convolutional layers that are then followed by some dense layers, and then another group of five convolutional layers followed again by some more fully connected layers. We're calling these groups of convolutional layers "convolutional blocks." So, the first group would be block1, the second group would be block2. When we want to refer to the first conv layer in block1, we say "conv1-block1." Additionally, the 25 squares for one convolutional layer represent the 25 different filters contained within the layer. Each filter detects a different pattern.
@sharkk2979 2 years ago
I watched the whole series. I am impressed by Mandy!! Wish I could get a girlfriend like her.
@Paul-lt7ij 6 years ago
Am I the only person who thought her voice was so sweet?
@tobiask5131 5 years ago
What annoys me is this: everyone does the VGG16 model, but what if I actually trained my own Keras model? As far as I can tell, this code only works for this specific example, throwing weird assertion errors without explanation, and it really is no help if you have another model.
@tobiask5131 5 years ago
Now I get errors like "Could not find resource: localhost/conv2d_9/bias", so it really seems to be model-specific. That's just no help at all...
@giovannisinclair9785 5 years ago
What are the requirements of the convolutional neural net?
@deeplizard 5 years ago
Hey Giovanni - Check out the previous video and blog where this is explained: deeplizard.com/learn/video/YRhxdVk_sIs Additionally, the video and blog below explain even more technical details regarding CNNs as well: deeplizard.com/learn/video/k6ZF1TSniYk
@guardrepresenter5099 5 years ago
Are the pictures shown in the video filters or feature maps? Sorry, I'm confused.
@shirleyhe4941 3 years ago
I guess the pictures are feature maps, not filters, so I am confused also.
@ujjwalkumar8173 3 years ago
What are blocks in a convolutional layer?
@deeplizard 3 years ago
conv block == group of conv layers
@ujjwalkumar8173 3 years ago
@@deeplizard Well, I hadn't expected that you would reply, because this video was posted almost 3 years ago. You're still maintaining it... that's something awesome. Loving your series :)
@riop7600 6 years ago
Could you please explain the code in more detail? Thank you.
@deeplizard 6 years ago
Hey Rio - Are you asking about the code specific to this video?
@devsutong 4 years ago
Wish I could contribute to your Patreon page 😒
@matharbarghi 4 years ago
No access to the real code and no discussion of the code in the video...
@deeplizard 4 years ago
We summarize the code from 1:15 to 3:12. The full code is available at the Keras link in the description.
@spamspamer3679 5 years ago
Does anyone have a good German or English video or website for learning the dot product of matrices?
@deeplizard 5 years ago
I elaborate more on this in the corresponding blog in the section "a note about the usage of the dot product": deeplizard.com/learn/video/YRhxdVk_sIs
@freakphysics 6 years ago
I love you girl, we should go out.