I've always struggled to understand pooling and this to-the-point explanation was the missing piece in the puzzle. I cannot thank you guys enough for the great work and taking the time to explain everything in so much detail. I owe so much of my knowledge of Deep learning to this channel
You are such an awesome teacher! I am a medical doctor with zero background in ML, and your playlists are my go-to place to grasp concepts before I dive in deep. I'm grateful. Love from Zimbabwe!
This is the best channel for machine learning on YouTube! Thank you so much, you really helped me out when I was studying for my exams. Keep up the good work!
Having watched your great explanatory videos on CNN and Zero padding, I am actually going to give a thumbs up on every video of yours I see before I even start watching! :)
Hey, first of all, thank you for uploading this series; secondly, "Deeplizard" sounds cool and unorthodox; and lastly, I liked the way you structured this entire series: short and crisp, yet easy to understand, with a lot to learn for a newbie like me. Keep up the good work.
You made something that is supposed to be complicated and difficult... easy. Mind making a guide on quantum computing next? xD Fantastic work! Thank you
I don't think I've ever seen a YouTube channel that so beautifully sums up DL/ML concepts in a way that both idiots and master coders can understand. I am genuinely disappointed that I didn't find your channel before I spent ages on reddit/stackoverflow! Hahah. +1 Sub, keep up the good work from all of us here in the comments!
Machine Learning / Deep Learning Tutorials for Programmers playlist: ru-vid.com/group/PLZbbT5o_s2xq7LwI2y8_QtvuXZedL6tQU Keras Machine Learning / Deep Learning Tutorial playlist: ru-vid.com/group/PLZbbT5o_s2xrwRnXk_yCPtnqqo4_u2YGL
I have a question. During the video on zero padding, you indicated that padding was useful to maintain the size of the original matrix. In your example in this video, you include padding='same' on both of your Conv2D layers. But then you include a MaxPooling2D layer, which cuts the matrix down from 20x20 to 10x10. This seems to negate or contradict the benefits of padding='same' on the Conv2D layers. Please explain why keeping the original size of the matrix is good for the Conv2D layers, while reducing the original size of the matrix is good for the MaxPooling2D layer. Thanks!
When doing a convolution operation, if not using padding, then the data at the edges of the images will be completely thrown away and lost. To prevent this data loss, we use padding. Max Pooling, on the other hand, will indeed reduce the image size, but it does not throw data away. The original data from the image is used in the pooling operation to create the lower resolution image. Let me know if this makes sense.
@@deeplizard I understand the reason for the padding (to not lose data), but I'm not sure I understand your comment that pooling "does not throw data away". Given a 2x2 filter, it looks at 4 items in the image, uses the max value, and throws the other 3 away. So we go out of our way (padding) to lose as little as possible in the Conv2D operation, just to lose 75% of the image in the pooling operation. Everyone does it this way, so I know it is right. I simply can't wrap my mind around why this is not an issue.
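The exchange above may be easier to see in code. This is a minimal pure-Python sketch of 2x2 max pooling with stride 2 (no Keras needed): every input value is read by the operation, but only the max of each 2x2 block survives in the output, which is what halves the 20x20 map to 10x10 in the video's example.

```python
def max_pool_2x2(image):
    """Apply 2x2 max pooling with stride 2 to a 2D list of numbers."""
    rows, cols = len(image), len(image[0])
    pooled = []
    for r in range(0, rows, 2):
        pooled_row = []
        for c in range(0, cols, 2):
            # Every one of the 4 values in the block is examined...
            block = [image[r][c],     image[r][c + 1],
                     image[r + 1][c], image[r + 1][c + 1]]
            # ...but only the largest is kept in the output.
            pooled_row.append(max(block))
        pooled.append(pooled_row)
    return pooled

image = [
    [1, 3, 2, 9],
    [4, 8, 0, 5],
    [7, 2, 6, 1],
    [3, 0, 4, 8],
]
print(max_pool_2x2(image))  # [[8, 9], [7, 8]]
```

So the 4x4 input becomes 2x2: the resolution drops, but each output value was computed from the original data rather than from a cropped version of it, which is the distinction the reply above is drawing.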
Great video! I have a question though. Is it a standard procedure to have a max pooling layer after every convolution layer? Furthermore, how does one decide whether to put a max pooling operation after a conv layer and in which cases should we not put a max pooling layer after a conv layer?
Some questions: does it make sense to have a grid of Y x Z where Y ≠ Z, and/or a stride different from either of those two? And what happens at the edges of the image if we don't have a full block (remainders)? Do we still take the max?
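A sketch addressing both questions above, assuming Keras-like semantics (these exact behaviors are an assumption here, not something stated in the video): the pooling window can be non-square, the stride can differ from the window size, and partial edge blocks are either dropped (as with "valid" padding) or still max'ed over whatever values remain (roughly what "same" padding gives you).

```python
def max_pool(image, pool_h, pool_w, stride, keep_remainders=False):
    """Max pooling with an arbitrary pool_h x pool_w window and stride.

    keep_remainders=False drops partial edge blocks ("valid"-style);
    keep_remainders=True takes the max over whatever values remain.
    """
    rows, cols = len(image), len(image[0])
    out = []
    for r in range(0, rows, stride):
        row_out = []
        for c in range(0, cols, stride):
            # Clip the block at the image boundary.
            block = [image[i][j]
                     for i in range(r, min(r + pool_h, rows))
                     for j in range(c, min(c + pool_w, cols))]
            full = (r + pool_h <= rows) and (c + pool_w <= cols)
            if full or keep_remainders:
                row_out.append(max(block))
        if row_out:
            out.append(row_out)
    return out

image = [[1, 5, 2],
         [9, 3, 7],
         [4, 6, 8]]
# 2x2 window, stride 2 on a 3x3 image: the last row/column is a remainder.
print(max_pool(image, 2, 2, 2))                        # [[9]]
print(max_pool(image, 2, 2, 2, keep_remainders=True))  # [[9, 7], [6, 8]]
```

Nothing in the math forces Y = Z or stride = window size; square windows with matching stride are just the common convention.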
Thanks, ivzlccs! A Flatten() layer transforms the output from the previous convolutional layer into a 1D tensor so that it can be provided as input to the following Dense() layer. The two videos on learnable parameters in a CNN below may be helpful as well. There, when transitioning from the convolutional layer to the output layer, we discuss the flatten operation. ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-gmBfb6LNnZs.html ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-8d-9SnGt5E0.html
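A tiny pure-Python sketch of what the Flatten() operation described above does: it reshapes the multi-dimensional output of a conv layer into a single 1D list, so that each value can be connected to the units of the following Dense() layer. (The list-of-2D-lists input here is a stand-in for a real conv output tensor.)

```python
def flatten(feature_maps):
    """Flatten a list of 2D feature maps into one 1D list."""
    return [value
            for fmap in feature_maps   # each feature map...
            for row in fmap            # ...each row in it...
            for value in row]          # ...each value, in order.

# Hypothetical conv-layer output: two 2x2 feature maps.
conv_output = [
    [[1, 2],
     [3, 4]],
    [[5, 6],
     [7, 8]],
]
print(flatten(conv_output))  # [1, 2, 3, 4, 5, 6, 7, 8]
```

No values are changed or discarded; 2 maps x 2 x 2 = 8 inputs for the Dense layer that follows.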
@@deeplizard Can you kindly also explain what a dense layer does? The explanation I have is that it connects layers, but why would you have unconnected layers in the first place?
First of all, great video! Why is it a common tactic to keep increasing the number of filters in later layers? And why always in powers of 2 (16->32->64)? Wouldn't it make sense to have the most filters on our original picture, as it contains the most information? Also, why did you add a dense layer before the first convolutional layer?
{ "question": "Stride refers to:", "choices": [ "how many units the filter slides between each operation.", "how many operations performed on each row.", "the size of the batch the operations are applied to at a time.", "the distance between the results of the operation in the resultant matrix." ], "answer": "how many units the filter slides between each operation.", "creator": "Chris", "creationDate": "2020-02-06T05:03:54.547Z" }