Thanks for the nice explanation, but I have one query. At 16:00, where you said "each output neuron gets input from every neuron across the depth of the previous layer", doesn't that make every output neuron along the depth the same?
Professor Bryce, you are a splendid teacher. I enjoy watching all your course videos; the depth of your explanations, combined with their pinpointed simplicity, is amazing. Thank you! Have a nice day!
Isn't this similar to RNNs, where a subset of the data is used for each epoch? And in a residual network, a block of layers is injected with a fresh signal, much like boosting.
Great video, I will finish the deep learning playlist; I found it really well explained. Just one question: why is it that "the gradient vector will point in the direction that increases the loss" (19:08)?
It's by definition: "The gradient at a point p gives the direction and the rate of fastest increase." So in our case, the gradient of the loss function points in the direction of fastest increase of the loss.
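You can check this numerically. A minimal sketch, using a made-up toy loss L(w) = w1^2 + 2*w2^2 (not from the video, just for illustration): a small step along the gradient raises the loss, and a step against it lowers the loss.

```python
import numpy as np

# Hypothetical toy loss: L(w) = w1^2 + 2*w2^2 (a simple bowl shape)
def loss(w):
    return w[0] ** 2 + 2 * w[1] ** 2

def grad(w):
    # Analytic gradient: dL/dw1 = 2*w1, dL/dw2 = 4*w2
    return np.array([2 * w[0], 4 * w[1]])

w = np.array([1.0, 1.0])   # current parameters
g = grad(w)                # gradient at w
step = 0.01                # small step size

up = loss(w + step * g)    # step WITH the gradient  -> loss goes up
down = loss(w - step * g)  # step AGAINST it (descent) -> loss goes down
print(loss(w), up, down)
```

This is exactly why gradient descent subtracts the gradient: moving opposite the direction of fastest increase is the locally steepest way to decrease the loss.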
I am writing a thesis on content-based image retrieval and had to understand the ResNet architecture in depth, and this is by far the most transparent explanation I've found!
I absolutely appreciate your methodic approach to this subject. You provided not only the "big picture" but also showed each frame and explained what was happening; you even took the time to do a recursive function. I am learning Java as my first high-level programming language, and stack diagrams are part of the exercises. Thank you for your work.
Hey! I happened to stumble across these videos before my final exam, and everything is starting to click. Thank you so much for your awesome stack diagrams!
I want to thank you. Thank you from Vietnam. I hope that you love what you do and achieve even more success. I also hope there will be a course on randomized algorithms in the future.