Mikael Laine
2.9K subscribers · 70K views
Published 11 Sep 2024
Comments: 116
@shanruan2524 2 years ago
The best backpropagation explainer on YouTube we have in 2022
@Orthodoxforever71 3 years ago
Hey! This is the best explanation I have ever seen on the internet. I was trying to understand these concepts by watching videos, etc., but without positive results. Now I understand how these networks function and their structure. I have forgotten my calculus, and here you explain the chain rule in very simple words anyone can understand. Thank you for these great videos and God bless.
@chinmay6144 1 year ago
I can't thank you enough. I paid so much money, taking out a loan for one course, but did not understand it there. Thank you for your help.
@qzwwzt 6 years ago
Good job! This is a tough subject and you managed to simplify the explanation as much as possible. I did the Andrew Ng course on Coursera, and his explanation was difficult to understand even for me, who had previous knowledge of the maths involved. Now I think you should implement this algorithm in Python, for example.
@alexandrefabretti1174 3 years ago
Hello Mikael, finally someone who is able to explain complexity through simplicity. Thank you very much for revealing secrets hidden by most other videos.
@allenjerjiss3163 4 years ago
You guys know that you can just turn up the volume, right? Thank you Mike for breaking it down so clearly!
@originalandfunnyname8076 1 year ago
Amazing, I spent hours trying to understand this from different sources and now I think I finally understand. Thank you!
@vunpac5 4 years ago
Hi Mike, I want to thank you for this great explanation. I was really struggling to grasp the concept. No one else went quite as far in depth.
@rohitd7834 4 years ago
I had been trying to understand this for so long! You made my day.
@JoeBurnett 11 days ago
Fantastic video! I wish you were still making videos on the subject of AI with this teaching method.
@ekoprasetyo3999 2 years ago
I struggled with this subject for weeks; now I have a better understanding after watching this video.
@farenhite4329 4 years ago
Knew what it was but never understood why. Thank you for this video!
@turkirob 4 years ago
Absolutely the best explanation of backpropagation. Thank you, thank you, thank you!
@murat2073 2 years ago
thanks man. You are a hero!
@gillesgardy8957 4 years ago
Thank you so much Mikael. Extremely clear. A good foundation before going further!
@denisvoronov6571 3 years ago
That's the best explanation I have seen. Thanks a lot!
@georgeruellan 3 years ago
Amazing explanation but the audio is painful to listen to
@nemuccio1 4 years ago
Great! Finally you understand something. Without a hidden layer it is a bit difficult to understand how to apply backpropagation. But the thing that no tutorial explains is this, and you would be the right person to teach us. I use Keras, but Python would also be good: "How to create your own classification or regression dataset". Thank you.
@mikaellaine9490 4 years ago
Thank you for your comment! At the end of the video the generalized case is briefly explained. If you follow the math exactly as in the single-weight case, you will see it works out. If I find time, I may make a video about that, but it might be a bit redundant.
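On the dataset question above, here is a minimal sketch of generating toy regression and classification data with plain NumPy; the sizes, coefficients, and noise level are arbitrary assumptions, not anything from the video.

import numpy as np

rng = np.random.default_rng(0)

# Toy regression dataset: targets follow y = 3x + 1 plus a little noise
X_reg = rng.uniform(-1.0, 1.0, size=(100, 1))
y_reg = 3.0 * X_reg[:, 0] + 1.0 + rng.normal(scale=0.1, size=100)

# Toy binary-classification dataset: two Gaussian blobs around +1 and -1
X_pos = rng.normal(loc=1.0, scale=0.5, size=(50, 2))
X_neg = rng.normal(loc=-1.0, scale=0.5, size=(50, 2))
X_clf = np.vstack([X_pos, X_neg])
y_clf = np.array([1] * 50 + [0] * 50)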
@xyzaex 3 years ago
Simply outstanding, clear and concise explanation. I wonder how people with no calculus background learn deep learning?
@faisalriazbhatti 3 years ago
Thanks Mikael, simplest explanation. You made my day mate.
@gulamm1 3 months ago
The best explanation.
@klyntonh7168 2 years ago
Thank you so much! Best explanation I’ve seen ever.
@imed6240 3 years ago
Wow, so far the best explanation I've found. So simple, thanks a lot!
@user-vi2fp6dl7b 6 months ago
Good job! Thank you very much!
@flavialan4544 3 years ago
It is one of the best explanations of this subject! Thanks so much!
@joelmun2780 1 year ago
totally underrated video. love it.
@muhammeddal9661 5 years ago
Great job Mikael, you explained it very clearly. Thank you.
@AleksanderFimreite 3 years ago
I understand the logic and the thoughts behind this concept. Unfortunately I just can't wrap my head around how to calculate it with these kinds of formulas. But if I saw a code example I would understand it without an issue. I don't know why my brain works like that. But mathematical formulas are mostly useless to me =(
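For readers in the same situation, here is a minimal Python sketch of the single-weight case discussed throughout these comments (input i = 1.5, initial weight w = 0.8, desired output y = 0.5 all appear in the thread; the learning rate of 0.1 is an assumption, not a value from the video).

# Single neuron, single weight, no bias, no activation: a = i * w
# Cost: C = (a - y)^2, so by the chain rule dC/dw = 2 * (a - y) * i
i = 1.5   # input
w = 0.8   # initial weight
y = 0.5   # desired output
lr = 0.1  # learning rate (assumed)

for step in range(50):
    a = i * w                # forward pass
    C = (a - y) ** 2         # cost
    dC_dw = 2 * (a - y) * i  # gradient of the cost with respect to the weight
    w -= lr * dC_dw          # gradient-descent update

print(w)  # converges towards y / i = 0.333...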
@JoeWong81 5 years ago
great explanation Mikael thanks a lot
@dabdas100 4 years ago
Finally I understand this! Thanks.
@trevortyne534 1 year ago
Excellent
@talhanaeemrao4305 9 months ago
There are some videos that you wish would never end. This video is among the top of those.
@datasow9493 6 years ago
Thank you, it really helped me understand the principle behind backpropagation. In the future I would like to see how to implement it with layers that have 2 or more neurons; how to calculate the error for each neuron in that case, to be precise.
@ahmidiedu7112 2 years ago
Good Job! …. Thanks
@hasanabdlghani5244 4 years ago
It's not easy!! You made it easy. Thanks a lot.
@andrew-cb6lh 1 year ago
very well explained👍
@wilfredomartel7781 1 month ago
great video!
@jarrodhaas 2 years ago
good stuff! a clear, simple starting case to build on.
@cvsnreddy1700 5 years ago
Extremely good and easy explanation
@obsidianhead 1 year ago
Thank you, sir. Helped a smooth brain understand.
@chrischoir3594 3 years ago
"As per usual"? Um, what is usual?
@stuartallen2001 4 years ago
Thank you for this video it really helped me!
@petermpeters 6 years ago
something happened to the sound at 8:21
@jayanttanwar4703 4 years ago
You got that right Peter Peters Peterss
@garychap8384 4 years ago
Don't you hate it when the lecturer goes outside for a cigarette in the middle of a lecture... but continues teaching through the window. Yes, we get it... your powerpoint remote works through glass! But WE CAN'T HEAR YOU! XD
@mikaellaine9490 4 years ago
Yes, sorry about that!
@BrandonSLockey 3 years ago
@@garychap8384 LMFAO
@goksuceylan8844 3 years ago
Peter Peters Peterss Petersss
@user-jy5pu6bg5p 2 months ago
What about when we have an activation function like ReLU, etc.?
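A brief note on that case, assuming the same single-weight setup as in the video: with an activation function f, the neuron becomes a = f(i·w), and the chain rule simply picks up one extra factor, dC/dw = 2(a − y) · f′(i·w) · i. For ReLU, f′(z) is 1 when z > 0 and 0 when z ≤ 0, so the gradient either passes through unchanged or is zeroed out.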
@raaziyahshamim4761 9 months ago
What software did you use to write the stuff? Good lecture.
@prof.meenav1550 2 years ago
good effort
@newcoder7166 5 years ago
Excellent job! Thank you!
@jancsi-vera 4 months ago
Wow, thank you
@thechen6985 5 years ago
Thank you very much. This helped a lot. I now understand the lecture given to me.
@TheStrelok7 3 years ago
Thank you very much best explanation ever!
@dilbreenibrahim4128 3 years ago
Please, how can I update the bias? Can someone answer me?
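A short answer, assuming a neuron of the form a = i·w + b as in most textbook setups (the video's example has no bias): by the chain rule, ∂C/∂b = ∂C/∂a · ∂a/∂b = 2(a − y) · 1, so the bias is updated exactly like a weight whose input is always 1: b ← b − learning_rate · 2(a − y).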
@onesun3023 4 years ago
Where does the '-1' come from? It looks like it is in the position of y but y is 0.5. Not -1.
@onesun3023 4 years ago
Oh, I see. The 2 was distributed to it but not to a.
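Worked out with the thread's numbers: the desired output is y = 0.5, so 2(a − y) = 2a − 2y = 2a − 2·0.5 = 2a − 1. The 2 multiplies y as well; it just gets folded into the constant term.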
@RagibShahariar 4 years ago
Thank you Mikael for this concise lecture. Can you share a lecture with the cost function of logistic regression implemented in a neural network?
@atlantaguitar9689 1 year ago
At 7:53, what are the values of a and y that give the parabola a minimum around 0.3334, when for a desired y value of 0.5 the value of "a" would have to be 0.5? That is, the minimum of the cost function occurs when a is 0.5, so why has its minimum been relocated to 0.3334 in the graph?
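One likely reading of that plot: if the parabola at 7:53 shows the cost as a function of the weight rather than of the activation, i.e. C(w) = (i·w − y)² with i = 1.5 and y = 0.5, then its minimum sits at w = y / i = 1/3 ≈ 0.3334, which is exactly where a = i·w reaches the desired 0.5.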
@cachaceirosdohawai3070 5 months ago
Any help dealing with multi-neuron layers? The formulas at 11:19 look different for multi-neuron layers.
@mikaellaine9490 5 months ago
Check my channel for another example with multiple layers.
@danikhan21 4 years ago
Good stuff. Thanks for contributing
@knowhowww 5 years ago
simple and neat! thanks!
@cdxer 5 years ago
Do you move back a layer after getting w_1 = 0.59, or after getting w_1 = 0.333?
@_FLOROID_ 3 years ago
What changes in the equation if I have more than just 1 neuron per layer, though? Especially since they are cross-connected via more weights, I don't know exactly how to deal with this.
@3r1kz 4 months ago
I don't know anything about this subject, but I was understanding it until the rate-of-change function. Probably a stupid question, but why is there a 2 in the rate-of-change function, as in 2(a-y)? Is this 2 * (1.2 - 0.5)? Why the 2? I can't really see the reference to y = x^2, but that's probably just me not understanding the basics. Maybe somebody can explain for a dummy like me. Wait, maybe I understand my mistake: the result should be 0.4, right? So it's actually 2(a-1), because otherwise multiplication goes first and you end up with 1.4?
@joemurray1 3 months ago
The derivative of x^2 (x squared) is 2x. The cost function C is the square of the difference between actual and desired output i.e. (a-y)^2. Its derivative (slope) with respect to a is 2(a-y). We don't use the actual cost to make the adjustment, but the slope of the cost. That always points 'downhill' to zero cost.
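Spelled out with the numbers quoted in this thread (i = 1.5, a = 1.2, y = 0.5), the full chain rule for the weight is dC/dw = dC/da · da/dw = 2(a − y) · i = 2(1.2 − 0.5) · 1.5 = 2.1.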
@mehedeehassan208 2 years ago
How do we determine which way to go? I mean, the direction of the change in weight, if we are on the left side of the curve?
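The sign takes care of itself: the update rule is w ← w − learning_rate · dC/dw. On the left side of the parabola the slope dC/dw is negative, so subtracting it increases w and moves it right towards the minimum; on the right side the slope is positive and w decreases.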
@ksrajavel 4 years ago
Cool. Thanks Mikael!!!
@oposicionine4074 1 year ago
There is one thing I don't understand. Suppose you have two inputs: for the first input the perfect value is w1 = 0.33, but for the second input the perfect value would be w1 = 0.67. How would you compute the backpropagation to get the perfect value to minimize the cost function?
@accadia1983 1 year ago
Run multiple experiments with different inputs and measure the outcome: if the outcome is perfect, there is no learning. How would you answer the question?
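A sketch of how this usually plays out, assuming the two "perfect" weights come from two training pairs: gradient descent minimizes the cost averaged over all examples, so the weight settles at a compromise between the per-example optima rather than at either one. The pairs below are illustrative assumptions, chosen so that their individual optima are 0.333 and 0.667.

# Two training pairs that individually prefer different weights
data = [(1.5, 0.5), (1.5, 1.0)]  # (input, target): per-example optima w = 0.333 and w = 0.667
w, lr = 0.8, 0.1

for step in range(200):
    grad = 0.0
    for i, y in data:
        a = i * w
        grad += 2 * (a - y) * i  # gradient of (a - y)^2 for this example
    grad /= len(data)            # average gradient over the batch
    w -= lr * grad

print(w)  # ends up near 0.5, the minimizer of the average squared error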
@safiasafia9950 5 years ago
Thanks sir, it is a very good explanation.
@MukeshArethia 5 years ago
very nice explanation!
@semtex6412 8 months ago
At 2:40, Mikael mentions "...and the error therefore, is 0.5". I think he meant "and the *desired output*, therefore, is 0.5"? A slight erratum, perhaps?
@semtex6412 8 months ago
because otherwise, the cost (C) is 0.49, not 0.5
@hikmetdemir1032 1 year ago
What if the number of neurons in the layer is more than one?
@scottk5083 5 years ago
Thank you!
@bubblesgrappling736 4 years ago
Nice video. I'm a little confused about which letters stand for which values: is a the value from the activation function, or just the output of any given neuron? Is C the loss/error gradient? And which of these values qualifies as the gradient?
@mikaellaine9490 4 years ago
a = activation (with or without an activation function). C = loss/error/cost (these are all the same thing; the naming varies between textbooks and frameworks). WRT gradients: this is a 1-dimensional case for educational/amusement purposes. In actual networks you would have more weights, therefore more dimensions, and you would use the term 'gradient' or 'Jacobian', depending on how you implement it etc. I have an example with two dimensions here: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-Bdrm-bOC5Ek.html
@FPChris 2 years ago
No one ever says, when multiple layers and multiple outputs exist and the weights get adjusted: do you do numerous forward passes, one after each individual weight is adjusted? Or do you update ALL the weights and THEN do a single new forward pass?
@mikaellaine9490 2 years ago
Yeah, single forward pass (during which gradients get stored, see my other videos) followed by a single backpropagation pass through the entire network, updating all weights by a bit.
@FPChris 2 years ago
@@mikaellaine9490 Thanks. Much appreciated.
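A schematic Python sketch of the pattern described in the answer above: one forward pass (storing the intermediate activations), one backward pass computing the gradients for all weights, then a single update of every weight. The tiny two-layer network and all the numbers are assumptions for illustration only.

# Two layers of one neuron each, no bias or activation: a1 = i * w1, a2 = a1 * w2
i, y = 1.5, 0.5
w1, w2 = 0.8, 0.6
lr = 0.05

for step in range(100):
    # single forward pass (keep the intermediate activation a1)
    a1 = i * w1
    a2 = a1 * w2
    # single backward pass: gradients for ALL weights from that one forward pass
    dC_da2 = 2 * (a2 - y)
    dC_dw2 = dC_da2 * a1
    dC_dw1 = dC_da2 * w2 * i
    # then update all weights at once, before the next forward pass
    w2 -= lr * dC_dw2
    w1 -= lr * dC_dw1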
@dennistsai5348 5 years ago
Would you please talk about the situation with an activation function (sigmoid)? It's a little bit confusing for me. Thanks a lot!
@mikaellaine9490 4 years ago
There is now a video about this: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-CoPl2xn2nmk.html
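For anyone who cannot open the link, a minimal sketch of how a sigmoid changes the single-weight example (same i, w, and y as elsewhere in the thread; the learning rate is an assumption): the chain rule just gains one extra factor, the derivative of the sigmoid.

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

i, w, y, lr = 1.5, 0.8, 0.5, 0.5

for step in range(200):
    z = i * w
    a = sigmoid(z)                   # forward pass with the activation
    da_dz = a * (1.0 - a)            # sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z))
    dC_dw = 2 * (a - y) * da_dz * i  # chain rule: dC/da * da/dz * dz/dw
    w -= lr * dC_dw                  # w heads towards 0, where sigmoid(i * w) = 0.5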
@vijayyarabolu9067 5 years ago
Thanks Laine.
@bubblesgrappling736 4 years ago
Also, I'm not really able to find anywhere what delta signifies here, only material on the delta rule.
@puppergump4117 2 years ago
If I had different numbers of neurons per layer, would the formula at 11:30 be changed to (average of the activations of the last layer) * (average of the weights of the next layer) ... * (average cost of all outputs)?
@TheRainHarvester 1 year ago
From what I read, yes. But distributing the error can be varied too.
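For comparison, the usual textbook formulation sums rather than averages: with several neurons, the chain rule adds up the contribution of every path the error can flow back through. For a weight w_jk feeding neuron j from neuron k, ∂C/∂w_jk = a_k · δ_j, where for a hidden neuron δ_j is the sum over the neurons m of the next layer of w_mj · δ_m (times the activation derivative, if there is one). Each neuron's error signal is therefore a weighted sum, not an average, of the error signals of the neurons it feeds into.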
@coxixx 5 years ago
It wasn't for dummies. It was for scientists.
@mukonazwotheramafamba3549 4 years ago
he is the dummy
@snippletrap 4 years ago
It's for anyone who has taken calculus.
@rafaelramos6320 8 months ago
Hi, a = i * w, so 1.5 * 2(a - y) = 4.5 * w - 1.5. What happened to the y?
@LaurentPrat 6 months ago
y is given: it is the target value, here 0.5. So 1.5*2(1.2-0.5) = 2.1, which equals 4.5*0.8 - 1.5.
@xc5838 4 years ago
Can you please tell me how you graphed that cost function? I plotted this cost function on my calculator and I am getting a different polynomial. I graphed ((x*0.8)-0.5)**2. Thanks.
@mikaellaine9490 4 years ago
Hi and thank you for your question. I've used Apple's Grapher for all the plots. It should look like in the video. Your expression ((x*0.8)-0.5)**2 is correct.
@maravilhasdobrasil4498 2 years ago
Maybe this is a dumb question, but how do you go from 2(a-y) to 2a-1? (7:17)
@maravilhasdobrasil4498 2 years ago
Oh, I got it: 0.5 is the desired output, and 0.5 * 2 = 1.
@SureshBabu-tb7vh 5 years ago
Thank you
@vudathabhavishya9629 7 months ago
Can anyone explain how to plot 2(a-y) and C = (a-y)^2, with i = 1.5?
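One way to do it, assuming both curves are drawn as functions of the weight w with i = 1.5 and y = 0.5 (a sketch, not the exact commands used in the video):

import numpy as np
import matplotlib.pyplot as plt

i, y = 1.5, 0.5
w = np.linspace(0.0, 1.0, 200)
a = i * w

plt.plot(w, (a - y) ** 2, label="C = (a - y)^2")
plt.plot(w, 2 * (a - y), label="dC/da = 2(a - y)")
plt.xlabel("w")
plt.legend()
plt.show()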
@jameshopkins3541 4 years ago
what about code?
@theyonly7493 4 years ago
If all I want is a = 0.5, with a = i · w, then w = a / i = 0.3333. One simple division, no differential calculus, no gradient descent :-)
@mikaellaine9490 4 years ago
Brilliant. Now generalize that to any sized layer and any number of layers. I suppose you won't need bias units at all. You have just solved deep learning. Profit.
@user-th7gd7ge4p 1 year ago
It was more or less comprehensible until the "mirrored 6" character appeared with no explanation of what it was, what it was called, and why it was there. So let's move on to another video on backpropagation...
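For the record, that "mirrored 6" is most likely ∂, the partial-derivative symbol: ∂C/∂w is read "the partial derivative of the cost C with respect to the weight w", i.e. how much C changes when only w is nudged.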
@hakankosebas2085 3 years ago
What about 2D input?
@mikaellaine9490 3 years ago
There is a video for 2d input: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-Bdrm-bOC5Ek.html
@shameelfaraz 4 years ago
Suddenly, I feel depressed... around 8:20
@bettercalldelta 2 years ago
4:36 me watching this in 8th grade: bruh
@k1b0rg_1 2 years ago
The word "probably" means that there are people who don't know this (you, for example).
@NavySturmGewehr 2 years ago
During editing do you not notice how much you lip smack? Makes it so hard to listen to. Otherwise, thank you, the content is helpful.
@vast634 3 years ago
7:04: 2(1.5 * w) - 1 = 2(1.5 * 0.8) - 1 = 1.4, not 1.5
@thamburus7332 5 months ago
@dmitrikochubei3569 4 years ago
Thank you!