
What is Back Propagation 

IBM Technology
844K subscribers · 58K views
Published: Sep 8, 2024

Comments: 46
@vencibushy · 7 months ago
Back propagation is to neural networks what negative feedback is to closed-loop systems. The understanding comes pretty much naturally to people who have studied automation and control engineering. However, many articles tend to mix things up, in this case back propagation and gradient descent. Back propagation is the process of passing the error back through the layers and using it to recalculate the weights. Gradient descent is the algorithm used for the recalculation; there are other algorithms for recalculating the weights.
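To make that distinction concrete, here is a minimal sketch (illustrative only, not code from the video) of a single linear neuron trained with squared-error loss. Backpropagation produces the gradients; the gradient-descent step is a separate function that consumes them, and any other optimizer could consume the same gradients.

```python
# Minimal sketch: backpropagation computes the gradients; gradient
# descent is one (of several) ways to turn them into weight updates.
# Single linear neuron y = w*x + b with loss L = 0.5*(y - y_true)^2.

def backprop(w, b, x, y_true):
    y_pred = w * x + b              # forward pass
    dL_dy = y_pred - y_true         # dL/dy_pred
    grad_w = dL_dy * x              # chain rule: dL/dw = dL/dy * dy/dw
    grad_b = dL_dy                  # chain rule: dL/db = dL/dy * dy/db
    return grad_w, grad_b

def gradient_descent_step(w, b, grad_w, grad_b, lr=0.1):
    # The optimizer is a separate choice; momentum, Adam, etc.
    # could consume these same gradients instead.
    return w - lr * grad_w, b - lr * grad_b

w, b = 0.0, 0.0
for _ in range(100):
    gw, gb = backprop(w, b, x=2.0, y_true=4.0)
    w, b = gradient_descent_step(w, b, gw, gb)
print(w, b)  # w*2 + b approaches 4.0
```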
@Kiera9000 · a year ago
Thanks for getting me through my exams; the script from my professor helps literally nothing in understanding deep learning. Cheers mate
@ca1790 · 2 months ago
The gradient is passed backward using the chain rule from calculus. The gradient is just a multivariable form of the derivative. It is an actual numerical quantity for each "atomic" part of the network; usually a neuron's weights and bias.
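A hypothetical two-layer example in NumPy shows this at work; every backward line multiplies the incoming gradient by one local derivative, which is exactly the chain rule, layer by layer. All names and shapes here are assumptions made for illustration.

```python
import numpy as np

# Hypothetical two-layer network, for illustration only.
rng = np.random.default_rng(0)
x = rng.normal(size=(1, 3))               # one input sample
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
y_true = np.array([[1.0]])

# Forward pass
h = np.tanh(x @ W1 + b1)                  # hidden activations
y = h @ W2 + b2                           # network output
loss = 0.5 * np.sum((y - y_true) ** 2)

# Backward pass: each step applies one local derivative (chain rule)
dy = y - y_true                           # dL/dy
dW2 = h.T @ dy                            # gradient for layer-2 weights
db2 = dy.sum(axis=0)                      # gradient for layer-2 bias
dh = dy @ W2.T                            # error propagated to hidden layer
dz = dh * (1 - h ** 2)                    # through tanh: tanh'(z) = 1 - tanh(z)^2
dW1 = x.T @ dz                            # gradient for layer-1 weights
db1 = dz.sum(axis=0)                      # gradient for layer-1 bias
```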
@anant1870 · a year ago
Thanks for this Great explanation MARK 😃
@Mary-ml5po · a year ago
I can't get enough of your brilliant videos. Thank you for making what seemed complicated to me before easy to understand. Could you please post a video about loss functions and gradient descent?
@im-Anarchy · a year ago
What did he even teach, actually?
@Zethuzzz · 5 months ago
Remember the chain rule you learned in high school? Well, that's what is used in backpropagation.
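In symbols, for a single neuron with pre-activation z = wx + b, activation y = σ(z), and loss L, that chain rule reads:

```latex
\frac{\partial L}{\partial w}
  = \frac{\partial L}{\partial y}\cdot\frac{\partial y}{\partial z}\cdot\frac{\partial z}{\partial w}
  = \frac{\partial L}{\partial y}\,\sigma'(z)\,x,
\qquad z = wx + b,\quad y = \sigma(z)
```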
@RadiantNij · 13 days ago
Great work, so easy to understand
@hamidapremani6151 · 4 months ago
Brilliantly simplified explanation for a fairly complex topic. Thanks, Martin!
@hashemkadri3009 · 4 months ago
marvin u mean, smh
@sweealamak628 · 4 months ago
Thanks Mardnin!
@sakshammishra9232 · a year ago
Lovely man... excellent videos, all complexities eliminated. Thanks a lot 😊
@pleasethink4789 · a year ago
Hi Marklin! Thank you for such a great explanation. (btw, I know your name is Martin. 😂 )
@KamleshSingh-um9jy · 2 months ago
Excellent session, thank you!!
@harrybellingham98 · 7 days ago
It probably would have been good to mention that this is supervised learning, as it would not translate well for a beginner trying to apply it to other forms of NNs.
@msatyabhaskarasrinivasacha5874 · 4 months ago
Awesome.....awesome superb explanation sir
@sahanseney134 · 2 months ago
cheers Marvin
@mr.wiksith5091 · 17 days ago
thank youu
@stefanfueger3487 · a year ago
Wait ... the video has been online for four hours ... and still no question about how he manages to write mirrored?
@Aegon1995 · a year ago
There’s a separate video for that
@itdataandprocessanalysis3202
🤦‍♂
@IBMTechnology · a year ago
Ha, that's so true. Here you go: ibm.biz/write-backwards
@tianhanipah9783 · 7 months ago
Just flip the video horizontally
@rishidubey8745 · 3 months ago
thanks marvin
@boeng9371 · 7 months ago
In IBM we trust ✊😔
@somethingdifferent1910 · a month ago
At 2:20 when he was talking about biases, do they have any relation to hyperparameters or regularization?
@l_a_h797 · 4 months ago
5:36 Actually, convergence does not necessarily mean the network is able to do its task reliably. It just means that its reliability has reached a plateau. We hope that the plateau is high, i.e. that the network does a good job of predicting the right outputs. For many applications, NNs are currently able to reach a good level of performance. But in general, what is optimal is not always very good. For example, a network with just 1 layer of 2 nodes is not going to be successful at handwriting recognition, even if its model converges.
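One common way to make "reached a plateau" precise, assuming a held-out validation set (a sketch, not something shown in the video): declare convergence when the validation loss stops improving, and judge task quality separately.

```python
def has_plateaued(val_losses, patience=5, tol=1e-4):
    """Convergence in this sense only means 'no recent improvement'
    in validation loss, not that the model performs its task well."""
    if len(val_losses) <= patience:
        return False
    best_recent = min(val_losses[-patience:])   # best loss in the last `patience` epochs
    best_before = min(val_losses[:-patience])   # best loss before that window
    return best_before - best_recent < tol      # no meaningful improvement
```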
@mateusz6190 · 4 months ago
Hi, you seem to have good knowledge of this, so may I ask you a question? Do you know if neural networks would be good at recognizing handwritten math expressions (digits, operators, variables, all elements separated so they can be recognized individually)? I need a program that does that, and I tried a neural network; it is good on images from the dataset but terrible on anything from outside the dataset. Would you have any tips? I would be really grateful.
@ashodapakian2788 · 4 months ago
Off topic: what drawing-board setup do these IBM videos use? It's really great.
@boyyang1290 · 4 months ago
I'd like to know, too.
@boyyang1290 · 4 months ago
I found it: he is drawing on glass.
@idobleicher · 5 months ago
A great video!
@1955subraj · 10 months ago
Very well explained 🎉
@neail5466 · a year ago
Thank you for the information. Could you please tell me whether BP is only available and applicable to supervised models, since we have to have a precomputed result to compare against? Certainly, unsupervised models could also use this in theory, but would it have a positive effect? Additionally, how is the comparison actually performed, especially for information that can't be quantised?
@Ellikka1 · 4 months ago
When computing the loss function, how is the "correct" output given? Is it training data that is then compared against another data file with the desired outcomes? In the example of "Martin", how does the neural network get to know that your name was not Mark?
@guliyevshahriyar · a year ago
Thank you!
@rigbyb · a year ago
Great video! 😊
@jaffarbh · a year ago
Isn't back propagation used to lower the computation needed to adjust the weights? I understand that doing so in a "forward" fashion is much more expensive than in a "backward" fashion.
@the1111011 · a year ago
Why didn't you explain how the network updates the weights?
@mohslimani5716 · a year ago
Thanks, but I still need to understand how it technically happens.
@AnjaliSharma-dv5ke · a year ago
It's done by calculating the derivatives of the y-hats with respect to the weights, working backward through the network and applying the chain rule of calculus.
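In symbols, with η the learning rate, each weight is then nudged against its own gradient:

```latex
w_{ij} \;\leftarrow\; w_{ij} - \eta\,\frac{\partial L}{\partial w_{ij}}
```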
@Justme-dk7vm · 4 months ago
ANY CHANCE TO GIVE 1000 LIKES ???😩
@tsvigo11_70 · a month ago
A neural network cannot be connected by weights; this is nonsense. It can be connected by synapses, that is, by resistances. The way the network learns is incredibly tricky: not only does it have to remember the correct result, which is not easy in itself, but it has to keep remembering the correct result while remembering a new correct result. This is what distinguishes a neural network from a fishing net.
@gren509 · a month ago
Save yourself 8 minutes.. It's a FEEDBACK loop - FFS !
@vservicesvservices7095 · a month ago
Using more unexplained terminology to explain the terminology you are trying to explain is a source of confusion. 😂 Thumbs down.