
Neural Networks: Multi-Layer Perceptrons: Building a Brain From Layers of Neurons 

Jacob Schrum
18K subscribers
75K views

This video demonstrates how several perceptrons can be combined into a Multi-Layer Perceptron, a standard Neural Network model that can calculate non-linear decision boundaries and approximate arbitrary functions.
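As a hedged illustration of what "combining perceptrons" gives you (this is not code from the video; the hand-picked weights below are hypothetical), a tiny two-hidden-unit MLP can compute XOR, a non-linearly-separable function that no single perceptron can represent:

```python
import math

def sigmoid(z):
    # Logistic activation squashes the weighted sum into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def mlp_forward(x, y):
    # Hidden layer: two perceptron-like units with hand-picked (hypothetical) weights
    h1 = sigmoid(20 * x + 20 * y - 10)   # roughly "x OR y"
    h2 = sigmoid(-20 * x - 20 * y + 30)  # roughly "NOT (x AND y)"
    # Output layer: one unit combining the hidden activations
    return sigmoid(20 * h1 + 20 * h2 - 30)  # roughly "h1 AND h2", i.e. XOR

for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, y, round(mlp_forward(x, y), 3))   # prints values close to 0, 1, 1, 0
```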

Published: 21 Aug 2024

Comments: 28
@ajith.studyingmtech.atbits1512 2 years ago
Very crisp, simple explanation of neural networks - you laid a great foundation for me to build the skyscraper. Thanks a lot.
@Carlosdanielpuerto 3 years ago
Incredibly clear explanation of the concepts! Thanks a lot
@vihaankadiyan9996 4 years ago
Best content that I can find on the internet regarding MLP.
@robthorn3910 6 years ago
I think it's worth mentioning that the magic of backpropagation is the "chain rule".
@sahajshukla 5 years ago
True, I totally agree. The gradient descent approach, as a whole, is indeed very fascinating :)
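A minimal sketch of that chain rule at work (my own illustration, not from the video; the two-weight toy network and its numbers are hypothetical): multiplying the local derivatives along the path from the loss back to a weight gives the same gradient as a finite-difference check.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy network: x -> h = sigmoid(w1*x) -> y = sigmoid(w2*h), squared error against target t
x, w1, w2, t = 1.0, 0.5, -0.3, 1.0

h = sigmoid(w1 * x)
y = sigmoid(w2 * h)
error = 0.5 * (y - t) ** 2

# Chain rule: dE/dw1 = dE/dy * dy/dh * dh/dw1
dE_dy = y - t
dy_dh = w2 * y * (1 - y)   # derivative of sigmoid(w2*h) with respect to h
dh_dw1 = x * h * (1 - h)   # derivative of sigmoid(w1*x) with respect to w1
analytic = dE_dy * dy_dh * dh_dw1

# Finite-difference check on the same gradient
eps = 1e-6
y_eps = sigmoid(w2 * sigmoid((w1 + eps) * x))
numeric = (0.5 * (y_eps - t) ** 2 - error) / eps

print(analytic, numeric)   # the two values should agree closely
```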
@deadmoldable 4 years ago
21:59 This is the intuitive explanation for backprop I was looking for! If you know the weight change that would reduce the error, then simply apply that change to the output of the predecessor instead of to the weight; it's the same result. Because you know the predecessor's calculation from the forward propagation, you can pass the change through to the inputs of the predecessor. It's hard to explain, but I hope I got it.
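A hedged numerical sketch of that intuition (hypothetical values, sigmoid units; not from the video): the error signal that tells you how to change a weight is the same quantity that, multiplied by the weight instead of the activation, tells you how much change to hand back to the predecessor's output.

```python
h = 0.6          # output of the predecessor (hidden) neuron
w = 0.8          # weight from that neuron to the output neuron
y, t = 0.7, 1.0  # actual and target output of the output neuron (sigmoid unit)

delta = (y - t) * y * (1 - y)  # error signal at the output neuron
grad_w = delta * h             # how to change the weight w
grad_h = delta * w             # how the error responds to a change in h:
                               # this is what gets passed back to the predecessor
print(grad_w, grad_h)
```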
@tusharsingh7438 4 years ago
This is the best video on MLP
@retskcirt69 3 years ago
Excellent video, very well explained
@hackein9435 3 years ago
Finally I found it after 2 days of searching.
@SEOMEDIABOT 4 years ago
Well explained. Simple English.
@saurabh1chhabra 4 years ago
I'm happy to be the 10kth subscriber
@JacobSchrum 4 years ago
🎉
@yasincoskun4400 2 years ago
Thanks a lot
@abdullhaseeb4157 3 years ago
How did you solve for h1 and h2? I couldn't get my head around that math. For the x in the sigmoid, which value of x did you use, 0 or 1? And also, what are the values of x and y?
@JacobSchrum 7 months ago
Refer to my video on simple perceptrons: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-aiDv1NPdXvU.html
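In short, each hidden unit is computed exactly like a single perceptron: the sigmoid of a weighted sum of the inputs x and y plus a bias. A hedged sketch with made-up weights (the video uses its own specific values); the inputs here are the coordinates of the point being classified, e.g. the corner (0, 1):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Inputs: the coordinates of the example being classified (hypothetical choice)
x, y = 0.0, 1.0

# Hypothetical weights and biases; substitute the ones used in the video
h1 = sigmoid(0.5 * x + 0.5 * y - 0.2)    # hidden unit 1: sigmoid of its weighted sum
h2 = sigmoid(-0.4 * x + 0.9 * y + 0.1)   # hidden unit 2: same recipe, different weights
print(h1, h2)
```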
@debasismohanty7552 4 years ago
Can neural networks be used in stock data analysis?
@turbolader6734 3 years ago
Best
@lisali6205 2 years ago
genius
@sgrimm7346 1 year ago
Self-organizing multi-linear networks... no backprop, no calculus, no bias weight, and in most networks, no special activation function. Most of the networks that I build use this method.
@funlearninge-tech4392 4 years ago
Come on, backpropagation is not that complex; just try to add it.
@adhirajmajumder 4 years ago
You're just like Andrew Ng lite...
@FerMJy 5 years ago
22:42 Do you not know, or do you truly believe what you are saying? Because that's the most important part of neural networks... if you don't know how to backprop, you are doomed... and I'm looking for one explanation of how it works when the previous layer has more than 1 neuron...
@sahajshukla 5 years ago
Hi Fernando, I understand your anger. But the thing is, you already have a set of labels available to you, since this is a supervised learning algorithm. So you can update the weights as W_new = W_old + a(label - y)x, where y is the obtained output and a is the learning rate. This is true for all the weights on a certain node. I believe he didn't mention it because it's quite a universal rule: you always use gradient descent or the delta learning rule. I hope this helps, cheers :)
@sahajshukla 5 years ago
If the previous layer has more than one node, you take one output node and work on it like a separate Adaline node. This can then be performed on the previous layer's nodes too. Since the weight updates follow a backward path, it is called backpropagation :)
@FerMJy 5 years ago
@@sahajshukla No you don't... you have to differentiate the calculation of the weighted sum...
@sahajshukla 5 years ago
True, that's the calculation part. You actually differentiate the error to update the weights and biases. That's exactly what I just said; it's called delta learning. The weight updates go from the nth layer to the (n-1)th layer and so on, up to the weights between the input layer and the first hidden layer.
@sahajshukla 5 years ago
@@FerMJy The error actually follows a parabolic path for gradient descent. The equation is Error^2 = (y - x_i)^2, which is a parabolic equation. So, in order to minimise this equation, you take the tangent at that point. This process itself is called gradient descent. You do it for all weights separately, for i from 1 to n.
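Putting the thread together, here is a hedged, self-contained sketch of the case being asked about, a previous layer with more than one neuron (an illustration under my own assumptions, not the video's code): each hidden neuron receives its share of the output error through its outgoing weight, and the updates move backward layer by layer.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# 2 inputs -> 3 hidden sigmoid units -> 1 sigmoid output, trained on XOR
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)   # hidden-layer weights and biases
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)   # output-layer weights and biases
lr = 1.0

for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)          # hidden activations, shape (4, 3)
    y = sigmoid(h @ W2 + b2)          # network outputs, shape (4, 1)

    # Backward pass (chain rule) for a squared-error loss
    delta_out = (y - t) * y * (1 - y)             # error signal at the output unit
    delta_hid = (delta_out @ W2.T) * h * (1 - h)  # each hidden unit gets its share of
                                                  # the blame via its outgoing weight
    # Gradient-descent updates: output layer first, then the hidden layer
    W2 -= lr * h.T @ delta_out
    b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_hid
    b1 -= lr * delta_hid.sum(axis=0)

print(np.round(y.ravel(), 2))   # typically approaches [0, 1, 1, 0]
```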
Up next
Perceptrons: The Building Blocks of Neural Networks
27:02
The Most Important Algorithm in Machine Learning
40:08
382K views
The moment we stopped understanding AI [AlexNet]
17:38
934K views
Watching Neural Networks Learn
25:28
1.2M views
Why Neural Networks can learn (almost) anything
10:30
How Deep Neural Networks Work
24:38
1.5M views
The Essential Main Ideas of Neural Networks
18:54
926K views