
Implementation of AND function using Perceptron Model 

ThinkX Academy
16K subscribers
63K views

Machine Learning: • Machine Learning
|----------------------------------------------------------------
Android App(Notes+Videos): play.google.co...
Facebook: / thinkxacademy
Twitter: / thinkxacademy
Instagram: / thinkxacademy
#machinelearning #perceptron #ai #artificialintelligence #neuralnetworks

Published: 21 Aug 2024

Comments: 75
@ThinkXAcademy
@ThinkXAcademy 2 years ago
Important correction: the activation function for this video is defined as follows: if y_in > 0, then f(y_in) = 1; if y_in = 0, then f(y_in) = 0; if y_in < 0, then f(y_in) = -1.
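For readers following along, here is a minimal Python sketch of that three-valued step activation (an illustration using assumed names, not code from the video):

```python
def activation(y_in):
    """Three-valued step activation from the pinned correction."""
    if y_in > 0:
        return 1
    elif y_in == 0:
        return 0
    return -1
```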
@wfkpk
@wfkpk 1 year ago
@@rohitchitte9155 He is defining a new activation function; he is not using the sigmoid function that we have seen in the other tutorials.
@jaykumarpatil4548
@jaykumarpatil4548 1 year ago
Low-effort explanation
@mango-strawberry
@mango-strawberry 4 months ago
@@wfkpk Why not use sigmoid? Any reason for that?
@150kanduladinesh3
@150kanduladinesh3 1 year ago
I think w_new = w_old + learning_rate * (expected value - predicted value) * feature, but you used the expected value directly.
@mango-strawberry
@mango-strawberry 4 months ago
Right, it should be (true - predicted).
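As a neutral note on this sub-thread: both update rules appear in textbooks, and the sketch below (assumed variable names, α chosen arbitrarily) contrasts them. The rule discussed in the video adds α·t·x only on misclassified samples, while the error-driven form uses (target − predicted); for bipolar targets both leave correctly classified rows unchanged and push the weights in the same direction otherwise.

```python
alpha = 1.0  # assumed learning rate for illustration

def update_with_target(w, b, x, t, y):
    # Rule discussed in the video: w_new = w_old + alpha * t * x_i,
    # applied only when the prediction y disagrees with the target t.
    if y != t:
        w = [wi + alpha * t * xi for wi, xi in zip(w, x)]
        b = b + alpha * t
    return w, b

def update_with_error(w, b, x, t, y):
    # Error-driven form raised in the comments: w_new = w_old + alpha * (t - y) * x_i.
    w = [wi + alpha * (t - y) * xi for wi, xi in zip(w, x)]
    b = b + alpha * (t - y)
    return w, b
```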
@fatemehiraf3998
@fatemehiraf3998 10 months ago
That was the best teaching of the perceptron that I have ever seen. Thank you so much.
@wfkpk
@wfkpk 1 year ago
In the 2nd iteration table you have written x1=1, x2=1, t=-1, but it should be x1=1, x2=1, t=1.
@RazaHussain96
@RazaHussain96 1 year ago
Why?
@slainiae
@slainiae 1 year ago
Yes, you're correct. It's just a small mistake he made, though, and everything else should still be valid.
@slainiae
@slainiae 1 year ago
@@RazaHussain96 Because it's an AND function, which means x1=1 AND x2=1 should give t=1, not t=-1.
@gqtshapha5640
@gqtshapha5640 2 years ago
Clear, informative explanation... I like it👍 I was wondering, though: does the ordering of the input layer affect the overall outcome, in terms of shifts and overall adjustments?
@ThinkXAcademy
@ThinkXAcademy 2 years ago
Do you mean the ordering of nodes within the input layer?
@user-mx7sv3mo4i
@user-mx7sv3mo4i 3 months ago
Amazing explanation!!!!
@computervisionlab2119
@computervisionlab2119 3 years ago
Hi, great video. I have a question: why don't you use a backpropagation algorithm? Also, the sigmoid function is 1/(1+e^-x); did you round this function?
@ThinkXAcademy
@ThinkXAcademy 3 years ago
Backpropagation algorithms are certainly useful, but this example works without them; for complex datasets we will definitely require backpropagation.
@learner8550
@learner8550 1 year ago
What is the formula of the linear activation function for use here?
@ahmarhussain8720
@ahmarhussain8720 3 years ago
Awesome explanation
@ThinkXAcademy
@ThinkXAcademy 3 years ago
Thanks✨
@TheIntervurt
@TheIntervurt 3 years ago
The formula for Δw is α(target - output)·xi; in this video your formula is α·t·xi?
@jeetbhatt5986
@jeetbhatt5986 2 years ago
Same question!!
@rohan8758
@rohan8758 2 years ago
For the 4th input of the 1st iteration I get w1=2, w2=2 and b(new)=-2. Please check it once and correct me if I am wrong. Thank you.
@ThinkXAcademy
@ThinkXAcademy 2 years ago
Yes, correct.
@rajershimeesala2368
@rajershimeesala2368 3 years ago
Thank you bro, the condition you used for f(y_in) in this problem is McCulloch-Pitts.
@ThinkXAcademy
@ThinkXAcademy 3 years ago
Welcome🤗
@pranavarya2164
@pranavarya2164 3 months ago
Why did you not use a threshold?
@kingofshorekishore
@kingofshorekishore 3 years ago
Nicely explained.
@ThinkXAcademy
@ThinkXAcademy 3 years ago
Thank you😃
@balidinesh5687
@balidinesh5687 2 years ago
In the second iteration, for input 1 the target is -1, but I guess it should be 1. Will our perceptron still perform correctly?
@ThinkXAcademy
@ThinkXAcademy 2 years ago
Try it that way; if you get final values satisfying the AND condition then it will be correct👍🏻
@MuhammadIrfan-nf9pb
@MuhammadIrfan-nf9pb 3 years ago
Hi bro, I think for the 4th (last) input of the first iteration the values of Δw1, Δw2, Δb, w1, w2 and b are not correct. My values are 1, 1, -1, 2, 2 and -2 respectively. Please guide me.
@ThinkXAcademy
@ThinkXAcademy 3 years ago
You might have made some mistake in the calculations, because it is correct.
@maheshh4414
@maheshh4414 1 year ago
Hi sir, for the inputs 1,1 we should get 1, right? In the first iteration, if we update the weights earlier then we will get y=1.
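To help readers check their own iteration tables against the comments above, here is a small, self-contained sketch of perceptron training on the bipolar AND data. It assumes α = 1, zero initial weights and bias, the three-valued activation from the pinned correction, and an update only when the prediction differs from the target, so it is a reference under those assumptions rather than the video's exact worked numbers.

```python
# Bipolar AND: inputs in {-1, +1}, target is +1 only for (+1, +1).
data = [((1, 1), 1), ((1, -1), -1), ((-1, 1), -1), ((-1, -1), -1)]

def activation(y_in):
    if y_in > 0:
        return 1
    elif y_in == 0:
        return 0
    return -1

w1 = w2 = b = 0.0
alpha = 1.0  # assumed learning rate

for epoch in range(1, 11):
    errors = 0
    for (x1, x2), t in data:
        y_in = w1 * x1 + w2 * x2 + b
        y = activation(y_in)
        if y != t:               # update only on a wrong prediction
            w1 += alpha * t * x1
            w2 += alpha * t * x2
            b += alpha * t
            errors += 1
    print(f"epoch {epoch}: w1={w1}, w2={w2}, b={b}, errors={errors}")
    if errors == 0:              # all four rows classified correctly
        break
```

Under these assumptions the loop settles at w1 = 1, w2 = 1, b = -1; applying the update on every row regardless of the prediction instead yields 2, 2, -2 after the first pass, which may be where the differing values reported in this thread come from.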
@srimannarayanaiyengar8914
@srimannarayanaiyengar8914 3 years ago
Good explanation
@ThinkXAcademy
@ThinkXAcademy 3 years ago
Thank you sir😄
@anjalinair3006
@anjalinair3006 3 years ago
Thank you! It helped
@ThinkXAcademy
@ThinkXAcademy 3 years ago
Keep learning👨🏻‍🏫
@rohan8758
@rohan8758 2 years ago
@7:25, you didn't explain the value of the sigmoid function in the perceptron learning algorithm, not even y = f(y_in). How can you assume that we know it already?
@ThinkXAcademy
@ThinkXAcademy 2 years ago
Because this video is part of the Machine Learning course and you skipped this tutorial of the course: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-ysQun8VbUmM.html
@ishag9787
@ishag9787 2 years ago
Sir, what would the truth table for logical NOR look like for bipolar inputs? Is it done the same way as binary, just that we consider 0 to be -1?
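Since this question went unanswered, here is a hedged sketch of how the bipolar NOR data could be laid out under the same 0 → -1 convention (an assumption, not something shown in the video): NOR is 1 only when both binary inputs are 0, so the bipolar target is +1 only for (-1, -1).

```python
# Bipolar NOR truth table: replace binary 0 with -1 in both inputs and targets.
nor_data = [
    ((-1, -1),  1),   # binary (0, 0) -> 1
    ((-1,  1), -1),   # binary (0, 1) -> 0 -> -1
    (( 1, -1), -1),   # binary (1, 0) -> 0 -> -1
    (( 1,  1), -1),   # binary (1, 1) -> 0 -> -1
]
```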
@ArjunNarula1122
@ArjunNarula1122 3 years ago
Helped a lot, thanks!
@ThinkXAcademy
@ThinkXAcademy 3 years ago
Share my videos to help my channel grow😊💯
@sshahidmalik97
@sshahidmalik97 3 years ago
Sir, since you have used an activation function that classifies the output as either 0 or 1, how did you get -1 as the output (y)?
@ThinkXAcademy
@ThinkXAcademy 3 years ago
The output y will be passed to the activation function for classification.
@sshahidmalik97
@sshahidmalik97 3 years ago
Yep, that's what I said. After passing it to the activation function, the result should be either 0 or 1 according to the activation function you have taken, but you are getting -1 for some inputs, which shouldn't be possible. If your activation function classified the output as either 1 or -1 it would be fine, but for this activation function that is not the case.
@ThinkXAcademy
@ThinkXAcademy 3 years ago
It is correct; I have rechecked the solution in the book.
@anelemadonda6191
@anelemadonda6191 2 years ago
@@ThinkXAcademy I think Shahid and I have the same question: your solution might be correct, but the question here is *how* you got to the solution, *how* you ended up with a -1, *how*? Thanks for the video, btw.
@ThinkXAcademy
@ThinkXAcademy 2 years ago
The activation function for this video is defined as follows: if input > 0, then output is 1; if input = 0, then output is 0; if input < 0, then output is -1.
@karenngumimiiyortsuun4829
@karenngumimiiyortsuun4829 1 year ago
Thanks man 👍
@ThinkXAcademy
@ThinkXAcademy 1 year ago
Thanks😄 Share it with other students to help others too💫
@palvaisaiprasanna4553
@palvaisaiprasanna4553 3 years ago
How do you compute iteration 2, and what are the conditions?
@ThinkXAcademy
@ThinkXAcademy 3 years ago
Compute it the same way as we did in iteration 1, but take the weight and bias values from the first iteration.
@radhasingh3549
@radhasingh3549 1 year ago
Will the value of alpha be the same for both iterations?
@w.n.n.amadubashitha6751
@w.n.n.amadubashitha6751 3 months ago
Yes
@smritidhurandhar232
@smritidhurandhar232 3 years ago
Is it compulsory to use the formula to update the weights and bias, or can we choose them on our own?
@ThinkXAcademy
@ThinkXAcademy 3 years ago
Yes, the formula is needed to update the weights and bias.
@smritidhurandhar232
@smritidhurandhar232 3 years ago
@@ThinkXAcademy But in some videos and sites I have seen that they solved it using the formula [ w_i(new) = w_i(old) + α·t·x_i ]
@smritidhurandhar232
@smritidhurandhar232 3 years ago
@@ThinkXAcademy Is using this formula [ w_i(new) = w_i(old) + α·t·x_i ] compulsory or not?
@ThinkXAcademy
@ThinkXAcademy 3 years ago
It is important to use this formula because the weights need to be readjusted after each iteration.
@sasidharreddykatikam328
@sasidharreddykatikam328 1 year ago
What is the terminology for alpha and t in this perceptron model?
@abhishekdubey9920
@abhishekdubey9920 8 months ago
How do you write on the screen? It's amazing, please tell.
@RyeCA
@RyeCA 2 years ago
How do you construct the line using the weights and bias?
@ThinkXAcademy
@ThinkXAcademy 2 years ago
Using the formula y = w·x + b
@RyeCA
@RyeCA 2 years ago
@@ThinkXAcademy Perfect, thanks!
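For anyone who wants to see that construction concretely, here is a small illustrative sketch (assumed example weights, not the video's code) that rearranges w1·x1 + w2·x2 + b = 0 into the boundary line x2 = -(w1·x1 + b) / w2:

```python
# Decision boundary of a trained perceptron: w1*x1 + w2*x2 + b = 0.
# Solving for x2 gives x2 = -(w1*x1 + b) / w2, assuming w2 != 0.
w1, w2, b = 1.0, 1.0, -1.0   # example weights; substitute your own trained values

def boundary_x2(x1):
    return -(w1 * x1 + b) / w2

for x1 in (-1.0, 0.0, 1.0):
    print(f"x1={x1:+.1f} -> boundary at x2={boundary_x2(x1):+.1f}")
```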
@shanmugamkavin8795
@shanmugamkavin8795 8 months ago
How do you interpret Δw?
@abhishekdubey9920
@abhishekdubey9920 8 months ago
Bro, it's the change in weight that we apply to get the targeted output and reduce error during training.
@Vipul__panwar
@Vipul__panwar 6 months ago
Why are you making it so complex 😵‍💫
@achaaisakyapodcast156
@achaaisakyapodcast156 2 years ago
Bro, speak a bit faster.
@ThinkXAcademy
@ThinkXAcademy 2 years ago
Try watching it at 1.5x playback speed.
@achaaisakyapodcast156
@achaaisakyapodcast156 2 years ago
That's what I did, bro; just letting you know for the future.
@ThinkXAcademy
@ThinkXAcademy 2 years ago
Yes, I have increased the pace in my newer videos.
@ThinkXAcademy
@ThinkXAcademy 2 years ago
Btw, thanks for the feedback.
@mango-strawberry
@mango-strawberry 2 months ago
@@ThinkXAcademy Bro, I have also seen this update formula: w_new = w_old + alpha * t * xi, where t is either 1 or -1. Use 1 when the class should be above the line and -1 when the class should be below the line. Please reply bro if you see this, I have an exam today.