
Perceptron 

ritvikmath
166K subscribers
284K views

Intro to the perceptron algorithm in machine learning
My Patreon : www.patreon.co...

Published: 30 Sep 2024

Comments: 311
@abhivaryakumar3107 5 months ago
Man's explaining with 3 markers and a chopstick and it's clearer than any professor could explain with slides and computers and simulations. You're an amazing teacher, thank you so much.
@shakirasunshinez 4 years ago
i wish my professor could explain this. idk what she does wasting hours of our time every Tuesday. i leave knowing less than this 13 minute video
@Randomkitten7 4 years ago
EXACTLY
@Canonall 1 month ago
You are either simply lying or you are massively overestimating what you are able to do after watching this video. In university you learn these concepts in a formal, rigorous way. You learn the proofs, definitions, etc. Pretending this is in any way a substitute for any university lesson is just plain dumb. It's a complement, for sure.
@AstonishingStudios 4 years ago
Quit reading the comments, boys, because this video will be all you need if you just focus on it. Great explanation!
@saisrisai9649 3 years ago
I m a grl
@gouthamks1274 3 years ago
@@saisrisai9649 😂 nice
@wyattaxton4085 3 years ago
i dont mean to be off topic but does any of you know of a trick to get back into an Instagram account..? I was stupid lost my login password. I love any tricks you can give me.
@stephenkelvin1220 3 years ago
@Wyatt Axton instablaster =)
@wyattaxton4085 3 years ago
@Stephen Kelvin i really appreciate your reply. I got to the site through google and Im trying it out now. Seems to take a while so I will reply here later when my account password hopefully is recovered.
@alapparate8768 4 years ago
the only perceptron video I needed, btw I have exams tomorrow
@alimensah1153 4 years ago
The same here 😂😂
@skyboat345 4 years ago
Me too lmao
@alapparate8768 4 years ago
@@skyboat345 all the best 😊
@skyboat345 4 years ago
@@alapparate8768 Haha thanks man!
@Kosmaty205 4 years ago
And how it went? :D
@Petershd138 3 months ago
I confess, I didn't understand a single word of my MIT professor's explanations about the perceptron. However, after I saw this video I understood clearly what the idea is. Thanks.
@sesmuel2593 3 months ago
How are you at MIT? I cannot even find a way to sign up on the website.
@jestivalv 3 years ago
Incredible how an explanation with a piece of paper can be more effective than any other digital methods. Thanks
@ritvikmath 3 years ago
Glad it was helpful!
@gerben880 5 years ago
yo this η is an eta, not a nu lol
@ritvikmath 5 years ago
Oops haha
@mikepeyton6720 4 years ago
Okay Sig Nu pledge
@shelleyzhang5910 2 years ago
You're really a talented teacher! I love the way you walk us through such abstract concepts. I was struggling with my HW for a whole day until I saw your videos. Keep producing more brilliant content! Subscribed!
@AlexLaird7 4 years ago
Amazingly clear and concise explanation! Thank you so much, I'm trying to program a perceptron neural network in Go and this really helps
@ticoticox3000 4 years ago
Isn't it gradient descent?
@fnyaung 3 years ago
wow great video! Way easier than how my professor explains it! My prof uses terminologies, which he doesn't explain, and it gets confusing at times.
@ritvikmath 3 years ago
Glad it helped!
@mayankdubey9027 2 years ago
I’ve been trying to understand perceptron for a couple days now. Nothing really clicked until this video. 10/10 explanation, you are better than majority of the professors and teachers out there.
@Amar-hl6pu 1 year ago
Same! Been trying for 2 days and most videos don't provide a detailed mathematical explanation and just throw the formulas at you. This is by far the best explanation of a Perceptron on YouTube.
@aryamananand100 7 months ago
Hi, amazing video! Quick question: is the learning rate represented by the Greek letter "eta" or "nu"? Because your drawing resembles "eta", and you wrote "nu" beside it. Thanks
@sawubonalanguagelearning 5 months ago
i think he meant to say 'eta' because the learning rate is either represented by eta η or by alpha α. Nu (ν) is a different letter altogether
@hbeing3 3 years ago
thank you! Well explained. A small question about the symbol of the learning rate: is it called 'eta'? 'Nu' is the v-shaped one?
@ritvikmath 3 years ago
yup you're right, I clearly need to brush up on my Greek alphabet
@andrempunkt 6 months ago
0*1 + 1*x1 + 5*x2 > 0
1*x1 + 5*x2 > 0
5*x2 > -1*x1
x2 > -(1*x1)/5
x2 > -0.2*x1
Is that correct or did I make a mistake?
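The rearrangement above can be checked numerically. A minimal sketch (assuming the weight vector from the video is w = (0, 1, 0.5); with 0.5 rather than 5, the final line works out to x2 > -2*x1 instead of -0.2*x1):

```python
import random

# Weights from the video: w = (w0, w1, w2) = (0, 1, 0.5).
# The decimal point is hard to see on camera, so 0.5 is often misread as 5.
w0, w1, w2 = 0.0, 1.0, 0.5

random.seed(0)
for _ in range(1000):
    x1 = random.uniform(-10.0, 10.0)
    x2 = random.uniform(-10.0, 10.0)
    raw = w0 * 1 + w1 * x1 + w2 * x2 > 0  # decision rule: w . x > 0
    rearranged = x2 > -2 * x1             # same rule after dividing by 0.5
    assert raw == rearranged
print("w . x > 0 is equivalent to x2 > -2*x1")
```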
@carlhelin8744 1 year ago
random guy with a chopstick and marker better than most professors at my uni
@lastairbender_883 1 year ago
07:29 By far the best use of a disposable chopstick in a ML video. Well done!
@senzmaki 1 year ago
chopstick Meta
@smsm8686 3 years ago
such a clear explanation! you're a natural at teaching and machine learning. thanks so much
@ritvikmath 3 years ago
You're very welcome!
@skndshaum 4 years ago
How did you get x1 = 2 and x2 = -2?
@etion1999 3 years ago
8:35
@maxton550 4 years ago
Thank you man, my machine learning class is a mess.
@charlyn.1219 3 years ago
Thank you for the video! It is well explained. Some lecturers just have a passion for making easy things difficult.
@ritvikmath 3 years ago
You are welcome!
@toshal22 2 years ago
You assigned the omega values for the question, but how did you initially assign their values, and why did you assign them as 0, 1, .5 and not something else?
@tharteon1866 1 year ago
whats the answer to this?
@maska59 3 years ago
Great video, you pretty much have explained the whole perceptron problem in 15 minutes while I was trying to figure it out after taking 4 hours of uni classes.
@nicolaspazmino6477 5 years ago
Thank you for taking the time to make these videos and share your knowledge, your other videos deserve more views.
@galcesana8603 3 months ago
great video. thanks !
@nailashah6918 3 years ago
It would be better for beginners if you spoke slowly; always deliver your lecture while speaking slowly.
@The11HD 3 years ago
playback speed, 0.75 ;)
@Thelee4music 4 years ago
at 6:15, where was this equation X2 > -2X1 derived from?
@rackzz7084 4 years ago
it's clearly wrong and should be x2 > -1/5 x1
@satyapratik 4 years ago
X1 + 0.5(X2) > 0
0.5(X2) > -X1
X2 > -2(X1)
You should redo your elementary schooling.
@rackzz7084 4 years ago
@@satyapratik dude the 0.5 in the video looks like a 5
@gugankathiresan4721 4 years ago
@@rackzz7084 ikr
@buyanimhlongo2414 4 years ago
You're correct, his answer is wrong. After a dot product the answer is: X1 + 0.5X2.
@blauwecavia1583 4 years ago
So what if 2 points are incorrectly separated? Do I just pick one at random to update?
@mikaeltorp-holtedostanic2058 4 years ago
5:46 How is 1 + x1 + 5x2 > 0 simplified to x2 > -2x1 ?? They are two very different expressions.
@prakashd842 4 years ago
Hi Brother, it is not 5X2. That is 0.5, not 5; in the video it is not showing properly. Here is the calculation:
0*1 + 1*X1 + 0.5*X2 = 0
X1 + 0.5X2 = 0
0.5X2 = -X1
X2 = -(1/0.5)X1
X2 = -2X1
Since the region of interest is where the expression is greater than zero (and 0.5 is positive, so the direction is kept), X2 > -2X1.
@ritvikmath 4 years ago
Thank you for addressing this question! Indeed the decimal point is unfortunately a bit hidden in the video.
@mikaeltorp-holtedostanic2058 4 years ago
@@ritvikmath Oh, I see. Thanks, and thanks to Prakash D also for clarifying.
@CodeKiller-ll 2 months ago
one of the best ML videos
@blitz8229 1 year ago
Cool! Thank you! ❤
@TheDominomi 8 months ago
This video just solved one assignment I had from university. Good explanation, thanks
@SiriJustDoIt 3 years ago
You could have made it better by differentiating the symbol (sample) and the axis: the symbol could be a circle and a triangle rather than an x and a triangle, since the word x interferes with the words x1 and x2. Otherwise you consciously need to remind yourself that x and (x1, x2) are different.
@if8172 1 year ago
Need teachers like this guy teaching like we're all bloody five instead of my lecturer who decides to fill the first half of the powerpoint slides with biological neuronal networks ffs
@shankarc6321 23 days ago
This is amazing, I've been scratching my head with lecture videos and notes.. going nuts.. but this explanation is simply super... I have been looking for this to get the concept and fundamentals. Love it. Please keep posting more and more..
@weigthcut 2 months ago
Still wondering how you convert "0*1 + 1*X1 + 0.5*X2 > 0" to "X2 > -2X1". If I let ChatGPT draw the line it looks completely different.
@dianaayt 3 months ago
8:47 but what about when you have more than 1 mistake and some should be in the upper and some in the lower?
@mjt_00 10 months ago
REALLY GOOD VIDEO. Thank you so much, why can't our teachers at university explain like this.
@dragster100 10 months ago
Excellent video! But could somebody help me with my questions please?
1. At 5:32, how did you know the equation is bigger than zero, since x1 and x2 are unknown values?
2. If, let's say, we have 2 misclassified points instead of 1, shall we calculate two w primes? If so, how do we use the two different w primes to get the hyperplane?
@prateekyadav9811 3 months ago
Hey Ritvik, thanks for this video! I was wondering how will the parameters be updated if we have multiple points that are wrongly classified?
@ericsims3368 6 months ago
Super helpful explanation. Thank you! Is the updating the same as back propagation, or is that only with multiple layers?
@SuperMtheory 5 years ago
How is the initial w vector established? That is, why did you use w = ( 0, 1, 0.5) ?
@c0t556 5 years ago
It doesn’t matter. Can be a random vector. It will converge later.
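The reply above can be demonstrated with a minimal training loop (a sketch with made-up, linearly separable toy data; not the exact code from the video). The start vector is arbitrary, updates happen one misclassified point at a time, and the loop stops once every point is classified correctly:

```python
import random

def train_perceptron(points, labels, w, eta=1.0, max_epochs=100):
    """Classic perceptron rule: for each misclassified point x
    (label y in {-1, +1}), update w <- w + eta * y * x."""
    for _ in range(max_epochs):
        mistakes = 0
        for x, y in zip(points, labels):
            if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
                mistakes += 1
        if mistakes == 0:  # converged: every point classified correctly
            return w
    return w

# Toy linearly separable data, augmented with a leading 1 for the bias term.
pts = [(1, 2, 2), (1, 1, 3), (1, -2, -1), (1, -1, -3)]
ys = [1, 1, -1, -1]

random.seed(42)
w0 = [random.uniform(-1, 1) for _ in range(3)]  # arbitrary start, as the reply says
w = train_perceptron(pts, ys, w0)
assert all(y * sum(wi * xi for wi, xi in zip(w, x)) > 0 for x, y in zip(pts, ys))
```

For separable data the perceptron convergence theorem guarantees this loop halts regardless of the starting vector, which is why the choice of initial w doesn't matter.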
@sajulali 3 years ago
Wow...never had anybody explain things so clear...just loved and learnt a lot about perceptron!! Thank you!!
@prateekyadav9811 3 months ago
So help me understand this, anyone. Let's suppose I want the perceptron to classify spam and not-spam mail. Then, by a point (on the plot), do we mean a single email? Also, does the update of parameters happen after each mail is classified or after the entire training set of emails is classified? From the video, I get the impression that the update happens only after all emails are classified. How do we update params if there are multiple wrongly classified points?
@dystopiaproductions9869 2 years ago
Took me a while to understand which greek letter "nu" is. It's the greek letter "eta". You may see in some programs that the learning rate is called eta instead.
@misnik1986 10 months ago
Thank you very much for the insightful video, very well explained, just a nitpicking remark, the greek letter you showed at the beginning is eta and not nu. nu looks like a v
@crabjuice47 3 years ago
Maybe not choose one of the symbols on the graph as a cross so it doesn't further confuse someone with all the x's. And x1 and x2 are harder to distinguish than x and y.
@yyaa2539 2 years ago
Thanks for the video. Two remarks:
1. It is not clear if the algorithm converges to a solution.
2. Mistake at 10:15-10:34: the new line found by the algorithm does not pass through the origin...
@Acampandoconfrikis 3 years ago
thanks brah
@saulesha123 3 years ago
1 million thanks
@supriyamanna715 2 years ago
silly qn here, why you chose >0 as X and
@trungchotim1559 3 years ago
Thank you for such a clear video. I'm a bit confused and hope you might explain it. Does the final hyperplane go through the origin (0,0)? If it does not, how could we make it go through the origin? Thank you.
@ritvikmath 3 years ago
the final hyperplane will pass through the origin if the first coefficient (w_0) is equal to 0.
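The reply can be checked directly: in the bias-augmented form w0*1 + w1*x1 + w2*x2 = 0, plugging in the origin leaves just w0 = 0. A tiny sketch (with illustrative weights, not values from the video):

```python
def on_boundary(w, x1, x2):
    """True if (x1, x2) lies on the hyperplane w0 + w1*x1 + w2*x2 = 0."""
    w0, w1, w2 = w
    return abs(w0 + w1 * x1 + w2 * x2) < 1e-12

assert on_boundary((0.0, 1.0, 0.5), 0.0, 0.0)      # w0 = 0: passes through the origin
assert not on_boundary((1.0, 1.0, 0.5), 0.0, 0.0)  # w0 != 0: misses the origin
```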
@TLabsLLC-AI-Development 1 year ago
"Linearly Separable" is the most pretentious nerdy terminology I've heard in years.
@carlmachaalany4328 11 months ago
something about the chopstick makes me understand your explanation better
@AzAzMusic 4 months ago
bro it's crazy how I've been learning this in class for the last 2 weeks and I didn't understand it until now
@diamondcutterandf598 1 year ago
8:49 how do you define upper and lower? If the line is mirrored, is the upper region still the 'right' side of the line, or the 'left' side?
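One way to read the question above: "upper" and "lower" are not visual directions at all but the sign of w · x, so the labels travel with the weight vector rather than with the picture. A small sketch (illustrative weights, not from the video):

```python
def side(w, x):
    """Return +1 if x falls on the w . x > 0 side of the hyperplane,
    -1 otherwise. Negating w mirrors the orientation, flipping the sign."""
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s > 0 else -1

w = (0.0, 1.0, 0.5)
x = (1.0, 1.0, 1.0)          # bias-augmented point (1, x1, x2)
neg_w = tuple(-wi for wi in w)

assert side(w, x) == 1       # "upper" for this orientation of w
assert side(neg_w, x) == -1  # same point, mirrored w: now "lower"
```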
@mango-strawberry 6 months ago
i wanted to ask one question. isn't d = target value - expected value?
@AndrewT 3 months ago
Is the change of coordinates from cartesian to polar related to the so called 'kernel trick'?
@vishnuvardhan8495 2 years ago
How would we know if the data is linearly separable or not before we use this algorithm? And also, how would we know if we are supposed to use polar coordinates? We should know beforehand, then, that the data can be separated through a circle. not a good video though
@ronald3249 1 year ago
The learning rate is NOT a parameter. It is a hyperparameter. Please be careful in the terminology.
@Rajivrocks-Ltd. 1 year ago
So how would you know to use a perceptron if you were working with a high dimensionality? Not like you can plot it all.
@shacharh5470 2 years ago
Would polar coordinates work if the circle isn't centered around (0, 0)?
@H1K8T95 2 years ago
Ritvik, I saw the thumbnail and title and thought I was heading into a Bill Wurtz video 😂
@atouloupas 1 year ago
Great video! Small correction, η (the learning rate) is the letter "eta", not "nu"
@alitahsili 2 years ago
What if we had two or more misclassified points? How would the w(i) be updated?
@adama7654 6 months ago
Very good video. The learning rate is the greek letter "eta", the letter "nu" looks more like a v.
@rakshashet7813 3 months ago
Really great explanation! Loved it!😍
@13ciaran13000 1 year ago
..........That "ν" you wrote at 2:30 looks very "η"-like 🤨
@zakozakaria 8 months ago
converting data to polar coordinates is just beautiful
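The idea in this comment is easy to sketch (a toy example assuming, as in the video, a circle centered at the origin): mapping each point to (r, θ) turns the circular boundary r = r0 into a straight line, so a perceptron can separate the classes in the new coordinates.

```python
import math

def to_polar(x1, x2):
    """Map Cartesian (x1, x2) to polar (r, theta)."""
    return math.hypot(x1, x2), math.atan2(x2, x1)

# Toy data: class +1 inside the unit circle, class -1 outside it.
inside = [(0.3, 0.2), (-0.5, 0.1), (0.0, -0.7)]
outside = [(2.0, 0.5), (-1.5, 1.5), (0.2, -3.0)]

# After the transform, the boundary is simply the line r = 1 in
# (r, theta) space, which is linear in the new coordinates.
assert all(to_polar(*p)[0] < 1 for p in inside)
assert all(to_polar(*p)[0] > 1 for p in outside)
```

For a circle centered somewhere other than the origin, subtracting the center from each point before the transform restores this picture.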
@nang88 4 years ago
tha goat
@stevengusenius7333 3 years ago
Thanks, I haven't seen this one before. If there are multiple lines that will separate the classes, is the line selected by the perceptron optimal in any way, or does it simply stop when any viable solution is found? If there is overlap between the classes, will this still work? Assuming there is no one line that separates the classes with 100% accuracy, will iterating get you a "better" solution?
@dumb_as_rocks 4 years ago
most aesthetic handwriting i've ever seen
@TT__007 1 month ago
How did you get the x0, x1, x2 values at (09:32)?
@innerpeace41948 3 years ago
So is the perceptron in machines the same thing as neuroplasticity in humans?
@pravingaikwad1337 1 year ago
how do we select the value of d ("upper or lower") in higher dimensions
@krithikavenkatesan2123 4 years ago
Great video! I'd like to know how the addition of another element to the number of dimensions, gives us a constant number? How does just the no. of Dimensions refuse to provide a constant number?
@danialmoghaddam8698 2 years ago
What if the dataset is separable only by a curve, not a circle?
@rshinra 2 years ago
I think that's an Eta, not a nu, but that's a GREAT explanation!
@jordiwang 1 year ago
why would you call it weight zero if it's literally the bias
@FlubJumper 3 years ago
Very good video, thank you very much!
@seyedalirezaabbasi 3 years ago
That was great, thanks a lot.
@ritvikmath 3 years ago
thanks!
@khimleshgajuddhur6892 3 years ago
what if my data has a triangular shape? then what ML algorithm should I use? (ASCII sketch omitted: columns of x's filling a triangular region against the axes; the x's represent points on the graph)
@ΣτέφανοςΒκς 10 months ago
It helped me my friend, really appreciate that. :)
@joshuamark5907 2 years ago
Great video; the nu at the beginning though... that's eta not nu
@NoobehPvP 6 months ago
Great explanation but does bro not have a ruler lol
@vagg_real_4537 4 months ago
excellent video! thank you
@Cobyboss12345 1 year ago
you are so amazing you are reading 3 years into the future
@suvrotica 3 years ago
Just amazing. Please keep up the good work, it is this kind of clarity that makes people like topics like this.
@ian-haggerty 5 months ago
@lamtaolam2802 3 years ago
Thank you sir!
@ritvikmath 3 years ago
You are welcome!
@samerkeyrouz3160 5 months ago
Great video, straight to the point, thanks
@boardpassenger1483 2 months ago
??? Writing an eta and reading it nu???
@parthmalik1 11 months ago
can you please start uploading these pics
@abdullahharis1790 3 years ago
Thank you so much
@ritvikmath 3 years ago
You're most welcome
@r.bhargavram3546 3 years ago
Why do we need to draw the line to separate the different classes?
@girishtripathy275 3 years ago
Learning rate isn't a parameter for a perceptron.
@dennisgavrilenko 7 months ago
Not the hero we deserved, but the hero we needed
@Hamromerochannel 1 year ago
This is one of the best best best channel for learning ML
@ritvikmath 1 year ago
Thanks!
@gammafirion9162 3 years ago
Pretty sure learning rate variable is eta, not nu.
@vanajaparamesh4752 4 years ago
Very useful for my exams..... Thanks a lot sir
@mohankumargajendran528 4 years ago
its called "eta" not "nu" I believe
Up next
Perceptron | Neural Networks
8:47
76K views
The Most Important Algorithm in Machine Learning
40:08
442K views
But what is a convolution?
23:01
2.6M views
Machine Learning Fundamentals: Bias and Variance
6:36
Decision Tree Classification Clearly Explained!
10:33
672K views
Perceptrons: The Building Blocks of Neural Networks
27:02