
Perceptron Loss Function | Hinge Loss | Binary Cross Entropy | Sigmoid Function 

CampusX
244K subscribers
83K views

Published: 4 Oct 2024

Comments: 117
@RamendraMishra-go9sc · 5 months ago
I'm blessed to be born in a day and age where people like you and the internet exist. Thank you so much.
@paragvachhani4643 · 2 years ago
First of all, this is an awesome, classic lecture. We might think it's just a one-hour video, so why doesn't sir upload 2-3 videos a day? But buddies, this is spoon-feeding: to make a one-hour video, sir puts in almost 10 to 12 hours so we can easily understand each and every concept. Thank you so much, sir, for creating this lecture series. And one humble request to every viewer: if this video improves your knowledge, then please like and share all the videos.
@messi0510 · 1 year ago
1:30 --> 10:35 Perceptron trick summary
21:45 --> 24:30 Perceptron loss function
37:35 --> 38:55 Geometric intuition of the perceptron's loss function
39:10 --> 44:25 Using gradient descent
55:15 --> 58:20 The perceptron is flexible
@krisnareddy2977 · 8 months ago
39:10 --> 44:25 It is actually stochastic gradient descent.
@priyamtiwari4805 · 2 years ago
Whenever you are free, please try to upload videos on consecutive days rather than with a one-day gap, because we can't wait that long 😁 to see such amazing content.
@AshwinSingh-pw7wv · 3 months ago
meat riding is crazy
@indiann871 · 2 months ago
Can't believe this quality content is available for free. Thank you so much sir.
@avishinde2929 · 2 years ago
Your way of teaching is beautiful, sir. I have understood much better how the loss function works. Thank you so much, sir.
@sam-mv6vj · 2 years ago
Sir, I completed your NLP series and yes, it was awesome. Now I've completed the deep learning videos, so from here on I'll be up to date. Thank you so much for this series.
@near_. · 1 year ago
Hey, I have a query: is this sufficient for industry purposes? Please tell us whatever extra is needed ☺️😊
@akashprabhakar6353 · 2 months ago
Awesome video. I never thought the perceptron was such a big topic. Thanks for the in-depth explanations.
@prasantkumardash5744 · 2 years ago
There is a mistake in the weight update rule (originally it was correct). Instead of w1 = w1 + η*(∂L/∂w1) it should be w1 = w1 - η*(∂L/∂w1), and similarly for w2 and b. In the program the formula is correct, because the code uses the actual gradient value (-y*x), so the sign becomes positive there. But the mathematical formula should be corrected to - instead of +. Please look into it. Thank you.
@tr-GoodVibes · 1 year ago
Thanks for this comment. I was thinking the same thing. Now I am sure.
@pratik6649 · 1 year ago
Right brother
@soumyajitpal8273 · 11 months ago
Thank you for echoing my thoughts.
@gamersgame43 · 10 months ago
This is where I got confused and then reassured.
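To reconcile the two sides of this thread with a short derivation (standard perceptron-loss algebra, not taken from the video): for a misclassified point, the loss, its gradient, and the resulting update are

$$L_i = -\,y_i\,(w \cdot x_i + b), \qquad \frac{\partial L_i}{\partial w} = -\,y_i\,x_i, \qquad w \leftarrow w - \eta\,\frac{\partial L_i}{\partial w} = w + \eta\,y_i\,x_i$$

The minus sign belongs in the written rule; the plus sign appears once the gradient's own minus is substituted in, which is why the code and the corrected math can both be right.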
@juhildungrani3594 · 6 months ago
To date, the best playlist on YouTube for deep learning.
@CodeDynamo · 3 months ago
One of the best teachers I've ever seen. ❤ His way of teaching, wow. He just makes everything easy to understand and apply.
@InfoTunnel · 1 year ago
You are a professional, and I advise you to continue making videos.
@ashishprasadverma9428 · 2 years ago
Thank you sir, wonderful content, easy to understand.
@paragbharadia2895 · 1 month ago
Finally I am able to watch a better version of a web series, for as long as I want!
@155_manishkumar2 · 28 days ago
Dude, this lecture is hard, but your explanation and the 100 Days of Machine Learning series helped me understand it.
@hitanshramtani9076 · 12 days ago
Hi, can you tell me the difference between logistic regression and the perceptron?
@155_manishkumar2 · 12 days ago
@hitanshramtani9076 If the perceptron uses sigmoid as the activation function and binary cross-entropy as the loss function, then it's the same; otherwise it's different.
@hitanshramtani9076 · 12 days ago
@155_manishkumar2 Yeah dude, thanks. He explains it at the end; I had this doubt from the previous video, and now it's cleared.
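A minimal sketch of the comparison made in this thread (parameter values and names are illustrative, not from the video): both models compute the same linear score z = w·x + b and differ only in the activation applied to it and the loss trained against.

import numpy as np

def step(z):
    return np.where(z >= 0, 1, 0)      # perceptron: hard threshold

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))    # logistic regression: smooth probability

w, b = np.array([0.4, -0.2]), 0.1      # hypothetical trained parameters
x = np.array([1.0, 2.0])
z = np.dot(w, x) + b                   # identical linear score in both models
print(step(z))                         # perceptron output: 0 or 1
print(sigmoid(z))                      # logistic output: probability in (0, 1)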
@saptarshisanyal6738 · 29 days ago
For those who think the weight/bias formula is wrong: the typical weight update rule for gradient descent involves subtracting the gradient of the loss function with respect to the weight. However, in the perceptron algorithm the weight update isn't derived from the gradient of a loss function as in other learning algorithms (e.g., logistic regression). Instead, it's a direct rule based on misclassified points. The update happens only when a misclassification occurs (z * y[i] < 0).
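A compact sketch of that direct rule (illustrative names, assuming labels in {-1, +1}; not the video's exact code):

import numpy as np

def perceptron_fit(X, y, lr=0.1, epochs=1000):
    w = np.zeros(X.shape[1])       # weights
    b = 0.0                        # bias
    for _ in range(epochs):
        for i in range(X.shape[0]):
            z = np.dot(w, X[i]) + b
            if z * y[i] < 0:       # update only on a misclassified point
                w = w + lr * y[i] * X[i]
                b = b + lr * y[i]
    return w, b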
@barunbodhak9841 · 2 months ago
Srikanth Varma sir from the Applied AI course had the same flair for teaching. I always wanted to revise the concepts but wasn't able to, since that course didn't allow downloads. Now I don't have to search for it. Thanks again, brother, for a beautiful way of teaching.
@ajaykuchhadiya5812 · 1 year ago
Thank you so much, this playlist is really helpful. Please complete the playlist 🙏
@Lakshya-f4l · 4 months ago
Great teacher, great explanation, great content.
@kindaeasy9797 · 3 months ago
In log loss, both parts carry a negative sign, and there is a summation over the samples. Thanks for this awesome lecture.
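For reference, the standard binary cross-entropy (log loss) over N samples, with both terms under one leading minus sign:

$$L = -\frac{1}{N} \sum_{i=1}^{N} \left[\, y_i \log \hat{y}_i + (1 - y_i) \log (1 - \hat{y}_i) \,\right]$$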
@abhisheksaurav · 4 months ago
31:30 Sir, I was studying at 1 a.m. with headphones on; my heart jumped into my mouth at the sound of the gate opening.
@campusx-official · 4 months ago
Haha
@AkashRusiya · 2 months ago
@Abhisheksaurav - Same thing happened to me today. The timing, the headphones, literally everything. I was thinking of commenting this but didn't. Saw your comment and it made me chuckle. XD
@shaileshpendam-m5y · 2 months ago
Wait, did a door just open over here...? I was confused; all my doors are closed, so where did that sound come from... haha
@ShubhamSharma-gs9pt · 2 years ago
Thanks for the playlist, sir!!!
@vinaynaik953 · 2 years ago
You are great, paaji (brother)!
@pratikmahadik3675 · 2 years ago
Great session sir!!!
@parthbansal7005 · 2 years ago
very well explained content
@ParthivShah · 5 months ago
Thank You Sir.
@ManasNandMohan · 4 months ago
Explains the concepts with pure clarity.
@sachinsingh1163 · 7 months ago
Very nice and amazing explanation bhaiya
@jayantsharma2267 · 2 years ago
great content sir
@rb4754 · 4 months ago
very well explained
@namanmodi7536 · 2 years ago
best video
@PavanTripathi-rj7bd · 4 months ago
Thanks a lot for the lecture!
@illusions8101 · 1 year ago
Thank you so much sir ji ☺️
@Aman-o9v · 1 month ago
An easy way to think of the loss function 😮: imagine you're learning to play basketball, and your goal is to make as many baskets as possible. Each time you shoot the ball, you either make the basket (success) or miss it (failure). Now think of the loss function as a way to measure how far you are from your goal.
@sujithsaikalakonda4863 · 1 year ago
Great content sir.
@md_shagaf_raiyan_rashid · 8 months ago
Sir, can't this be done using TensorFlow, PyTorch, or Keras, sir?
@jayantsharma2267 · 2 years ago
Sir, like StatQuest, whenever you reference videos that are in the 100 Days of ML playlist, please also add links to those older videos so they're easy to navigate to.
@saman__fatima · 7 months ago
thanks for this effort
@StartGenAI · 2 years ago
Nice video 🙏
@NickMaverick4 · 2 months ago
AMAZING ❤❤❤
@alastormoody1282 · 6 months ago
Watching the whole ad.
@KumR · 7 months ago
This was Deep
@elonmusk4267 · 3 months ago
ultra legend
@rutujadindore7043 · 1 year ago
Great concept
@ARYANTIWARI-y5t · 2 months ago
00:03 The video discusses the perceptron loss function and the problems with the perceptron trick.
02:44 The perceptron uses a simple algorithm for binary classification.
07:48 The perceptron trick may not always result in a straight line.
10:20 Understanding the concept of a loss function in machine learning.
16:05 The perceptron loss function and its drawbacks.
18:36 The perceptron loss function uses raw data for classification.
23:52 Explanation of the perceptron loss function and hinge loss.
26:30 Minimizing the loss function using gradient descent.
31:25 Explaining the geometric intuition of the loss function.
33:44 Understanding the impact of different points on the loss function.
38:34 Understanding the optimization algorithm for finding minimum values.
40:54 Learning update rules for parameters using derivatives.
46:07 Understanding the mathematical model of the perceptron.
48:57 Logistic regression and the perceptron are the same with different activation functions.
54:50 Understanding different activation functions and loss functions.
57:40 The perceptron is flexible for regression and classification.
@anuragrajora9993 · 2 months ago
Please tell me what bias is; I cannot understand it. Please make a video on bias and weights.
@umarzafar7759 · 4 months ago
Really helpful
@nakulmali1413 · 2 years ago
Hi sir, first of all, thanks for your videos and your effort. Sir, I request that whatever you write on screen, please upload it in PDF form so that students get your teaching notes for study.
@sandipansarkar9211 · 2 years ago
finished watching
@zkhan2023 · 2 years ago
Thanks sir
@himanshuchoudhary8424 · 2 years ago
1. Why do we need to add a bias 'b' to the line equation? 2. Also, are we using the sklearn loss function instead of the loss function that calculates distance because we don't want the contribution of correctly predicted points in the loss?
@mdaalishanraza3928 · 2 years ago
I'm guessing we need b for a better transformation (moving the classification line).
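A quick way to see why the bias is needed (standard geometry, not from the video): without b, the boundary $w_1 x_1 + w_2 x_2 = 0$ must pass through the origin; the bias shifts the line away from it, by the usual point-to-line distance:

$$w_1 x_1 + w_2 x_2 + b = 0 \quad\Rightarrow\quad \text{distance from the origin} = \frac{|b|}{\sqrt{w_1^2 + w_2^2}}$$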
@shuklaparth2619 · 2 years ago
Sir telegram group please
@ganeshreddykomitireddy5128 · 9 months ago
All these loss functions are explained in detail in the 100 Days of Machine Learning series.
@ashuroy1533 · 2 years ago
First❤️
@mithkeshorts · 2 months ago
That door sound at around 31:31... brother, it reminded me of Manjulika.
@cse-25-aniketguchhait35 · 1 year ago
The perceptron code is wrong, bhaiya: the output comes out as 1 and 0, but it should be 1 and -1. In the code you wrote, the if condition inside the for loop will never fire for y(i) = 0, so first convert all 0s to -1.
@The_Soul2 · 7 months ago
Only one correction: y in the code should take only the values -1 and 1.
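A one-line fix in the spirit of these comments (assuming y is a NumPy array of 0/1 labels; the names are illustrative):

import numpy as np

y = np.array([0, 1, 1, 0])
y = np.where(y == 0, -1, 1)    # map {0, 1} labels to {-1, +1} before training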
@Shivam_kgp1 · 11 months ago
At 44:47, I think there is a mistake, because you changed the sign, and that is not right, sir. One more thing: in the code you used the + sign because of the derivative's own sign, not because of the gradient descent update rule. Please correct me if I'm wrong.
@SS-yb1qd · 2 years ago
Wow
@reevasharma9714 · 1 year ago
awesome..bestttt...
@piyushpathak7311 · 2 years ago
Sir, what is sparse categorical cross-entropy? Please discuss it and when to use it.
@MAPS-1297 · 2 years ago
Sparse categorical cross-entropy is used when you have a multiclass target variable with integer labels. Categorical cross-entropy is also used for a multiclass target, but you first need to one-hot encode the target variable (e.g., with get_dummies); then you can use categorical cross-entropy.
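A minimal Keras-style sketch of the difference (the model itself is omitted; only the label format and the loss name change):

import numpy as np
import tensorflow as tf

y_int = np.array([0, 2, 1])                         # integer class labels
y_onehot = tf.keras.utils.to_categorical(y_int, 3)  # one-hot encoded labels

# integer labels -> model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
# one-hot labels -> model.compile(optimizer='adam', loss='categorical_crossentropy')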
@barunkaushik7015 · 2 years ago
Brilliant
@AlayaKiDuniya · 1 year ago
Nitish sir, right after image preprocessing, can we split the images into train and test sets, and after that apply augmentations through a data generator? JazakAllah. Respect from Pakistan.
@AjitKumarMCS · 1 year ago
Sir! Please make one video on neural architecture search.
@amitattafe · 2 years ago
Dear sir, I ran the code with the learning rate at 0.01 and the value of b got stuck at 1.23, while if the learning rate is increased tenfold to your default (0.1), the value of b = 1.3000. What might be the reason that at a lower learning rate the solution does not converge, while at a higher learning rate we get faster convergence?
@SaifAbbas-c9p · 6 months ago
nice
@kuldeepsingh1121 · 1 year ago
Hello friends, please explain the code below:
x_input = np.linspace(-3, 3, 100)
y_input = m*x_input + c
What is the purpose of x_input and y_input?
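A guess at the context (m and c here are hypothetical stand-ins for the learned slope and intercept): x_input holds 100 evenly spaced x-values, and y_input holds the corresponding point on the line y = m*x + c for each of them, so matplotlib can draw the decision boundary as a smooth line.

import numpy as np
import matplotlib.pyplot as plt

m, c = 0.5, 1.0                          # hypothetical slope and intercept from training
x_input = np.linspace(-3, 3, 100)        # 100 x-values spanning the plot range
y_input = m * x_input + c                # matching y-values on the boundary line
plt.plot(x_input, y_input, color='red')  # draw the decision boundary
plt.show()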
@himanshuchoudhary8424 · 2 years ago
What factors determine the number of epochs?
@IndustrialAI · 11 months ago
Requesting you to share your handwritten notes
@faheemfbr9156 · 1 year ago
Great video sir.. Hats off to you🥰❣
@farhadkhan3893 · 1 year ago
Thank you
@tusarmundhra5560 · 10 months ago
awesome
@learningcore289 · 1 year ago
Please complete this series
@md_shagaf_raiyan_rashid · 8 months ago
Sir, is it necessary to watch these videos? I can't understand them, sir. I'm in 9th class, 14 years old. I like math, sir, but I can't follow this subject yet, and I want to study deep learning, sir.
@Shisuiii69 · 7 months ago
No brother, maths is compulsory. Watch a linear algebra playlist on YouTube and you'll understand everything; also differential equations and calculus.
@sachinkumar-wz1pj · 10 months ago
Great
@blindprogrammer · 2 years ago
Thanks!
@arslanahmed1311 · 1 year ago
We can find the best-fitting line using the perceptron loss function; then, if we put that line's equation into a sigmoid, it becomes logistic regression. That way we don't have to use the log loss function. Is this a correct approach????
@Aman_kumar0 · 1 year ago
When we replace the step activation function with a sigmoid, it essentially becomes logistic regression, and for that we use binary cross-entropy, which is the same as the log loss function.
@RaviSingh-bl6ls · 8 months ago
Sir, can I get these notes, please? That would be appreciated.
@ahmadtalhaansari4456 · 1 year ago
Revising my concepts. July 30, 2023😅
@darshanayenkar · 2 years ago
Is the perceptron similar to logistic regression?
@amitattafe · 2 years ago
This code behaves quite interestingly: when I vary the learning rate between 0.1 and 0.01 (say 0.05 or 0.07) and increase the epochs, the solution gets stuck somewhere between 1000 and 5000 epochs. I do get the right solution at 0.05, with a corresponding value of b = 1.25, a clear difference, but at 0.01 I still don't get the answer. What is the possible reason for this behaviour?
@proveerghosh9497 · 2 years ago
Can you also provide your handwritten notes?
@Rashidiiitd · 11 months ago
Brother, put in some effort yourself too 😂😂
@ManasNandMohan · 4 months ago
CFBR
@znyd. · 5 months ago
💜
@rutujadindore7043 · 1 year ago
Wowowoowwoowow
@nabinadhikari5426 · 1 year ago
😍
@thefalcon1237 · 1 year ago
Can I write y * f(xi) as y * y_hat?
@ajitkumarpatel2048 · 1 year ago
🙏
@thenishantsapkota · 1 year ago
31:32 I got shit scared lol
@InfoTunnel · 1 year ago
Pro
@KumR · 10 months ago
30
@rutujadindore7043 · 1 year ago
I am shaktiman ...hello all
@SouvikGhosh-re4nv · 9 months ago
Hey, I think it would be W = W - (eta)*(∂L/∂W), not "+": ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-rIVLE3condE.html (referring to Andrew Ng)
@lokeshsharma4177 · 6 months ago
❤❤❤❤❤❤❤❤❤❤❤❤❤❤❤❤❤❤❤❤❤❤❤❤❤❤
@d-56mukulshitole67 · 5 months ago
💥💥💥💥💥🗿🗿🗿🗿🗿
@rutujadindore7043 · 1 year ago
Let's have lunch.
@yashjain6372 · 1 year ago
Thanks sir