First of all, an awesome, classic lecture. We may think it's just a 1-hour video and wonder why sir doesn't upload 2-3 videos a day... but buddies, this is a spoon-feeding video. To make one 1-hour video, sir gives almost 10 to 12 hours so we can easily understand each and every concept. Thank you so much, sir, for creating this lecture series. And one humble request to every viewer: if this video improved your knowledge from A to B, then please, please, please like and share all the videos... if you are generous.
1:30 --> 10:35 Perceptron trick summary
21:45 --> 24:30 Perceptron loss function
37:35 --> 38:55 Geometric intuition of the perceptron's loss function
39:10 --> 44:25 Using gradient descent
55:15 --> 58:20 Perceptron is flexible
Whenever you are free, please try to upload videos on consecutive days rather than with a 1-day gap, because we can't wait that long 😁 to see such amazing content.
Sir, I completed your NLP series; yes, it was awesome. Now I have completed the deep learning videos, so from now on I'll be up to date. Thank you so much for this series.
There is a mistake in the weight update rule. Originally it was correct. Instead of w1 = w1 + η·(∂L/∂w1), it should be w1 = w1 − η·(∂L/∂w1), and similarly for w2 and b. In the program the formula is correct because we use the actual gradient value (−y·x), so the sign becomes positive in the code. But in the mathematical formula it should be rectified to − instead of +. Please look into it. Thank you.
For those who think that the weight/bias formula is wrong: the typical weight update rule for gradient descent involves subtracting the gradient of the loss function with respect to the weight. However, in the perceptron algorithm, the weight update isn't derived from the gradient of a loss function as in other learning algorithms (e.g., logistic regression). Instead, it's a direct rule based on misclassified points: the update happens only when a misclassification occurs (z * y[i] < 0).
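A minimal sketch of that direct update rule, assuming NumPy arrays and labels in {-1, +1} (this is illustrative, not the lecture's exact code):

```python
import numpy as np

def perceptron_fit(X, y, lr=0.1, epochs=100):
    # X: (n_samples, n_features), y: labels in {-1, +1}
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = np.dot(w, xi) + b
            if z * yi <= 0:          # misclassified (ties treated as errors)
                w += lr * yi * xi    # push the boundary toward the point
                b += lr * yi         # note: added, not subtracted
    return w, b

# tiny usage example on linearly separable points
X = np.array([[2.0, 2.0], [1.5, 3.0], [-2.0, -1.0], [-1.0, -2.5]])
y = np.array([1, 1, -1, -1])
print(perceptron_fit(X, y))
```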
Srikanth Varma sir from the Applied AI Course had the same flair in teaching. I always wanted to revise the concepts but wasn't able to, since that course didn't allow downloads. Now I don't have to search for it. Thanks again, brother, for the beautiful way of teaching.
@Abhisheksaurav - Same thing happened to me today. The timing, the headphones, literally everything. I was thinking of commenting this but didn't. Saw your comment and it made me chuckle. XD
An easy way to understand the loss function 😮: imagine you're learning to play basketball, and your goal is to make as many baskets as possible. Each time you shoot the ball, you either make the basket (success) or miss it (failure). Now, think of the loss function as a way to measure how far you are from your goal.
Sir, like StatQuest, whenever you reference videos that are in the 100 Days of ML series, please also add links to those older videos so they are easy to navigate to.
00:03 The video discusses the perceptron loss function and the problems with the perceptron trick.
02:44 Perceptron uses a simple algorithm for binary classification.
07:48 The perceptron trick may not always result in a good straight line.
10:20 Understanding the concept of a loss function in machine learning.
16:05 The perceptron loss function and its drawbacks.
18:36 The perceptron loss function uses raw data for classification.
23:52 Explanation of the perceptron loss function and hinge loss.
26:30 Minimizing the loss function using gradient descent.
31:25 Explaining the geometric intuition of the loss function.
33:44 Understanding the impact of different points on the loss function.
38:34 Understanding the optimization algorithm for finding minimum values.
40:54 Learning update rules for parameters using derivatives.
46:07 Understanding the mathematical model of the perceptron.
48:57 Logistic regression and perceptron are the same with different activation functions.
54:50 Understanding different activation functions and loss functions.
57:40 The perceptron is flexible for regression and classification.
Hi sir, first of all, thanks for your videos and your effort. Sir, I have a request: whatever you write on screen, please upload it in PDF form so that students can get your teaching notes for study.
1. Why do we need to add a bias 'b' to the line equation? 2. Also, are we using this sklearn loss function over the loss function that calculates distance because we don't want the contribution of correctly predicted points in the loss?
Bhaiya, the perceptron code is wrong. The output comes out as 1 and 0, but it should have been 1 and -1. In the code you wrote, the if condition inside the for loop will never trigger for y(i) = 0... so first convert all the 0s to -1.
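A sketch of the fix this comment suggests (variable name `y` assumed): remap the 0/1 labels to -1/+1 before training so the misclassification check z * y < 0 works for both classes.

```python
import numpy as np

y = np.array([0, 1, 1, 0])       # labels as 0/1, as loaded from the data
y = np.where(y == 0, -1, 1)      # every 0 becomes -1, every 1 stays 1
print(y)                          # [-1  1  1 -1]
```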
At 44:47, I think you have made a mistake, because you changed the sign and that is not right, sir. And one more thing: in the code you used a + sign because of the sign of the derivative, not because of the gradient descent update rule. Please correct me if I'm wrong.
Sparse categorical crossentropy is used when you have a multiclass target variable. Categorical crossentropy is also used when you have a multiclass target variable, but the difference is that you first need to one-hot encode your target variable (e.g., with get_dummies); then you can use categorical crossentropy.
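A minimal sketch of that distinction (the model itself is assumed and left commented out): sparse_categorical_crossentropy takes integer class labels directly, while categorical_crossentropy needs one-hot targets.

```python
import numpy as np
from tensorflow.keras.utils import to_categorical

y_int = np.array([0, 2, 1])           # integer class labels (3 classes)
y_onehot = to_categorical(y_int, 3)   # one-hot encoding for categorical crossentropy

# model.compile(loss='sparse_categorical_crossentropy', ...)  # fit with y_int
# model.compile(loss='categorical_crossentropy', ...)         # fit with y_onehot
```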
Nitish sir, right after image preprocessing, can we split the images into train and test sets, and after that apply augmentations through the data generator? JazakAllah. Respect from Pakistan.
Dear sir, I ran the code with the learning rate at 0.01, and the value of b got stuck at 1.23, while if the learning rate is increased 10-fold to your default (0.1), the value of b = 1.3000. So what might be the reason that at the lower learning rate the solution does not converge, while at the higher learning rate we get faster convergence?
Sir, is it necessary to watch these videos? I'm not able to understand them, sir. I'm in 9th class, 14 years old, and I like math, sir, but right now I can't understand this subject, yet I want to learn deep learning, sir.
We can find the best-fitting line using the perceptron loss function, and then if we put that line's equation into a sigmoid, it becomes logistic regression. That way we wouldn't have to use the log loss function. Is this a correct approach????
When we replace the step activation function with a sigmoid, it becomes essentially logistic regression, and for that we use binary cross-entropy, which is the same as the log loss function.
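A rough sketch of that idea (names are illustrative, not the lecture's code): score sigmoid outputs with binary cross-entropy, i.e. log loss.

```python
import numpy as np

def sigmoid(z):
    # squashes w·x + b into a probability in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def log_loss(y_true, y_prob, eps=1e-12):
    # binary cross-entropy; clip to avoid log(0)
    y_prob = np.clip(y_prob, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_prob) + (1 - y_true) * np.log(1 - y_prob))

z = np.array([2.0, -1.0, 0.5])   # w·x + b for three points
y = np.array([1, 0, 1])          # true labels in {0, 1}
print(log_loss(y, sigmoid(z)))
```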
This code is quite interesting. When I vary the learning rate between 0.1 and 0.01 (like 0.05 or 0.07) and increase the epochs, the solution gets stuck somewhere around 1000 to 5000 epochs. Yet I get the right solution at 0.05, with a corresponding value of b = 1.25, a clear difference, but at 0.01 I still don't get the answer. What is the possible reason for all this behaviour??
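A hedged sketch of this experiment (synthetic data and names are assumed, not the lecture's exact setup): train the same perceptron at several learning rates and compare where b ends up. One relevant detail: since the bias update is lr * y with y = ±1, b can only ever move in steps of lr, so its final value naturally shifts with the learning rate.

```python
import numpy as np

def train(X, y, lr, epochs):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (np.dot(w, xi) + b) <= 0:  # misclassified point
                w += lr * yi * xi
                b += lr * yi                   # b moves in steps of lr
    return w, b

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2, 1, (50, 2)), rng.normal(-2, 1, (50, 2))])
y = np.array([1] * 50 + [-1] * 50)
for lr in (0.1, 0.05, 0.01):
    w, b = train(X, y, lr, epochs=1000)
    print(lr, b)
```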