
Live Day 2 - Discussing Ridge, Lasso and Logistic Regression Machine Learning Algorithms

Krish Naik
999K subscribers
168K views

Join the community session at ineuron.ai/course/Mega-Community. All the materials will be uploaded there.
Playlist: • Live Day 1- Introducti...
The Oneneuron Lifetime subscription has been extended.
On the Oneneuron platform you will be able to get 100+ courses (at least 20 courses will be added monthly based on your demand).
Features of the course:
1. You can raise a demand for any course (fulfilled within 45-60 days).
2. You can access the innovation lab from ineuron.
3. You can use our incubation based on your ideas.
4. Live sessions coming soon (mostly by Feb).
Use coupon code KRISH10 for an additional 10% discount.
And many more...
Enroll Now
OneNeuron Link: one-neuron.ineuron.ai/
Call our team directly in case of any queries:
8788503778
6260726925
9538303385
866003424

Published: 1 Feb 2022

Comments: 123
@keenchkaat1543 2 years ago
Linear, Ridge, Lasso and Logistic Regression:
Part I:
Agenda for the day: 1:47
Previous session recap: 6:03
Cost function: 6:25, 7:47
Regression example: 7:20
Training data: 8:25, 9:02
Overfitting: 9:13, 10:30
Low bias and high variance: 11:45, 19:17
Underfitting: 12:05
High bias and high variance: 13:45, 19:30
Overfitting and underfitting scenarios: 18:20
Ridge and Lasso Regression situation: 22:00, 22:30
Ridge example: 25:38, 29:50
Hyperparameters: 30:00
Lasso Regression: 32:44, 36:00 (uses)
Feature selection: 35:20
Cross validation: 37:00
Quick summary: 37:33, 38:37 (ridge), 39:40 (lasso), 40:16 (purpose of lasso)
Assumptions of Linear Regression: 46:30
Part II:
Logistic Regression: 47:35, 48:10, 50:00 (scenario)
Why not Linear Regression?: 53:15, 57:28
Squash: 59:00
Sigmoid function: 59:39, 1:01:51
Assumptions: 1:02:44
Cost function: 1:09:38, 1:15:00, 1:16:15, 1:19:20
Convex and non-convex functions: 1:10:45
Logistic regression algorithm: 1:22:00
Confusion matrix: 1:29:50
Accuracy: 1:31:39
Imbalanced dataset: 1:33:28
Precision and recall: 1:37:00, 1:37:45, 1:45:00
F score: 1:46:43, 1:47:46 (F0.5 score), 1:48:38 (F2 score)
@narendratiwari4238 2 years ago
Thanks man
@anuragthakur5787 2 years ago
Thank you
@pankajkumarbarman765 2 years ago
Thanks man!
@keenchkaat1543 2 years ago
@@narendratiwari4238 welcome
@ayeshavlogsfun 2 years ago
Thanks
@SachinModi9 2 years ago
Super explanation of Ridge regression. Fundamentally it's there to prevent overfitting: because the penalty keeps the cost non-zero, the algorithm has to optimize the slope value. Ek teer do nishan (one arrow, two targets): overfitting is prevented, and the slope is optimized for the new line.
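A minimal sketch (my own, not from the video) of the point above, assuming scikit-learn: the L2 penalty shrinks the slopes toward zero while keeping a reasonable fit.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
# Only the first feature truly matters; noise invites overfitting.
y = X @ np.array([3.0, 0.0, 0.0, 0.0, 0.0]) + rng.normal(scale=2.0, size=20)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)   # alpha is the lambda hyperparameter

print("OLS slopes:  ", ols.coef_.round(2))
print("Ridge slopes:", ridge.coef_.round(2))  # shrunk toward 0, never exactly 0
```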
@aesthetic_muscle 2 years ago
Very comprehensive and amazing teaching, sir. I can't thank you enough.
@suriyaprakashkk9365 2 years ago
The first ML session has 247K views, but this second session has only 34K. That is very bad. People always love to start things but then hate to continue them; they don't stick with it. That's why people don't get that many job offers and fail interviews.
@shailuhbd 2 years ago
Well explained in a simple way, sir 🙏
@kishansane8107 2 years ago
Thanks man! God bless you.
@anweshapal8339 2 years ago
Amazing lecture. Can you explain the GLM link function in detail? I feel talking about the range of y and mx+c after the conversion will help.
@sckrockz 2 years ago
Please make similar live or recorded videos on the basics of time series forecasting, explaining all the concepts.
@symbolstarnongbri3411 3 months ago
Great work, Krish!
@talkswithRishabh 2 years ago
Awesome, sir. Really, I want to say thanks for this information presented in a crisp manner. Thanks so much.
@prathameshpashte6881 2 years ago
Thanks
@ashilshah3376 11 months ago
Thank you so much; these detailed, structured videos are very helpful.
@kreetibhardwaj5180 1 year ago
Awesome session, thank you.
@ravinderbadishagandu2647 1 year ago
Thank you Krish, I am watching your ML algorithm videos again and again to get better.
@shreedharchavan7033 2 years ago
Excellent video
@gh504 2 years ago
Thank you sir.
@Priyanka_KumariNov 2 years ago
@krish naik I have gone through multiple sites, and they all say underfitting is high bias and low variance.
@sumitkumar-jm7yj 2 years ago
Sir, you are great.
@NeeRaja_Sweet_Home 2 years ago
Hi Krish, are the below steps correct for a regression problem?
1. In a linear regression model, first we do EDA, feature engineering, and data pre-processing, and split the data into train and test.
2. Create the model using linear regression and evaluate it, e.g. by finding the loss and R2 score.
3. If we see a large loss, we optimize using gradient descent or stochastic gradient descent to minimize the loss.
4. Finally, we check the bias-variance trade-off; if the model is overfitting, use L1 regularisation for preventing overfitting and L2 regularisation for preventing overfitting and feature selection as well.
Thanks,
@nasheeeed 2 years ago
L1 regularisation is Lasso regression, which performs feature selection, not L2.
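A minimal sketch (my own illustration, not the video's code) of this correction, assuming scikit-learn: on data where only two features matter, Lasso zeroes out the rest while Ridge merely shrinks them.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 8))
y = X[:, 0] * 5.0 + X[:, 1] * 2.0 + rng.normal(size=100)  # only 2 useful features

lasso = Lasso(alpha=0.5).fit(X, y)
ridge = Ridge(alpha=0.5).fit(X, y)

print("Lasso:", lasso.coef_.round(2))  # most entries exactly 0.0 -> feature selection
print("Ridge:", ridge.coef_.round(2))  # small but non-zero entries
```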
@Coden69 2 years ago
Thanks man
@minhaoling3056 2 years ago
I think after your 7-day series on ML, DL, EDA, and time series, we can participate in Kaggle competitions. This would be the most efficient way to learn data science! Hope you can do the series for DL and EDA too!
@ammar46 2 years ago
Normal distribution of features is not an assumption of linear regression. We want normal distribution to avoid overfitting caused by outliers.
@saurabhpatel5545 11 months ago
@@ammar46 most relevant comment to what @minhaoling3056 said
@ramdasprajapati7884 1 year ago
Lovely one..
@shaikshamshunnisha7867 1 year ago
Superb explanation sir, wonderful 😊
@abhijeet3514 2 years ago
Many thanks, sir, many thanks.
@rohanshetty6347 1 year ago
Thank you, sir.
@sandipansarkar9211 2 years ago
Finished watching.
@zaheerbeg4810 1 year ago
#Thanks Sir
@retenim28 2 years ago
When I read about linear regression, I always see ordinary least squares mentioned as the most used algorithm to find the theta parameters. Why didn't Krish mention it? Is it not important? Can anyone explain?
@nikhili9559 2 years ago
Now I need a Pepto-Bismol after looking at those equations.
@sharemarket7840 2 years ago
Great
@ayeshavlogsfun 2 years ago
Please cover coding along with the tutorial.
@piyushbaweja5484 1 year ago
@Krish Naik Sir, I am not able to find this content uploaded in the Mega Community course. Please let me know how I can get these slides.
@vagheeshmk3156 8 months ago
You are the Guru........🙏🙏🙏🙏🙏 #KingKrish
@rafibasha1840 2 years ago
1:10:01 Do we get a convex function because of the cost function or because of the sigmoid?
@VIVEK-ld3ey 2 years ago
If we square the less significant coefficients, wouldn't that be even better? The square of a value below one is smaller still, so according to this particular scenario ridge should be better, right?
@naveedarshad6209 4 months ago
00:27 The main topics of discussion are ridge and lasso regression, logistic regression, and the confusion matrix.
08:25 Overfitting and underfitting are two conditions that affect model accuracy.
22:28 L2 regularization adds a penalty term to the cost function being minimized.
27:53 Ridge regularization is used to prevent overfitting by creating a generalized model.
39:15 Preventing overfitting and feature selection are the key purposes of ridge and lasso regression.
45:08 Logistic regression is a classification algorithm.
56:09 Logistic regression is used for binary classification problems with a decision boundary.
1:01:56 Logistic regression creates a sigmoid curve that helps in binary classification.
1:13:03 The logistic regression cost function has specific equations for y=1 and y=0.
1:18:35 Logistic regression cost function and convergence algorithm.
1:31:22 Calculation of basic accuracy and imbalanced data.
1:37:06 The main aim of recall is to identify true positives.
1:48:48 The F-score is calculated based on the value of beta.
Crafted by Merlin AI.
@EEBADUGANIVANJARIAKANKSH 2 years ago
There was a small mistake in the explanation of Lasso (L1) regression: we are supposed to sum the mod of each slope, not take the mod of the sum of slopes; the two are different. In the video you wrote |theta0 + theta1 + theta2 + ... + theta_n|, but the actual L1 norm should be |theta0| + |theta1| + |theta2| + ... + |theta_n|. Hope you get my point. Thank you.
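A quick numeric check of this point (my own example): summing the absolute values of the slopes is not the same as taking the absolute value of their sum.

```python
import numpy as np

theta = np.array([2.0, -3.0, 1.0])
l1_correct = np.sum(np.abs(theta))  # |2| + |-3| + |1| = 6 -> the L1 norm
l1_wrong = np.abs(np.sum(theta))    # |2 - 3 + 1|      = 0 -> no penalty at all
print(l1_correct, l1_wrong)
```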
@SwarnaliMollickA 2 years ago
Thanks
@blankftw7388 1 year ago
Thank you
@paneercheeseparatha 1 year ago
Also, there shouldn't be a 1/2 factor in the logistic regression cost function. 1:22:35
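For reference, a minimal sketch (my own, following the usual convention) of the logistic cost the comment refers to: mean binary cross-entropy, with no 1/2 factor.

```python
import numpy as np

def log_loss(y, p, eps=1e-12):
    """Mean binary cross-entropy: -[y*log(p) + (1-y)*log(1-p)]."""
    p = np.clip(p, eps, 1 - eps)  # avoid log(0)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

y = np.array([1, 0, 1, 1])
p = np.array([0.9, 0.2, 0.7, 0.4])
print(log_loss(y, p))
```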
@milanmishra309 7 months ago
Low bias, high variance (overfitting): when a model has low bias and high variance, it fits the training data very well (low bias) but is overly sensitive to the specific training examples and may not generalize well to new, unseen data (high variance). Overfitting is characterized by capturing noise or random fluctuations in the training data.
To find an optimal model, there is a trade-off between bias and variance. The goal is to strike a balance that minimizes both, leading to a model that generalizes well to new data. Techniques such as regularization and cross-validation are commonly used to address overfitting and find a suitable compromise between bias and variance, as in the sketch below.
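A minimal sketch (my own, assuming scikit-learn) of combining the two techniques mentioned above: let cross-validation choose the regularization strength; the alpha grid is an arbitrary example.

```python
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 10))
y = X[:, 0] * 4.0 + rng.normal(size=60)

# 5-fold CV scores each candidate alpha on held-out data and keeps the best.
model = RidgeCV(alphas=[0.01, 0.1, 1.0, 10.0, 100.0], cv=5).fit(X, y)
print("alpha chosen by 5-fold CV:", model.alpha_)
```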
@yogeshsapkal2593 2 years ago
Sir very very nice sir
@yashwanthsai9304 11 months ago
Bro, please explain this in terms of vectors and how to get the solutions of these equations in vector form.
@kalluriramakrishna5732 1 year ago
Sir, underfitting means high bias and low variance.
@kangkankalita5221 2 years ago
With high bias and high variance, predictions will be inconsistent and inaccurate; low bias and low variance is always the ideal model. Low bias, high variance: overfitting. High bias, low variance: underfitting.
@sttauras 1 year ago
High bias, high variance: underfitting. If the model performs poorly on train data, how will it perform well on test data? Clearly the model will not be able to generalise well.
@shivanibala7708 2 years ago
Can you post a video on Cook's distance and leverage?
@shrikantdeshmukh7951 2 years ago
There is a big myth that the normality assumption is for the dependent feature. In reality, the normality assumption is for the residuals (errors), not the features: if the residuals follow a normal distribution, their sum of squares follows a chi-square distribution, and only then does the ratio MSR/MSE follow an F distribution.
@debashiskundu_bcrec_it_6391 2 years ago
In logistic regression, our dependent feature may depend on multiple independent features; how can I deal with this? Thank you.
@rafibasha4145 2 years ago
Hi Krish, please explain how slopes become 0 in the case of Lasso.
@Ajuppaan 2 years ago
I have a doubt: he mentioned that lasso can do feature selection and ridge can't, and the explanation was about ridge squaring the slope so it increases, but not in lasso. My doubt is: if a feature is not important, its slope will be less than one, so its square will be even smaller; it's not going to increase. Then why is ridge ineffective for feature selection? By that logic it should give a better result than lasso in that case.
@anshikakhandelwal_ 4 months ago
Does anybody have the materials for these live sessions? I tried to find them via the link provided, but it isn't working.
@ajaykushwaha4233 2 years ago
Hi Krish, you have taught much better than Sudhansu.
@sot_adbu_mne2_pps_spring207
Please give an example of Lasso Regression
@esotericwanderer6473 1 year ago
Please don't confuse learners: "the model should follow a normal distribution" is wrong. It is the residuals that should have a normal distribution. In linear regression, errors are assumed to follow a normal distribution with a mean of zero.
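A minimal sketch (my own example, assuming scikit-learn and SciPy) of checking this assumption: test the residuals, not the raw features, for normality.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
X = rng.uniform(size=(200, 2))
y = X @ np.array([2.0, -1.0]) + rng.normal(scale=0.5, size=200)

# Residuals = actual - predicted; these are what should look normal, mean ~0.
residuals = y - LinearRegression().fit(X, y).predict(X)
stat, p_value = stats.shapiro(residuals)  # H0: residuals are normally distributed
print(f"Shapiro-Wilk p-value: {p_value:.3f} (mean residual: {residuals.mean():.3f})")
```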
@sunilyadav3098 2 years ago
Sir, the notes are not available at the given link; it seems to be invalid. Please provide them for practice.
@raghuvarun9541 7 months ago
Can anyone please post the notes here? I'm unable to open the link, as it has expired.
@rafibasha1840 2 years ago
1:02 What is g(z) here, Krish? Is it the predicted variable y?
@subhadeepjash3341 1 month ago
Underfitting means high bias and low variance. Please correct it.
@catchursam 2 years ago
Great session! Someone please help: I am unable to download the material.
@shivanshmishra8395 5 months ago
Please give the link to the notebook.
@shubhnema1189 1 year ago
Does anybody have notes for this course? It would be very helpful if someone could share them or tell me where to access them.
@anirbanpatra3017 1 year ago
Please update the study materials.
@asurma44 2 years ago
Can I know when the live projects are starting?
@chiragbhattad890 1 year ago
The notes are not available on the community.
@jitendranarkhede3819 2 years ago
Sir, where can I get this PDF?
@gunjangandhi4405 2 years ago
Are these for freshers?
@anuradhabalasubramanian9845
Hi Krish, are the materials available even now? How do I download them?
@SachinKumar-cn4ps 11 months ago
Have you downloaded the material/resources?
@Sajjad4739 1 year ago
Hi sir, my dataset contains 297 features and 9 prediction classes, and the results with logistic regression are low. Is it because the outcome is not in a binary format that the results are poor?
@user-yi7dr8ul2h 10 months ago
I am unable to get the material.
@bishnusharma9949 2 years ago
High bias and low variance, for underfitting: 14:26
@its_udaysspecial6198 2 years ago
Hi Krish, I am not able to get into the community forum to get the PDF file you wrote during the course. Have the documents been removed from the community forum?
@shubhnema1189 1 year ago
Did you get the PDF? I too am unable to get it.
@ashwinmanickam 2 years ago
41:52 Assumptions of LR
@dikshagupta3276 2 years ago
In spam classification, why do we use precision?
@hiteshr8514 1 month ago
The notes link is not working.
@starab6901 2 years ago
Where are these notes?
@d-02-kanchigupta44 6 months ago
Can someone share the PDF of this series?
@dr.vishwadeepaksinghbaghel3500
linear regression
@tanmaychakraborty7818 2 years ago
Please arrange a coding session for ML.
@chiku18053 2 years ago
Overfitting and underfitting use
@solo-ue4ii 1 year ago
Just have a little doubt here: 41:00 why didn't we divide the cost function by 2m?
@ultra_legend23 2 years ago
Hi guys, asking this for a requirement I'm working on: how do I reduce the false positives in my model? I'm getting 1700 positive predictions, of which only 46 are actual positives. It would be great if someone could help me. Thanks in advance!
@sanjeevtyagi501 2 years ago
Adjust the threshold or cut-off criterion: for example, if you currently predict y=1 when the probability is greater than 0.5, raise the cut-off to 0.6, then 0.7. This will reduce your FPs, but some of them will mostly be converted into FNs.
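A minimal sketch (my own, assuming scikit-learn) of this threshold sweep: raising the cut-off reduces false positives at the cost of more false negatives.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

# Imbalanced toy data, roughly mirroring the scenario in the question above.
X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)
proba = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]

for threshold in (0.3, 0.5, 0.7):
    pred = (proba >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
    print(f"threshold={threshold}: FP={fp}, FN={fn}")  # FP falls as threshold rises
```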
@magicharshil1730 1 year ago
I just completed my boards. Can I join? Is it relevant for me?
@sidindian1982 1 year ago
14:32 Correction: underfitting occurs when the model or algorithm shows low variance but high bias (in contrast to overfitting, which comes from high variance and low bias).
@sttauras 1 year ago
If the model has high bias, how will it have low variance?
@shrikantdeshmukh7951 2 years ago
Assumptions of linear regression: linearity; normality of errors; independence of errors; no autocorrelation; homoscedasticity (equal residual variance, with the mean of the residuals equal to 0).
@ammar46 2 years ago
True. Normal distribution of features is not an assumption of linear regression; we want normal distribution to avoid overfitting caused by outliers.
@palvinderbhatia3941 2 years ago
Overfitting: good performance on the training data, poor generalization to other data (low bias but high variance). Underfitting: poor performance on the training data and poor generalization to other data (high bias and high variance).
@annyd3406 1 year ago
Most important part: 1:29:00
@SamBuchl 7 months ago
Just published by @Krish Naik, a new video describing Lasso and ElasticNet: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-qbJKrlOxlJA.html - with helpful numerical examples of how feature selection works in Lasso.
@kruan2661 1 year ago
I see no reason why (h_theta(x) - y)^2 for logistic regression is non-convex. 🧐
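A small numeric check of this question (my own sketch, one sample with x = 1 and y = 1): along a single parameter direction, the squared-error cost through a sigmoid has regions of negative curvature, while the log loss does not.

```python
import numpy as np

theta = np.linspace(-8, 8, 400)
sig = 1.0 / (1.0 + np.exp(-theta))  # h_theta(x) for x = 1

sq_cost = (sig - 1.0) ** 2   # squared error for y = 1
log_cost = -np.log(sig)      # log loss for y = 1

def has_negative_curvature(cost):
    # Negative second differences anywhere => the curve is not convex.
    return bool(np.any(np.diff(cost, 2) < 0))

print("squared error non-convex:", has_negative_curvature(sq_cost))   # True
print("log loss non-convex:     ", has_negative_curvature(log_cost))  # False
```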
@gig3193 5 days ago
1:41:51 / 1:52:40
@chiku18053 2 years ago
Sir, can you explain this in Hindi as well?
@krishnaik06 2 years ago
It has already been uploaded on the Krish Hindi channel.
@MobiLights 19 hours ago
Sir, please update the phone numbers and the links in the description
@tom-shellby 2 years ago
Sir, if logistic regression solves a classification problem, then why is it called logistic regression and not logistic classification?
@rupalacharyya4606 2 years ago
Because eventually it predicts the probability of the dependent variable for a particular class, and hence the output is a continuous variable. That is why it's called logistic regression.
@ashabhumza3394 2 years ago
@@rupalacharyya4606 Thanks, I also had the same confusion, but now it's clear with your explanation 👍
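A minimal sketch (my own illustration) of that answer: the sigmoid output is a continuous probability, and the class label only appears when you threshold it afterwards.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-3.0, 0.0, 2.5])      # theta^T x for three examples
proba = sigmoid(z)                   # continuous "regression" output in (0, 1)
labels = (proba >= 0.5).astype(int)  # the classification step is just a cut-off
print(proba.round(3), labels)
```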
@data_pathavan4585 2 years ago
I don't understand how underfitting = high bias and high variance.
@data_pathavan4585 2 years ago
Please, someone give me a link to read about it.
@pritampatra6077 2 years ago
Underfitting - high bias. Overfitting - high variance.
@kunjjani1683 2 years ago
Bias relates to training-data accuracy and variance relates to testing-data accuracy. When we get low accuracy on training data we have high bias, meaning the data is not fitted correctly; similarly, when we get low accuracy on testing data we have high variance, meaning the predictions are not accurate. Hope the explanation helps.
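A minimal sketch (my own example, assuming scikit-learn) of this train-vs-test framing: a high-degree polynomial scores near-perfectly on training data but poorly on test data, i.e. low bias, high variance.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(40, 1))
y = np.sin(3 * X[:, 0]) + rng.normal(scale=0.3, size=40)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# Degree 15 is deliberately too flexible for 30 training points.
model = make_pipeline(PolynomialFeatures(degree=15), LinearRegression()).fit(X_tr, y_tr)
print("train R2:", round(model.score(X_tr, y_tr), 2))  # typically near 1.0
print("test  R2:", round(model.score(X_te, y_te), 2))  # typically much lower
```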
@ayushgupta2537 2 years ago
Please teach on a white screen.
@TheGuts09 3 months ago
Why are you making most of the videos members-only content when they were free before? Is it greed for money now?
@hafizhhasyhari 1 year ago
Thanks