
L1 and L2 Regularization in Machine Learning: Easy Explanation for Data Science Interviews 

Emma Ding
7K views

Regularization is a machine learning technique that adds a regularization term to a model's loss function in order to improve its generalization. In this video, I explain L1 and L2 regularization, the main differences between the two methods, and the pros and cons of each so you can decide when to apply which.
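To make this concrete, here is a minimal sketch of both penalties using scikit-learn's Lasso (L1) and Ridge (L2); the data and alpha values are made up purely for illustration:

import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([3.0, 0.0, 0.0, 1.5, 0.0]) + rng.normal(scale=0.1, size=100)

lasso = Lasso(alpha=0.1).fit(X, y)  # L1: penalty proportional to sum(|w|)
ridge = Ridge(alpha=0.1).fit(X, y)  # L2: penalty proportional to sum(w**2)

print(lasso.coef_)  # L1 tends to drive some coefficients exactly to zero (sparse)
print(ridge.coef_)  # L2 shrinks coefficients toward zero but rarely to exactly zero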
🟢Get all my free data science interview resources
www.emmading.com/resources
🟡 Product Case Interview Cheatsheet www.emmading.com/product-case...
🟠 Statistics Interview Cheatsheet www.emmading.com/statistics-i...
🟣 Behavioral Interview Cheatsheet www.emmading.com/behavioral-i...
🔵 Data Science Resume Checklist www.emmading.com/data-science...
✅ We work with Experienced Data Scientists to help them land their next dream jobs. Apply now: www.emmading.com/coaching
// Comment
Got any questions? Something to add?
Write a comment below to chat.
// Let's connect on LinkedIn:
/ emmading001
====================
Contents of this video:
====================
00:00 Introduction
00:21 Interview Questions
00:41 What is regularization?
01:27 When to use regularization?
01:47 Regularization techniques
03:44 L1 and L2 regularizations
03:55 L1 Regularization
08:03 L2 Regularization
10:50 L1 vs. L2 Regularization
11:47 Outro

Published: Aug 3, 2024

Comments: 11
@AllieZhao · 1 year ago
Much clearer than what I learned elsewhere. I also noticed that you slowed your speaking pace, which makes it easier for people to follow.
@MinhNguyen-lz1pg · 1 year ago
Man, I tried to find videos and blog posts about this topic, and most of them just scratch the surface. Thanks for the deep analysis and comparison!
@emma_ding · 1 year ago
So glad you found it helpful, Minh! Thanks for watching. 😊
@dimasushko9023 · 4 months ago
A detail for anyone who'd like to get really deep into this: regarding the L1 penalty, we can't actually choose w1 = 1 and w2 = 0, since the loss consists of two parts: the diamond-shaped L1 penalty plus the elliptical initial loss function. The pairs (w1, w2) = (0, 1) and (1, 0) have the same L1 penalty value, 1 + 0 = 0 + 1 = 1, so the tie is broken by the initial loss function, which also depends on (w1, w2). For w1 = 0 and w2 = 1 (closer to the center point), the loss is smaller than for w1 = 1 and w2 = 0 (farther from the center point), as the contour-line plot shows. Therefore the optimizer won't go there and will converge on w1 = 0 and w2 = 1.
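A quick numeric sketch of that argument (the center point, i.e. the unregularized optimum, and the penalty strength below are made-up values, and the contours are circular for simplicity):

import numpy as np

w_star = np.array([0.2, 1.2])  # assumed unregularized optimum (center of the contours)

def total_loss(w, lam=1.0):
    data_loss = np.sum((w - w_star) ** 2)  # initial (elliptical) loss term
    l1_penalty = lam * np.sum(np.abs(w))   # diamond-shaped L1 term
    return data_loss + l1_penalty

print(total_loss(np.array([0.0, 1.0])))  # 1.08 -- same penalty, smaller data loss
print(total_loss(np.array([1.0, 0.0])))  # 3.08 -- same penalty, larger data loss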
@HumbertoMoura · 1 year ago
Great explanation, Emma! Have a nice day!
@louisforlibertarian · 1 year ago
Great vid! It's like a fast recap of a college stat class.
@crystalcleargirl07 · 7 months ago
Thank you so much for this clear explanation. It has helped me more than the Coursera course.
@muhannedalogaidi7877 · 11 months ago
Hello Emma, I've started switching to AI/ML and noticed your website and courses. Are your training and courses suitable for a beginner? Also, I'm not sure whether you have dedicated courses on statistics and mathematics for AI/ML. Thank you!
@cosmicfluke3718 · 1 year ago
How can increasing alpha decrease the weights? Can you please explain? 0.1 is bigger than 0.001, and if I have a weight of 0.4, then 0.1 * 0.4 = 0.04, whereas 0.001 * 0.4 = 0.0004. So the smaller the alpha, the smaller the term, which will be closer to zero, correct? I feel that what you mean by a bigger alpha is an alpha with a bigger negative power. Isn't that right? Can you please clarify?
@davidskarbrevik · 1 year ago
"increasing the alpha would decrease the weight" does not refer to the calculation of alpha * weights, it refers to what happens when you minimize your regularized loss function.
@emma_ding · 1 year ago
Many of you have asked me to share my presentation notes, and now… I have them for you! Download all the PDFs of my Notion pages at www.emmading.com/get-all-my-free-resources. Enjoy!