Who's Adam and What's He Optimizing? | Deep Dive into Optimizers for Machine Learning! 

Sourish Kundu
2.5K subscribers
42K views

Welcome to our deep dive into the world of optimizers! In this video, we'll explore the crucial role that optimizers play in machine learning and deep learning. From Stochastic Gradient Descent to Adam, we cover the most popular algorithms, how they work, and when to use them.
🔍 What You'll Learn:
Basics of Optimization - Understand the fundamentals of how optimizers work to minimize loss functions
Gradient Descent Explained - Dive deep into the most foundational optimizer and its variants like SGD, Momentum, and Nesterov Accelerated Gradient
Advanced Optimizers - Get to grips with Adam, RMSprop, and AdaGrad, learning how they differ and their advantages
Intuitive Math - Unveil the equations behind each optimizer and learn how each one stands out from the others
Real World Benchmarks - Review real-world experiments from papers in domains ranging from computer vision to reinforcement learning, showing how these optimizers fare against each other (a minimal update-rule sketch follows below)
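For quick reference, here is a minimal NumPy sketch of the update rules covered in the video. It is simplified and illustrative only; the function names and hyperparameters are placeholders, not the exact code behind the animations.
```python
import numpy as np

def sgd_step(w, g, lr=0.01):
    # Plain (stochastic) gradient descent: step against the gradient
    return w - lr * g

def momentum_step(w, g, v, lr=0.01, beta=0.9):
    # Accumulate an exponentially weighted velocity, then step along it
    v = beta * v + g
    return w - lr * v, v

def rmsprop_step(w, g, s, lr=0.001, beta=0.9, eps=1e-8):
    # Track a running average of squared gradients (element-wise)
    s = beta * s + (1 - beta) * g**2
    return w - lr * g / (np.sqrt(s) + eps), s

def adam_step(w, g, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    # First moment (momentum) and second moment (RMSProp-style scaling)
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g**2
    m_hat = m / (1 - b1**t)   # bias correction; t starts at 1
    v_hat = v / (1 - b2**t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```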
🔗 Extra Resources:
3Blue1Brown - • Neural networks
Artem Kirsanov - • The Most Important Alg...
📌 Timestamps:
0:00 - Introduction
1:17 - Review of Gradient Descent
5:37 - SGD w/ Momentum
9:26 - Nesterov Accelerated Gradient
10:55 - Root Mean Squared Propagation
13:59 - Adaptive Gradients (AdaGrad)
14:47 - Adam
18:12 - Benchmarks
22:01 - Final Thoughts
Stay tuned and happy learning!

Published: June 6, 2024

Comments: 191
@akshaynaik4197
@akshaynaik4197 Месяц назад
The Adam optimizer is a very complex topic, and you introduced and explained it very well in a surprisingly short video! I'm impressed, Sourish! Definitely one of my favorite videos from you!
@sourishk07
@sourishk07 Месяц назад
Thank you so much Akshay! I'm glad you enjoyed it!
@elirane85
@elirane85 29 дней назад
I don't disagree that it's a very good video. But calling something that can be taught in a 20-minute YouTube video a "very complex topic" is funny. When I was in college studying CS (before Adam even existed, I am this old), the entire topic of AI and neural networks was covered in one semester with only 2 hours of class per week. In fact, that is what is both surprising and amazing about the current state of AI: the math behind it is so simple that most researchers were positive we would need much more complex algorithms to get to where we are now. But then teams like OpenAI proved that it was just a matter of massively scaling up those simple concepts and feeding them an insane amount of data.
@sourishk07
@sourishk07 28 дней назад
@@elirane85 Hi! Thanks for commenting. I believe Akshay was just being nice haha. But you're definitely right about how the potential lies not in complex algorithms, but the scale at which these algorithms are run! I guess that's the marvel of modern hardware
@AbhishekVerma-kj9hd
@AbhishekVerma-kj9hd 23 дня назад
I remember when my teacher gave me an assignment on optimizers. I went through blogs, papers, and videos, but everywhere I looked I saw different formulas and I was so confused. You explained everything in one place very clearly.
@sourishk07
@sourishk07 22 дня назад
I'm really glad I was able to help!
@raze0ver
@raze0ver Месяц назад
Love the simplified explanation and animation! Videos with this quality and educational value would be worth millions of likes and subscribers on other channels... this is so underrated.
@sourishk07
@sourishk07 Месяц назад
Haha I really appreciate the kind words! More content like this is on the horizon
@theardentone791
@theardentone791 17 дней назад
Absolutely loved the graphics and the paper-based evidence of how the different optimizers work, all in the same video. You just earned a loyal viewer.
@sourishk07
@sourishk07 17 дней назад
Thank you so much! I'm honored to hear that!
@aadilzikre
@aadilzikre 17 дней назад
Very Clear Explanation! Thank you. I especially appreciate the fact that you included the equations.
@sourishk07
@sourishk07 14 дней назад
Thank you! And I’m glad you enjoyed it
@user-nv3fy6bd4p
@user-nv3fy6bd4p Месяц назад
Sir, your exposition is excellent: the presentation, the cadence, the simplicity.
@sourishk07
@sourishk07 Месяц назад
I really appreciate that! Looking forward to sharing more content like this
@TEGEKEN
@TEGEKEN 29 дней назад
Nice animations and nice explanations of the math behind them. I was curious about how different optimizers work but didn't want to spend an hour going through documentation, and this video answered most of my questions! One that remains is about the AdamW optimizer: I read that it is practically just a better version of Adam, but didn't really find any intuitive explanation of how it affects training (ideally with graphics like these hahaha). There are not many videos on YouTube about it.
@sourishk07
@sourishk07 28 дней назад
I'm glad I was able to be of help! I hope to make a part 2 where I cover more optimizers such as AdamW! Stay tuned!
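In the meantime, here's a rough sketch of my understanding of the difference: AdamW decouples weight decay from the gradient, so the decay term is not rescaled by the adaptive 1/sqrt(v) factor the way an L2 penalty folded into the gradient would be. This is illustrative NumPy only, not code from the video, and the hyperparameter values are placeholders.
```python
import numpy as np

def adamw_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8, wd=1e-2):
    # Moments are computed from the raw gradient, exactly as in Adam
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g**2
    m_hat = m / (1 - b1**t)   # bias correction; t starts at 1
    v_hat = v / (1 - b2**t)
    # Weight decay is applied directly to the weights, outside the adaptive
    # scaling (plain Adam + L2 would instead add wd * w to g before the
    # moment updates, so the decay would get divided by sqrt(v_hat) too)
    return w - lr * (m_hat / (np.sqrt(v_hat) + eps) + wd * w), m, v
```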
@wut3v3r77
@wut3v3r77 Месяц назад
Wow! Great video, more of these deep dives into basic components of ML please
@sourishk07
@sourishk07 Месяц назад
Thank you for watching. We have many more topics lined up!
@jonas4100
@jonas4100 Месяц назад
Incredible video. I especially love the math and intuition behind it that you explain. Keep it up!
@sourishk07
@sourishk07 Месяц назад
Thanks, will do! Don't worry, there is more to come
@Param3021
@Param3021 2 дня назад
This video is amazing! You covered one of the most important topics in ML, with all the major optimization algorithms. I literally had no idea about Momentum, NAG, RMSprop, AdaGrad, or Adam. Now I have a good overview of them all and will deep dive into each of them. Thanks for the video! ❤
@sourishk07
@sourishk07 2 дня назад
I'm really glad to hear that it was helpful! Good luck on your deep dive!
@alexraymond999
@alexraymond999 Месяц назад
Excellent video, please keep it up! Subscribed and will share with my colleagues too :)
@sourishk07
@sourishk07 Месяц назад
I really appreciate it! Excited to share more content
@MD-zd3du
@MD-zd3du 11 дней назад
Thanks for the great explanations! The graphics and benchmark were particularly useful.
@sourishk07
@sourishk07 3 дня назад
I'm really glad to hear that!
@tannergilliland6105
@tannergilliland6105 Месяц назад
I am coding backpropagation right now and this helped me so much.
@sourishk07
@sourishk07 Месяц назад
Glad to hear that! That's a very exciting project and I wish you luck on it!
@joaoguerreiro9403
@joaoguerreiro9403 22 дня назад
Just found out your channel. Instant follow 🙏🏼 Hope we can see more Computer Science content like this. Thank you ;)
@sourishk07
@sourishk07 22 дня назад
Thank you so much for watching! Don't worry, I have many more videos like this planned! Stay tuned :)
@AndBar283
@AndBar283 День назад
Thank you for such an easy, simple, and great explanation. I searched for a quick overview of how Adam works and found your video. I am currently training a DRL REINFORCE policy gradient algorithm with theta parameters as the weights and biases of a CNN, which is exactly where Adam is involved. Thanks again, very informative.
@jcorey333
@jcorey333 Месяц назад
This was a really interesting video! I feel like this helped me understand the intuitions behind optimizers, thank you!
@sourishk07
@sourishk07 Месяц назад
I really appreciate the comment! Glad you could learn something new!
@bernard2735
@bernard2735 Месяц назад
Thank you for a very clear explanation. Liked and subscribed
@sourishk07
@sourishk07 Месяц назад
Thanks for the sub! I'm glad you enjoyed the video
@MalTramp
@MalTramp 26 дней назад
Nice video :) I appreciate the visual examples of the various optimizers.
@sourishk07
@sourishk07 26 дней назад
Glad to hear that!
@Bolidoo
@Bolidoo 2 дня назад
Woah, what a great video! And the way you're helping people in the comments kind of has me amazed. Thank you for your work!
@sourishk07
@sourishk07 2 дня назад
Haha thank you, I really appreciate that!
@signisaer1705
@signisaer1705 Месяц назад
I never comment on anything, but wanted to let you know that this video was really well done. Looking forward to more!
@sourishk07
@sourishk07 Месяц назад
Thank you, I really appreciate it!
@ShadowD2C
@ShadowD2C 21 день назад
What a good video. I watched it and bookmarked it so I can come back to it when I understand more about the topic.
@sourishk07
@sourishk07 20 дней назад
Glad it was helpful! What concepts do you feel like you don’t understand yet?
@MaltheHave
@MaltheHave Месяц назад
So cool, just subscribed! I literally just started researching more about how optimizers work this week as part of my bachelor's thesis. Once I'm finished with my thesis I would love to see if I can create my own optimizer algorithm. Thanks for sharing! Do you happen to have the manimgl code you used to create the animations for visualizing the gradient path of the optimizers?
@sourishk07
@sourishk07 Месяц назад
Thank you for subscribing! Maybe once you make your own optimizer, I can make a video on it for you! I do have the manimgl code but it's so messy haha. I do plan on publishing all of the code for my animations once I get a chance to clean up the codebase. However, if you want the equations for the loss functions in the meantime, let me know!
@EvolHeartriseAI-qn5oi
@EvolHeartriseAI-qn5oi Месяц назад
Great video, glad the algorithm brought me. The visualizations helped a lot
@sourishk07
@sourishk07 Месяц назад
Thank you so much! I'm glad you liked the visualizations! I had a great time working on them
@theophilelouison7249
@theophilelouison7249 Месяц назад
That is some quality work sir!
@sourishk07
@sourishk07 Месяц назад
I really appreciate that! Don't worry, we got more to come
@orellavie6233
@orellavie6233 25 дней назад
Nice vid. I'd mention MAS too, to explicitly say that Adam is weaker at the start and can fall into local minima (until it gets enough data), while SGD performs well with its stochasticity and then becomes slower, so both methods performed roughly as I mentioned in the MAS paper.
@sourishk07
@sourishk07 24 дня назад
Thank you for the feedback! These are great things to include in a part 2!
@jeremiahvandagrift5401
@jeremiahvandagrift5401 20 дней назад
Very nicely explained. Wish you brought up the relationship between these optimizers and numerical procedures though. Like how vanilla gradient descent is just Euler's method applied to a gradient rather than one derivative.
@sourishk07
@sourishk07 20 дней назад
Thank you so much. And there were so many topics I wanted to cram into this video but couldn't in the interest of time. That is a very interesting topic to cover and I'll add it to my list! Hopefully we can visit it soon :) I appreciate the idea
@norman9174
@norman9174 Месяц назад
best video on optimizers thanks
@sourishk07
@sourishk07 Месяц назад
Glad you think so!
@JoseDiaz-sr4co
@JoseDiaz-sr4co Месяц назад
Very well explained and awesome animations. Hope to see more content in the future!
@sourishk07
@sourishk07 Месяц назад
Don't worry, I have many more videos like this lined up!
@nark4837
@nark4837 Месяц назад
You are incredibly intelligent to explain such a complex topic, built from tens of research papers' worth of knowledge, in a single 20-minute video... what the heck!
@sourishk07
@sourishk07 Месяц назад
Wow thank you for those kind words! I'm glad you enjoyed the video!
@punk3900
@punk3900 25 дней назад
I love it! You are also nice to hear and see! :D
@sourishk07
@sourishk07 25 дней назад
Haha thank you very much!
@MrLazini
@MrLazini 13 дней назад
very clearly explained - thanks
@sourishk07
@sourishk07 3 дня назад
Glad you liked it
@kevinlin3998
@kevinlin3998 Месяц назад
The animations are amazing - what did you use to make them??
@sourishk07
@sourishk07 Месяц назад
Thank you so much! I'm glad you liked them. I used the manimgl python library!
@jawgboi9210
@jawgboi9210 Месяц назад
I'm not even a data scientist or machine learning expert but I enjoyed this video!
@sourishk07
@sourishk07 Месяц назад
I love to hear that!
@Alice_Fumo
@Alice_Fumo 18 дней назад
I used to have networks where the loss was fluctuating in a very periodic manner every 30 or so steps and I never knew why that happened. Now it makes sense! It just takes a number of steps for the direction of Adam weight updates to change. I really should have looked this up earlier.
@sourishk07
@sourishk07 17 дней назад
Hmm while this might be Adam's fault, I would encourage you to see if you can replicate the issue with SGD w/ Momentum or see if another optimizer without momentum solves it. I believe there are a wide array of reasons as to why this periodic behavior might emerge.
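If you're using PyTorch, swapping the optimizer to test this is a one-liner; a rough sketch (the model and learning rates here are just placeholders for whatever you're already using):
```python
import torch

model = torch.nn.Linear(10, 1)  # stand-in for your actual network

opt_adam = torch.optim.Adam(model.parameters(), lr=1e-3)
opt_sgdm = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
opt_sgd  = torch.optim.SGD(model.parameters(), lr=1e-2)  # no momentum at all
```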
@sayanbhattacharya3233
@sayanbhattacharya3233 Месяц назад
Good work man. Which tool do you use for making the animations?
@sourishk07
@sourishk07 Месяц назад
Thank you! I used manimgl!
@simonstrandgaard5503
@simonstrandgaard5503 Месяц назад
Great explanations
@sourishk07
@sourishk07 Месяц назад
Glad you think so!
@EobardUchihaThawne
@EobardUchihaThawne Месяц назад
Adam - A dynamic adjustment mechanism
@sourishk07
@sourishk07 Месяц назад
Yes, that's exactly what it is!
@sohamkundu9685
@sohamkundu9685 Месяц назад
Great video!
@sourishk07
@sourishk07 Месяц назад
Thanks!
@VincentKun
@VincentKun 15 дней назад
This video is super helpful my god thank you
@sourishk07
@sourishk07 14 дней назад
I’m really glad you think so! Thanks
@Mutual_Information
@Mutual_Information 19 дней назад
Great video dude!
@sourishk07
@sourishk07 19 дней назад
Thanks so much! I've seen your videos before! I really liked your videos about Policy Gradients methods & Importance Sampling!!!
@Mutual_Information
@Mutual_Information 19 дней назад
@@sourishk07 thanks! There was some hard work behind them, so I'm happy to hear they're appreciated. But I don't need to tell you that. This video is a masterpiece!
@sourishk07
@sourishk07 19 дней назад
I really appreciate that coming from you!!
@toxoreed4313
@toxoreed4313 23 часа назад
Good video, TY
@spencerfunk6697
@spencerfunk6697 29 дней назад
Dude, love the video title. Came just to comment that. I think I searched something like "who is adam w" when I started my AI journey.
@sourishk07
@sourishk07 28 дней назад
Haha I'm glad you liked the title. Don't worry I did that too!
@MrmmmM
@MrmmmM Месяц назад
Hey what do you think, could we use reinforcement learning to train the perfect optimizer?
@sourishk07
@sourishk07 Месяц назад
Yeah, as crazy as it sounds, there is already research being done in this area! I encourage you to take a look at some of these papers if you're interested:
1. Learning to Learn by Gradient Descent by Gradient Descent (2016, Andrychowicz et al.)
2. Learning to Optimize (2017, Li and Malik)
@ChristProg
@ChristProg 25 дней назад
Thank you so much, sir. But I would like you to create videos on upconvolutions, or transposed convolutions. Thank you for understanding.
@sourishk07
@sourishk07 25 дней назад
Hi! Thank you for the great video ideas. I'll definitely add those to my list!
@hotmole7621
@hotmole7621 Месяц назад
well done!
@sourishk07
@sourishk07 Месяц назад
Thank you!
@Basant5911
@Basant5911 Месяц назад
Could you share about batch SGD for pre-training of LLMs? What were the results?
@sourishk07
@sourishk07 Месяц назад
Hi! I haven't performed any pre-training for LLMs yet, but that's a good idea for a future video. I'll definitely add it to my list!
@rehanbhatti5843
@rehanbhatti5843 Месяц назад
Thank you
@sourishk07
@sourishk07 Месяц назад
You're welcome. Thanks for watching!
@jevandezande
@jevandezande Месяц назад
What is the mathematical expression for the boss cost function at the end?
@sourishk07
@sourishk07 Месяц назад
Haha it took me a long time to "engineer" the cost function to look exactly how it did! It consists of three parts: the parabola shape and two holes, which are added together to yield the final result. I've inserted the Python code below, but it might seem overwhelming! If you're really curious, I encourage you to change each of the constants and see how the function changes.
```python
import numpy as np

# Constants controlling the width (w), depth (h), and center of the two holes,
# plus the curvature (c) and elongation (bowl_constant) of the parabolic bowl.
w1 = 2
w2 = 4
h1 = 0.5
h2 = 0.75
c = 0.075
bowl_constant = 3**2
center_1_x, center_1_y = -0, -0.0
center_2_x, center_2_y = 1, 1

def f(x, y):
    # Shallow parabolic bowl, stretched along y by bowl_constant
    parabola = c * x**2 + c * y**2 / bowl_constant
    # Two Gaussian "holes" subtracted from the bowl
    hole1 = -h1 * (np.exp(-w1*(x-center_1_x)**2) * np.exp(-w1*(y-center_1_y)**2))
    hole2 = -h2 * (np.exp(-w2*(x-center_2_x)**2) * np.exp(-w2*(y-center_2_y)**2))
    return parabola + hole1 + hole2
```
@jevandezande
@jevandezande Месяц назад
@@sourishk07 This is really great! I work in molecular QM and needed an image to display a potential energy surface for a reaction and the transition between reactants and products. This is one of the cleanest analytical ones that I've seen, and I'll be using this in the future, thanks!
@sourishk07
@sourishk07 Месяц назад
@@jevandezande I'm glad I was able to be of assistance! Let me know if you need anything else!
@asmithgames5926
@asmithgames5926 11 дней назад
I tried using momentum for a 3SAT optimizer I worked on back in 2010. It doesn't help with 3SAT since all the variables are binary. It's cool that it works with NNs though!
@sourishk07
@sourishk07 3 дня назад
Oh wow that's an interesting experiment to run! Glad you decided to try it out
@byzantagaming648
@byzantagaming648 16 дней назад
Am I the only one who doesn't understand the RMS propagation math formula? What is the gradient squared: is it per component, or is it the Hessian? And how do you divide a vector by another vector? Could someone explain, please?
@sourishk07
@sourishk07 14 дней назад
Hi! Sorry, this is something I should've definitely clarified in the video! I've gotten a couple other comments about this as well. Everything in the formula is component-wise. You square each element in the gradient matrix individually & you perform component-wise division, along with the component-wise square root. Again, I really apologize for the confusion! I'll make sure to make these things clearer next time.
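To make it concrete, here's a tiny NumPy sketch where every operation is element-wise (the numbers are made up, purely for illustration):
```python
import numpy as np

g = np.array([0.5, -2.0, 0.1])      # gradients for three parameters
v = np.array([0.2, 1.0, 0.01])      # running average of squared gradients
beta, lr, eps = 0.9, 1e-3, 1e-8

v = beta * v + (1 - beta) * g**2    # g**2 squares each component
step = lr * g / (np.sqrt(v) + eps)  # sqrt and division are also per component
```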
@benhurwitz1617
@benhurwitz1617 Месяц назад
This is sick
@sourishk07
@sourishk07 Месяц назад
Thank you Ben!
@loose-leif
@loose-leif Месяц назад
Fantastic
@sourishk07
@sourishk07 Месяц назад
Thank you! Cheers!
@kigas24
@kigas24 26 дней назад
Best ML video title I think I've ever seen haha
@sourishk07
@sourishk07 25 дней назад
LOL thank you so much!
@AndreiChegurovRobotics
@AndreiChegurovRobotics 25 дней назад
Great, great, great!!!
@sourishk07
@sourishk07 24 дня назад
Thanks!!!
@reinerwilhelms-tricarico344
@reinerwilhelms-tricarico344 Месяц назад
When you write square root of V_t in a denominator, do you mean this component-wise? V is a high dimensional vector I assume. Also what if it has negative values? Don’t you mean the norm of V?
@sourishk07
@sourishk07 Месяц назад
That's a good point. I did mean component-wise, which I should've mentioned in the video. Also, V shouldn't have negative values because we're always squaring the gradients when calculating the V term. Since beta is always between 0 and 1, we're always multiplying positive numbers to calculate V.
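Written with explicit element-wise notation (⊙ for the Hadamard product), the update I had in mind is roughly:
```latex
V_t = \beta\, V_{t-1} + (1-\beta)\, g_t \odot g_t, \qquad
W_{t+1} = W_t - \frac{\alpha}{\sqrt{V_t} + \epsilon} \odot g_t
```
where the square root and the division are applied to each entry separately; every entry of V_t is a weighted sum of squares, so it can never be negative.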
@vladyslavkorenyak872
@vladyslavkorenyak872 17 дней назад
I wonder if we could use the same training loop NVIDIA used in the DrEureka paper to find even better optimizers.
@sourishk07
@sourishk07 14 дней назад
Hi! Using reinforcement learning in the realm of optimizers is a fascinating concept and there's already research being done on it! Here are a couple of cool papers that might be worth your time:
1. Learning to Learn by Gradient Descent by Gradient Descent (2016, Andrychowicz et al.)
2. Learning to Optimize (2017, Li and Malik)
It would be fascinating to see GPT-4 help write more efficient optimizers though. LLMs helping accelerate the training process for other AI models seems like the gateway into AGI.
@vladyslavkorenyak872
@vladyslavkorenyak872 14 дней назад
@@sourishk07 Thanks for the answer!
@LouisChiaki
@LouisChiaki Месяц назад
The notation for the gradient is a bit weird but nice video!
@sourishk07
@sourishk07 20 дней назад
Sorry haha. I wanted to keep it as simple as possible, but maybe I didn't do such a good job at that! Will keep in mind for next time
@mahdipourmirzaei1048
@mahdipourmirzaei1048 Месяц назад
What about the Adabelief optimizer? I use it most of the time and it is a bit faster and needs less tuning than the Adam optimizer.
@sourishk07
@sourishk07 Месяц назад
Hi! I've read the Adabelief paper and it seems really promising, but I wanted to focus on the preliminary optimizers first. I think this might be a great candidate if I were to work on a part 2 to this video! Thanks for the idea!
@lando7528
@lando7528 Месяц назад
Yooo Sourish, this is heat. Do you remember HS speech?
@sourishk07
@sourishk07 Месяц назад
Thanks Lan! Yeah I remember high school speech! It's crazy to reconnect on YouTube lol
@bobinwobbin4094
@bobinwobbin4094 18 дней назад
The "problem" that the Adam algorithm is presented as solving here (the one with local and global minima) is simply wrong: in small numbers of dimensions this is in fact a problem, but the conditions for the existence of a local minimum grow more and more restrictive as the number of dimensions increases. So in practice, when you have millions of parameters and therefore dimensions, local minima that aren't the global minimum will simply not exist; the probability of such a minimum existing is unfathomably small.
@sourishk07
@sourishk07 17 дней назад
Hi! This is a fascinating point you bring up. I did say at the beginning that the scope of optimizers wasn't just limited to neural networks in high dimensions, but could also be applicable in lower dimensions. However, I probably should've added a section about saddle points to make this part of the video more thorough, so I really appreciate the feedback!
@cmilkau
@cmilkau Месяц назад
What is the square of the gradient?
@sourishk07
@sourishk07 Месяц назад
Sorry, maybe I'm misinterpreting your question, but just to clarify the RMSProp optimizer: After the gradient term is calculated during backpropagation, you take the element-wise square of it. These values help determine by how much to modulate the learning rates individually for each parameter! The reason squaring is useful is because we don't actually care about the sign, but rather just the magnitude. Same concept applies to Adam. Let me know if that answers your question.
@alexanderwoxstrom3893
@alexanderwoxstrom3893 16 дней назад
Sorry, did I misunderstand something, or did you say SGD when it was only GD you talked about? When were stochastic elements discussed?
@sourishk07
@sourishk07 14 дней назад
I guess technically I didn’t talk about how the dataset was batched when performing GD, so no stochastic elements were touched upon. However, I just used SGD as a general term to talk about vanilla gradient descent, like how PyTorch and Tensorflow’s APIs are structured.
@alexanderwoxstrom3893
@alexanderwoxstrom3893 14 дней назад
@@sourishk07 I see! It would be interesting to see if/how the stochastic element helps with the landscape l(x, y) = x^2 + a|y| or whatever that example was :)
@sourishk07
@sourishk07 2 дня назад
If you're interested, consider playing around with batch & mini batch gradient descent! There's been a lot of research on how batch size affects convergence so it might be a fun experiment to try out.
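A rough sketch of what that experiment could look like, with toy data and a plain linear model (everything here is made up just to illustrate the batching):
```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ true_w + 0.1 * rng.normal(size=1000)

w = np.zeros(5)
lr, batch_size = 0.1, 32

for epoch in range(20):
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        b = idx[start:start + batch_size]                # one mini-batch
        grad = 2 * X[b].T @ (X[b] @ w - y[b]) / len(b)   # MSE gradient on the batch
        w -= lr * grad

# batch_size = len(X) recovers full-batch GD; batch_size = 1 is "pure" SGD
```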
@bionh
@bionh Месяц назад
Make one about the Sophia optimizer please!
@sourishk07
@sourishk07 Месяц назад
I'm currently consolidating a list of more advanced optimizers for a follow up video so I really appreciate the recommendation. I'm adding it to the list!
@David-lp3qy
@David-lp3qy 25 дней назад
1.54k subs is crazy low for this quality. Remember me when you make it, my boy.
@sourishk07
@sourishk07 24 дня назад
Thank you for those kind words! I'm glad you liked the video
@cmilkau
@cmilkau Месяц назад
Hmm, while RMSProp speeds up the demonstrated example, it slows down the first example.
@sourishk07
@sourishk07 Месяц назад
Are you referring to my animated examples or the benchmarks towards the end of the video? The animations were contrived just to showcase each optimizer, but the performance of RMSProp during the benchmarks at the end vary based on the domain. It actually sometimes manages to beat Adam as we saw in the research papers! This is where experimentation might be worthwhile depending on what resources are available to you.
@ferlaa
@ferlaa 17 дней назад
The intuition behind why the methods help with convergence is a bit misleading imo. The problem is not, in general, slow convergence close to the optimum point because of a small gradient; that can easily be fixed by letting the step size depend on the gradient size. The problem it solves is when the iterations zig-zag because of large components in some directions and small components in the direction you actually want to move. By averaging (or similarly using past gradients) you effectively cancel out the components causing the zig-zag.
@sourishk07
@sourishk07 17 дней назад
Hello! Thanks for the comment. Optimizers like RMSProp and Adam do make the step size dependent on the gradient size, which I showcase in the video, so while there are other techniques to deal with slow convergence close to the optimum point due to small gradients, having these optimizers still helps. Maybe I could've made this part clearer though. Also, from my understanding, learning rate decay is a pretty popular technique, so wouldn't that just slow down convergence even more as the learning rate decays and the loss approaches the area with smaller gradients? However, I definitely agree with your bigger point about these optimizers preventing the loss from zig-zagging! In my RMSProp example, I do show how the loss is able to take a more direct route from the starting point to the minimum. Maybe I could've showcased a bigger example where SGD zig-zags more prominently to further illustrate the benefit that RMSProp & Adam bring to the table. I really appreciate you taking the time to give me feedback.
@ferlaa
@ferlaa 16 дней назад
@@sourishk07 Yeah, I absolutely think the animations give good insight into the different strategies within "moment"-based optimizers. My point was more that even with "vanilla" gradient descent methods, the step sizes can be handled so that they don't vanish as the gradient gets smaller, and that the real benefit of the other methods is in altering the _direction_ of descent to deal with situations where the eigenvalues of the (locally approximate) quadratic form differ by orders of magnitude. But I must also admit that (especially in the field of machine learning) the name SGD seems to be more or less _defined_ to include a fixed decay rate of step sizes, rather than just the method of finding a step direction (where finding step sizes would be a separate (sub-)problem), so your interpretation is probably more accurate than mine. Anyway, thanks for replying and I hope you continue making videos on the topic!
@sourishk07
@sourishk07 2 дня назад
Thanks for sharing your insights! I'm glad you enjoyed the video. Maybe I could make a video that dives deeper into step sizes or learning rate decay and the role that they play on convergence!
@pedrogorilla483
@pedrogorilla483 Месяц назад
Have you seen KAN?
@sourishk07
@sourishk07 Месяц назад
I have not heard of that! I'd love to learn more though
@LouisChiaki
@LouisChiaki Месяц назад
A lot of times in academia, people are just using SGD with momentum but playing around with learning rate scheduling a lot. You don't always want to get the deepest minimum since it can actually give you poor generalizability. That's why Adam isn't that popular when researchers are trying to push to SOTA.
@sourishk07
@sourishk07 28 дней назад
Hi! I can only speak to the papers that I've read, but I still seem to see Adam being used a decent amount. Your point about overfitting is valid, but wouldn't the same thing be achieved by using Adam but just training for less iterations?
@jesterflint9404
@jesterflint9404 Месяц назад
Like for the title. :)
@sourishk07
@sourishk07 Месяц назад
Haha I'm glad you liked it!
@Higgsinophysics
@Higgsinophysics 24 дня назад
love that title haha
@sourishk07
@sourishk07 24 дня назад
Haha thank you!
@UQuark0
@UQuark0 Месяц назад
Please explain one thing to me. Why do we negate the gradient vector to get the downhill direction? What if directly opposite to the gradient vector there's a bump instead of a smooth descent? Shouldn't we instead negate the parameter field itself, transforming holes into bumps, and then calculate the gradient?
@sourishk07
@sourishk07 Месяц назад
Hi, that's a great question! Remember, what the gradient gives us is a vector of the "instantaneous" rate of change for each parameter at the current location on the cost function. So if there is a bump 1 epsilon (using an arbitrarily small number as a unit) or 10 epsilons away, our gradient vector has no way of knowing that. What you'll see is that if you negate the entire cost function (which is what I'm assuming you meant by 'parameter field') and perform gradient ascent rather than gradient descent, you'll end up with the exact same problem: "What happens if there is a tiny divot in the direction of steepest ascent?" At the end of the day, no cost function in the real world will be as smooth and predictable as the ones I animated. There will always be small bumps and divots along the way, which is the entire point of using more advanced optimizers like RMSProp or Adam: we're hoping that they're able to circumvent these small obstacles and still reach the global minimum!
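In symbols, flipping the sign of the cost and ascending is the same move as descending on the original, so nothing is gained (here α is the learning rate):
```latex
W_{t+1} = W_t + \alpha\, \nabla\big({-J}\big)(W_t) = W_t - \alpha\, \nabla J(W_t)
```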
@masterycgi
@masterycgi 19 дней назад
Why not use a metaheuristic approach?
@sourishk07
@sourishk07 17 дней назад
Hi! There seems to be many interesting papers about using metaheuristic approaches with machine learning, but I haven't seen too many applications of them in industry. However, this is a topic I haven't looked too deeply into! I simply wanted to discuss the strategies that are commonly used by modern day deep learning and maybe I'll make another video about metaheuristic approaches! Thanks for the idea!
@LokeKS
@LokeKS 13 дней назад
is the code available?
@sourishk07
@sourishk07 3 дня назад
Unfortunately, the code for the animations isn't ready for the public haha. It's wayyy too messy. Also, I didn't include the code for the optimizers because the equations are straightforward to implement, but how you use the gradients to update the weights depends greatly on how the rest of the code is structured.
@renanmonteirobarbosa8129
@renanmonteirobarbosa8129 Месяц назад
"Who is Adam" is what sold me hahahahahaha
@sourishk07
@sourishk07 Месяц назад
LOL I'm glad you liked the title. I feel like it wrote itself though haha
@renanmonteirobarbosa8129
@renanmonteirobarbosa8129 Месяц назад
@@sourishk07 I loved your animations, it is well presented. Are you planning on sharing a little insight on making those ? I feel in academia the biggest challenge for us is to communicate in an engaging way
@sourishk07
@sourishk07 Месяц назад
@@renanmonteirobarbosa8129 Hi, thanks for the idea! I want to get a little bit better at creating them before I share how I create them. But I used manimgl so I encourage you to check that out in the meantime!
@mynameis1261
@mynameis1261 5 дней назад
10:19 What a weird formula for NAG! It's much easier to remember a formulation where you always take the antigradient: you *add* the velocity and take the gradient with a *minus*. The formula just changes to
V_{t+1} = b*V_t - a*grad(W_t + b*V_t)
W_{t+1} = W_t + V_{t+1}
It's more intuitive and more similar to standard GD. Why would anyone want to flip these signs? How often do you subtract velocity to update a position? And do you really want to *add* the gradient to update V right after explaining that we subtract the gradient in general to minimize the loss function? It makes everything twice as hard and just... wtf...
@sourishk07
@sourishk07 3 дня назад
Hi! Thanks for bringing this up! I've seen the equation written in both forms, but probably should've elected for the one suggested by you! This is what I was referring to for the equation: www.arxiv.org/abs/1609.04747
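For anyone comparing the two conventions, they only differ in the sign the velocity carries; substituting v -> -v maps one into the other (a sketch, with α the learning rate, β the momentum coefficient, and L the loss):
```latex
\text{Form in the linked overview:}\quad
v_{t+1} = \beta v_t + \alpha \nabla L\!\left(W_t - \beta v_t\right), \qquad
W_{t+1} = W_t - v_{t+1}

\text{Equivalent form above:}\quad
v_{t+1} = \beta v_t - \alpha \nabla L\!\left(W_t + \beta v_t\right), \qquad
W_{t+1} = W_t + v_{t+1}
```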
@gemini_537
@gemini_537 20 дней назад
Gemini 1.5 Pro: This video is about optimizers in machine learning. Optimizers are algorithms that are used to adjust the weights of a machine learning model during training. The goal is to find the optimal set of weights that will minimize the loss function. The video discusses four different optimizers: Stochastic Gradient Descent (SGD), SGD with Momentum, RMSprop, and Adam.
* Stochastic Gradient Descent (SGD) is the simplest optimizer. It takes a step in the direction of the negative gradient of the loss function. The size of the step is determined by the learning rate.
* SGD with Momentum is a variant of SGD that takes into account the history of the gradients. This can help the optimizer to converge more quickly.
* RMSprop is another variant of SGD that adapts the learning rate for each parameter of the model. This can help to prevent the optimizer from getting stuck in local minima.
* Adam is an optimizer that combines the ideas of momentum and adaptive learning rates. It is often considered to be a very effective optimizer.
The video also discusses the fact that different optimizers can be better suited for different tasks. For example, Adam is often a good choice for training deep neural networks.
Here are some of the key points from the video:
* Optimizers are algorithms that are used to adjust the weights of a machine learning model during training.
* The goal of an optimizer is to find the optimal set of weights that will minimize the loss function.
* There are many different optimizers available, each with its own strengths and weaknesses.
* The choice of optimizer can have a significant impact on the performance of a machine learning model.
@sourishk07
@sourishk07 19 дней назад
Thank you Gemini for watching, although I'm not sure you learned anything from this lol
@ajinkyaraskar9031
@ajinkyaraskar9031 16 дней назад
what a title 😂
@sourishk07
@sourishk07 14 дней назад
Appreciate the visit!
@Aemond-qj4xt
@Aemond-qj4xt Месяц назад
the title made me laugh i had to click this
@sourishk07
@sourishk07 Месяц назад
Haha I appreciate it. Thanks for the visit!
@cyrillebournival2328
@cyrillebournival2328 Месяц назад
Adam has parkingson
@sourishk07
@sourishk07 Месяц назад
I'm not sure I understand haha
@MDNQ-ud1ty
@MDNQ-ud1ty Месяц назад
Are you real? I have a feeling you're AI video generation hooked up to an LLM that varies a script the MIC uses to steer humanity toward building its AI god.
@sourishk07
@sourishk07 Месяц назад
LMAO dw I'm very much real. I recently graduated college and am currently living in the Bay Area working at TikTok!
@aouerfelli
@aouerfelli Месяц назад
Consider watching at 0.75 speed.
@sourishk07
@sourishk07 Месяц назад
Hi, thanks for the feedback. I'll make sure to take things a little slower next time!
@aouerfelli
@aouerfelli Месяц назад
@@sourishk07 Maybe it's just me, check other peoples' feedbacks on the matter.
@XetXetable
@XetXetable Месяц назад
It really annoys me when people claim using less electricity has anything to do with environmentalism. Most large ML models are trained in the Pacific Northwest (the Seattle and Vancouver area), where most power comes from hydroelectric. Using more electricity has no meaningful environmental impact there, since it's just diverting energy the rivers are dumping into the ocean anyway. If you're worried about the environmental impact of energy, focus on the generation methods, not the consumption methods.
@patrickjdarrow
@patrickjdarrow Месяц назад
Demand influences consumption (and consumption methods) so decoupling them as you suggest is naive
@sourishk07
@sourishk07 Месяц назад
I appreciate you bringing up this point. But at the end of the day, the Seattle/Vancouver area isn't enough to handle the entire world's demand for electricity, especially with the additional burden that training large ML models brings. Not to mention, as long as not all of our electricity is derived from green sources, it doesn't matter if training jobs for ML models get their energy solely from green sources, because that demand is still competing with other sectors of consumption. While there remains a lot of work left in optimizing our hardware to run more efficiently, there is no harm in optimizing our algorithms to use fewer resources in the meantime.
@rudypieplenbosch6752
@rudypieplenbosch6752 Месяц назад
I gave a like, until the environmental nonsense came up, just stick to the topic, no virtue signalling wanted.
@YouAreTheRaidBoss
@YouAreTheRaidBoss Месяц назад
@@rudypieplenbosch6752 if you think addressing environmental issues is environmental nonsense, you need to get your head out of your --- and read a scientific paper on climate change. And if you don't believe in that, publish some proof or kindly get out of the scientific discourse that this video is part of. Thank you!
@patrickjdarrow
@patrickjdarrow Месяц назад
@@rudypieplenbosch6752 rounded discussion is not exclusive to virtue signaling