
This is the Math You Need to Master Reinforcement Learning 

ritvikmath
164K subscribers
10K views

Published: Sep 11, 2024

Comments: 27
@drewgrant1605 23 days ago
Just subscribed! I love the level you teach at in your videos. It's slightly above the level of StatQuest, but not so dense that I need to mentally prepare before watching. (No shade to StatQuest; two random events can be independently great.)
@xnairegodking 1 month ago
Wow, best explanation ever. I think if you made a course out of this it would be the best out there. Thanks for sharing!
@lial4633 4 months ago
The best explanation of Policy Gradient methods I've seen!
@sharks1349 10 months ago
I've been trying to understand reinforcement learning, and policy gradient methods always tripped me up. Thank you for making this video!
@souravdey1227 3 months ago
Can you please make a full playlist on reinforcement learning? No one explains the math as simply as you do. Also, please do a separate video going into greater mathematical detail proving the theorem, kind of like Numberphile2.
@buumschakalaka4425 10 months ago
Thanks for the great video 💪👏 Will there be more RL videos coming? E.g., I would like to understand more about how to set up reward functions. How do I weight rewards from different actions against each other? How would we set up the environment in the model-based approach? And more.
@avandfardi 5 months ago
What a beautiful explanation. Thank you
@ritvikmath 5 months ago
You are very welcome
@matthewchunk3689 10 months ago
Great summary! As good as LLMs are at answering questions, we still need smart people like you to get us thinking of the right questions.
@ritvikmath 10 months ago
Thanks!
@tantzer6113 10 months ago
LLMs are pretty bad at answering questions.
@HemantPoonia-wq8hr 10 months ago
Hey, can you please upload videos on causal analysis, or suggest some books to get started with it?
@matteogirelli1023 10 months ago
You mean causal inference? I suggest you refer to econometrics textbooks, as in economics we are pretty strong on that:
- "Mostly Harmless Econometrics" by Angrist and Pischke, for a graduate level in applied stats (pure stats people would find it undergrad level)
- "Econometric Analysis of Cross Section and Panel Data" by Wooldridge
@HemantPoonia-wq8hr 10 months ago
@@matteogirelli1023 What do you suggest for someone who wants to apply causal inference to my domain, which is climate science and earth science? I just got started by reading Causality by Judea Pearl.
@djpremier333 10 months ago
Statistical Rethinking introduces it nicely; the whole lecture series is on YouTube.
@HemantPoonia-wq8hr 10 months ago
@@djpremier333 Thank you, I will check their playlist.
@weslleys.pereira6998 4 months ago
Great video! Thanks for sharing. I have a question, though. I am new to the subject, so I am having trouble understanding the last step in your derivation (29:00), the "very very easy thing to do". Would you be kind enough to point me to where I can find more information about that? Thanks!
@Mars.2024 4 months ago
Thanks a million 🎉
@pushkarparanjpe 9 months ago
Awesome explanation! Thanks.
@user-co6pu8zv3v 10 months ago
Thank you! :)
@subhamkundu5043 10 months ago
Great summary. I have a question: why is the state not dependent on the reward?
@brycerogers5050 10 months ago
Thanks Ritvik. Still having some trouble with why the reward (R) does not depend on theta, mathematically. In your tree diagram, all the rewards are ±1/p, dependent in their absolute quantity only on the state, but also dependent in their sign (which seems non-negligible in a reward system) on a theta-based choice (H or L). Are you able to describe in a different way the intuition behind why d/dTheta log P(R,S|S,A) = 0?
@brycerogers5050 10 months ago
Maybe better said: what's the nuance (or obvious principle) that allows consideration of an explicit variable in a derivative wrt that variable, but disallows consideration of an implicit variable (i.e., one further back in the causal chain) in a derivative wrt that variable? Thanks again, your channel rocks!
@ritvikmath 10 months ago
Hey! Excellent question and it isn’t obvious in any sense. The key lies in the fact that this is a conditional probability rather than an unconditional one. If we removed the conditions on P(R,S | S,A) so it is just P(R,S) then this absolutely does depend on the policy theta and we can measure this dependency by tracing through the causal diagram. However by using a conditional probability we assume that the previous state and previous action are taken as given, at which point the probabilities for the next state and next reward are fixed and do not depend on the policy. Please let me know if that helps!
@brycerogers5050 10 months ago
@@ritvikmath Ah, that makes perfect sense. Thank you.
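The point settled in the exchange above can be written out as a worked identity. This is the standard textbook policy-gradient notation (trajectory τ, policy π_θ, transition kernel P), which may not match the video's exact symbols:

```latex
\nabla_\theta \log P_\theta(\tau)
  = \nabla_\theta \sum_{t}\Big[\log \pi_\theta(a_t \mid s_t)
      + \log P(s_{t+1}, r_{t+1} \mid s_t, a_t)\Big]
  = \sum_{t} \nabla_\theta \log \pi_\theta(a_t \mid s_t)
```

The conditional transition term $P(s_{t+1}, r_{t+1} \mid s_t, a_t)$ takes the previous state and action as given and contains no $\theta$, so its gradient vanishes; all of the $\theta$-dependence of the unconditional trajectory probability flows through the policy factors.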
@adityabhatt3519 7 months ago
Hi, I've been trying to use multiple sources to find the proof of this theorem. However, none of them use the product rule (for derivatives, at 20:22). Can you please share a resource which does use the product rule?
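On the product-rule question above: differentiating a product of policy probabilities with the product rule and differentiating via the log-derivative identity give the same answer, which is why most proofs skip straight to sums of log-gradients. A minimal numerical sketch under an assumed two-action Bernoulli policy (the names `traj_prob`, `grad_product_rule`, and `grad_log_trick` are hypothetical, not from the video):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def traj_prob(theta, actions):
    """Probability of an action sequence under a Bernoulli policy with
    pi_theta(H) = sigmoid(theta) and pi_theta(L) = 1 - sigmoid(theta)."""
    p = 1.0
    for a in actions:
        p *= sigmoid(theta) if a == "H" else 1.0 - sigmoid(theta)
    return p

def grad_product_rule(theta, actions, eps=1e-6):
    # What the product rule computes, approximated by a central difference
    # on the full product of probabilities.
    return (traj_prob(theta + eps, actions) - traj_prob(theta - eps, actions)) / (2 * eps)

def grad_log_trick(theta, actions):
    # Same derivative via d/dtheta prod_i pi_i = (prod_i pi_i) * sum_i d/dtheta log pi_i,
    # using d/dtheta log sigmoid(theta) = 1 - sigmoid(theta)
    # and   d/dtheta log(1 - sigmoid(theta)) = -sigmoid(theta).
    s = sigmoid(theta)
    score = sum((1.0 - s) if a == "H" else -s for a in actions)
    return traj_prob(theta, actions) * score
```

Both routes agree to numerical precision, e.g. `grad_product_rule(0.3, ["H", "L", "H"])` matches `grad_log_trick(0.3, ["H", "L", "H"])`, illustrating that the log-trick form is just the product rule divided and multiplied by the product itself.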
@user-ed1ph7yj6o 5 months ago
Can you do RStan in R for Bayesian stats, case by case?