
Stanford CS234: Reinforcement Learning | Winter 2019 | Lecture 2 - Given a Model of the World 

Stanford Online
605K subscribers · 202K views

For more information about Stanford’s Artificial Intelligence professional and graduate programs, visit: stanford.io/ai
Professor Emma Brunskill, Stanford University
stanford.io/3eJW8yT
Professor Emma Brunskill
Assistant Professor, Computer Science
Stanford AI for Human Impact Lab
Stanford Artificial Intelligence Lab
Statistical Machine Learning Group
To follow along with the course schedule and syllabus, visit: web.stanford.edu/class/cs234/i...
0:00 Introduction
2:55 Full Observability: Markov Decision Process (MDP)
3:55 Recall: Markov Property
4:50 Markov Process or Markov Chain
5:53 Example: Mars Rover Markov Chain Transition Matrix, P
12:06 Example: Mars Rover Markov Chain Episodes
13:05 Markov Reward Process (MRP)
14:37 Return & Value Function
16:32 Discount Factor
18:23 Example: Mars Rover MRP
23:19 Matrix Form of Bellman Equation for MRP
26:52 Iterative Algorithm for Computing the Value of an MRP
33:29 MDP Policy Evaluation, Iterative Algorithm
34:44 Policy Evaluation: Example & Check Your Understanding
36:39 Practice: MDP 1 Iteration of Policy Evaluation, Mars Rover Example
50:48 MDP Policy Iteration (PI)
55:44 Delving Deeper into Policy Improvement Step
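The Mars rover example (18:23) and the iterative algorithm (26:52) lend themselves to a short sketch. Below is a minimal NumPy version; the seven-state chain, the drift probabilities, the rewards, and gamma are assumptions for illustration, not values transcribed from the lecture.

```python
# Minimal sketch of the iterative value computation for an MRP
# (timestamps 18:23 and 26:52). The 7-state Mars rover chain,
# dynamics, rewards, and gamma below are illustrative assumptions.
import numpy as np

n_states = 7
# Hypothetical dynamics: drift left/right with prob 0.4 each, stay with prob 0.2.
P = np.zeros((n_states, n_states))
for s in range(n_states):
    P[s, max(s - 1, 0)] += 0.4
    P[s, min(s + 1, n_states - 1)] += 0.4
    P[s, s] += 0.2

R = np.zeros(n_states)
R[0], R[6] = 1.0, 10.0   # assumed rewards: +1 in the first state, +10 in the last
gamma = 0.5

# Iterative algorithm: V_k = R + gamma * P @ V_{k-1}, repeated until convergence.
V = np.zeros(n_states)
for _ in range(1000):
    V_next = R + gamma * P @ V
    if np.max(np.abs(V_next - V)) < 1e-10:
        V = V_next
        break
    V = V_next
print(np.round(V, 3))
```

The direct, closed-form alternative V = (I − γP)⁻¹R is covered at 23:19 and discussed in the comments below.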

Published: 28 Jun 2024

Comments: 17
@mohammadrezanargesi2439 (1 year ago)
Thank you for sharing the content.
@pierrecurie (1 year ago)
25:47 Conjecture: the inverse exists if gamma ∈ [0, 1), and fails to exist if gamma = 1. Easy to check for 1- or 2-state systems.
@moritzbroesamle4566 (5 months ago)
True: for gamma < 1 the matrix I − γP is strictly diagonally dominant, and thus invertible.
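This thread can be checked numerically: the matrix form of the Bellman equation (23:19) gives V = R + γPV, i.e. V = (I − γP)⁻¹R whenever I − γP is invertible, which the diagonal-dominance argument above guarantees for γ < 1. A minimal sketch; the two-state chain here is made up for illustration:

```python
# Check invertibility of I - gamma*P and the closed-form MRP value
# V = (I - gamma*P)^{-1} R. The 2-state chain is a made-up example.
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
R = np.array([1.0, -1.0])

for gamma in (0.5, 0.9, 0.99):
    V = np.linalg.solve(np.eye(2) - gamma * P, R)
    print(gamma, V)

# At gamma = 1, I - P is singular: rows of P sum to 1, so (I - P) @ ones = 0,
# matching the conjecture that the inverse fails to exist there.
```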
@meetsaiya5007 (2 years ago)
The gammas being in a GP has a good interpretation in finance, and I believe the idea stems from there rather than being purely mathematical (though it does have mathematical properties). It comes down to interest: if I earn 1 now and the interest rate is 10%, then after a year it becomes 1.1, which means earning 1 a year from now is equivalent to earning about 0.909 today. Since interest rates are usually somewhere in the 10-25% ballpark, this gives rough gamma values of 0.8 to 0.9 or so. A gamma of 0.5 would mean leveraging the reward so that it doubles in the following time step. This compounds over time, which is how it forms a GP. It would also imply that a reward of 1 this year can be leveraged (collect interest) over the following years, which seems reasonable to think of as learning from experience early on, in a sense. This is my understanding, though, and might be biased.
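In symbols, the interest-rate reading above corresponds to the standard present-value identity (one possible formalization, not the lecture's notation):

```latex
\gamma = \frac{1}{1+r},
\qquad
r = 0.10 \;\Rightarrow\; \gamma = \frac{1}{1.1} \approx 0.909,
\qquad
G_t = \sum_{k=0}^{\infty} \gamma^{k}\, r_{t+k}.
```

So γ = 0.5 corresponds to r = 1, i.e. the reward doubling over the following time step, exactly as the comment says.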
@meetsaiya5007 (2 years ago)
Could the common or good questions from Piazza be put up somewhere to refer to?
@user-di5kn1wu6w (11 months ago)
We said that if the policy is deterministic we can simplify the value function to V^π_k(s) = r(s, π(s)) + γ Σ_{s'∈S} p(s'|s, π(s)) V^π_{k−1}(s'), but how can we write max_a Q(s,a) ≥ V(s) when the policy is deterministic and we can choose just one action?
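For what it's worth, Q^π(s,a) is defined for every action a even when π is deterministic (take a once, then follow π afterwards), which is why max_a Q^π(s,a) ≥ Q^π(s, π(s)) = V^π(s). A minimal numerical check; the 3-state, 2-action MDP below is random and made up:

```python
# Sketch: iterative policy evaluation for a deterministic policy pi,
# then Q computed from V. Even with a deterministic pi, Q(s,a) is
# defined for every a: take a once, then follow pi. Hence
# max_a Q(s,a) >= Q(s, pi(s)) = V(s). The MDP is made up.
import numpy as np

nS, nA, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a] is a distribution over s'
R = rng.standard_normal((nS, nA))
pi = np.array([0, 1, 0])                         # deterministic policy

# V_k(s) = r(s, pi(s)) + gamma * sum_s' p(s'|s, pi(s)) * V_{k-1}(s')
V = np.zeros(nS)
for _ in range(2000):
    V = np.array([R[s, pi[s]] + gamma * P[s, pi[s]] @ V for s in range(nS)])

Q = R + gamma * P @ V                            # Q[s, a] for all actions
assert np.all(Q.max(axis=1) >= Q[np.arange(nS), pi] - 1e-8)
```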
@arpitqw1 (1 year ago)
How is the return different from the value function? How can the return differ from the value function when the process is not stochastic, given both are sums of rewards?
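For reference, the lecture's definitions (14:37): the return is the realized discounted sum of rewards along one episode, and the value function is its expectation,

```latex
G_t = r_t + \gamma\, r_{t+1} + \gamma^2\, r_{t+2} + \cdots
    = \sum_{k=0}^{\infty} \gamma^{k}\, r_{t+k},
\qquad
V(s) = \mathbb{E}\!\left[ G_t \mid s_t = s \right].
```

They differ only through the expectation; in a deterministic process each state has a single trajectory, so the two coincide.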
@adityanarendra5886 (1 year ago)
What is the tool Prof. Emma is using for the presentation and annotation? It looks really helpful.
@gravitas8297 (1 year ago)
Beamer, I guess?
@adityanarendra5886 (1 year ago)
@gravitas8297 Does Beamer allow annotation? I thought it was a LaTeX class for making presentations. I wanted to know which annotation tool she is using on the iPad; that would be really helpful.
@gravitas8297 (1 year ago)
@adityanarendra5886 Err, I haven't tried that, sorry :(
@muhammadhassanshakeel7544 (1 year ago)
Does anybody understand how she got to the second step of the equation at 1:11:56?
@kaiqizhang6524 (1 year ago)
We don't care about a or a'. Suppose BV_k ≥ BV_j, and let a_j = a' be the action attaining the maximum in BV_j. When we set a_j = a instead, we get BV_j evaluated at a_j = a.
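The step at 1:11:56 appears to be the standard contraction argument for the Bellman backup B; a sketch of the usual derivation (reconstructed, not transcribed from the slide):

```latex
\left| (B V_k)(s) - (B V_j)(s) \right|
= \left| \max_a \Big[ r(s,a) + \gamma \sum_{s'} p(s'|s,a)\, V_k(s') \Big]
       - \max_{a'} \Big[ r(s,a') + \gamma \sum_{s'} p(s'|s,a')\, V_j(s') \Big] \right|
\le \max_a \, \gamma \sum_{s'} p(s'|s,a)\, \left| V_k(s') - V_j(s') \right|
\le \gamma\, \lVert V_k - V_j \rVert_\infty .
```

The key inequality is |max_a f(a) − max_{a'} g(a')| ≤ max_a |f(a) − g(a)|, which is exactly the "we don't care about a or a'" point above.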
@user-cy4wb8eo1v (1 year ago)
47:13 Someone just asked what I wanted to ask! 😂