
Reinforcement Learning 10: Classic Games Case Study 

Google DeepMind
506K subscribers
42K views

Published: 11 Oct 2024

Comments: 33
@LuisYu, 5 years ago
Amazing high-quality lectures. Especially enjoyed the attention, memory, and AlphaZero talks.
@Kingstanding23, 5 years ago
A Nash equilibrium sounds like what happens on roads, where traffic evens itself out amongst all the routes towards some destination. When a new road is built, nothing really changes, because the traffic just redistributes itself to a new equilibrium.
@LucyRockprincess, 1 year ago
Great real-life analogy
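The traffic analogy above can be sketched as a tiny congestion game. This is purely a hypothetical illustration, not anything from the lecture: the road names, latency functions, and driver count are all made up. Best-response dynamics (one driver switching at a time) settles at a split where no single driver gains by switching, which is exactly a Nash equilibrium:

```python
# Hypothetical two-road congestion game illustrating the traffic analogy.
# Each of N drivers picks road A or B; travel time grows with the road's load.
# At equilibrium, no single driver can cut their own time by switching roads.

N = 100

def time_a(n):
    """Travel time on road A when n drivers use it (made-up latency)."""
    return 10 + 0.5 * n

def time_b(n):
    """Travel time on road B when n drivers use it (made-up latency)."""
    return 20 + 0.2 * n

n_a = N  # start with everyone crowded onto road A
changed = True
while changed:  # best-response dynamics: one driver switches if it helps
    changed = False
    if n_a > 0 and time_a(n_a) > time_b(N - n_a + 1):
        n_a -= 1  # a driver leaves A for B
        changed = True
    elif n_a < N and time_b(N - n_a) > time_a(n_a + 1):
        n_a += 1  # a driver leaves B for A
        changed = True

# At the fixed point the two travel times are (nearly) equal, so a new
# road would likewise just absorb traffic until times equalise again.
print(n_a, N - n_a, time_a(n_a), time_b(N - n_a))
```

Note the "new road, nothing changes" effect in the comment is the same mechanism: adding a route shifts the split, but the equilibrium travel times re-equalise (and in Braess-paradox cases can even get worse).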
@stevecarson7031, 3 years ago
Thank you so much for this series of lectures!
@samagrasharma7755, 5 years ago
Two lectures (CNN and RNN) are missing from this series. Can anyone tell me if they are available online?
@TheGreatBlackBird, 3 years ago
Shouldn't there also be a reward present in the TD error at 42:30 and 50:25? Edit: OK, it's explained a bit more in the 2015 lecture that this version assumes no intermediate reward.
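For reference, the distinction raised in this comment can be written out in a few lines. This is a generic tabular sketch with illustrative numbers, not the lecture's exact notation:

```python
def td_error(reward, gamma, v_s, v_s_next):
    """Standard one-step TD error: delta = r + gamma * V(s') - V(s)."""
    return reward + gamma * v_s_next - v_s

def td_error_no_reward(gamma, v_s, v_s_next):
    """Variant with no intermediate reward (r = 0), natural in self-play
    board games where only the final outcome is rewarded:
    delta = gamma * V(s') - V(s)."""
    return gamma * v_s_next - v_s

# With reward = 0 the two definitions coincide, which is why the reward
# term can be dropped from the slides without changing the update.
delta_full = td_error(0.0, 0.99, 0.40, 0.55)
delta_short = td_error_no_reward(0.99, 0.40, 0.55)
print(delta_full, delta_short)
```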
@alexanderyau6347, 5 years ago
I can comment now. See you again, David.
@helinw, 4 years ago
Did David do another RL course in 2018, or just one lecture?
@ShortVine, 3 years ago
I was thinking the same and searched a lot, but I think he did just one lecture in 2018.
@Dina_tankar_mina_ord, 5 years ago
I would love to see how DeepMind would build a city on its own in Cities: Skylines, and see how its optimization would create the best and most efficient layout in real time. Maybe we could learn a lot from that.
@johangras3522, 5 years ago
Is it possible to access the course slides?
@TuhinChattopadhyay, 4 years ago
@Sigmav0 Link not working
@Sigmav0, 4 years ago
@TuhinChattopadhyay The slides have been moved to www.davidsilver.uk/wp-content/uploads/2020/03/games.pdf. Hope this helps!
@TuhinChattopadhyay, 4 years ago
@Sigmav0 Got it... many thanks
@Sigmav0, 4 years ago
@TuhinChattopadhyay No problem! 👍
@dojutsu6861, 4 years ago
@Sigmav0 These slides are from an older UCL x DeepMind lecture series led primarily by David Silver. They do not include content on the newer AlphaZero models. Do you by any chance know if the updated slides are available online?
@domino14, 2 years ago
The level of computer play in Scrabble is not superhuman. Quackle beats Maven, and the best humans can go 50-50 with Quackle in a long series.
@jakubbielan4784, 5 years ago
Does anyone know the exact hardware used to train AlphaGo Zero?
@luisbanegassaybe6685, 5 years ago
deepmind.com/blog/alphago-zero-learning-scratch/
@mohammadkhan5430, 4 years ago
I love him. How sad that the room is empty.
@KayzeeFPS, 4 years ago
Here's a link to the same video but with the slides visible: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-N1LKLc6ufGY.html
@julioandresgomez3201, 5 years ago
Despite the success of AlphaZero nets in several games, I feel a better starting point is playing (random number) games with humans. Only then, once it has grasped some basic basics (by itself, not forcibly inserted by hand), let it play against itself. This way it could accomplish in thousands of self-play games what would take millions of self-play games from scratch, due to the total randomness and cluelessness of the first games. It's not the absolute-zero approach, but it has no "artificial" handcrafted parameters either. It learns from its own games all the way.
@Avandale0, 4 years ago
Playing with humans takes considerably more time than running simulations, so playing millions of games by itself is still faster than playing 100 games against humans. Given that a game of Go takes around an hour, you'd have finished 3 games with a human in the time it took AlphaZero to reach human-level play. Same for chess, when you realise it took AlphaZero 4 hours to reach a level higher than Stockfish... It should be clear from these examples that one of the particularities of AlphaZero is the speed at which it learns. Playing humans here both defeats the purpose of self-learning and wastes time.
@yidingyu2739, 5 years ago
Why so many empty seats?
@yoloswaggins2161, 5 years ago
This stuff is not on the exam.
@matveyshishov, 5 years ago
The number of people drops in the later lectures for some reason.
@markdonald4538, 5 years ago
@matveyshishov stupid ppl
@omarcusmafait7202, 5 years ago
Why does nobody take notes?
@yoloswaggins2161, 5 years ago
Not on the exam.
@Sigmav0, 5 years ago
@William Davis Sure... in primary school...
@vijayabhaskar-j, 5 years ago
Because the slides and lectures are both available online, I would rather listen carefully first in class.