
Dueling Deep Q Learning with Tensorflow 2 & Keras | Full Tutorial for Beginners 

Machine Learning with Phil
42K subscribers
14K views

Published: 8 Sep 2024

Comments: 41
@MachineLearningwithPhil · 4 years ago
If you like my YouTube content, you'll love my courses. I'm almost always running a sale, so check the links in the description for the best price!
@dmitriys4279 · 4 years ago
Are you working on your next Udemy course? :) I've already purchased and completed this one.
@MachineLearningwithPhil · 4 years ago
Dmitriy, the new course just dropped! Check it out at www.udemy.com/course/actor-critic-methods-from-paper-to-code-with-pytorch/?couponCode=MAY-20-1
@peterpirog5004 · 4 years ago
How can I load the weights if I want to continue training? I try to use "agent.load_model('dueling_dqn_trained.h5')", where the file contains data from a previous training run, but I get the error: "TypeError: load_model() takes 1 positional argument but 2 were given". Very nice tutorial, thank you!
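That TypeError usually means the method was defined as `def load_model(self):` with a hard-coded filename, so it accepts no argument. A minimal sketch of one possible fix, assuming (hypothetically) the agent keeps its network in `self.q_eval` and the checkpoint was written with `save_weights()`:

```python
# Inside the Agent class (sketch; attribute names are assumptions):
def load_model(self, fname='dueling_dqn_trained.h5'):
    # Accept the checkpoint path as an argument instead of hard-coding it.
    # load_weights() restores the parameters of a subclassed keras.Model
    # that was saved with save_weights().
    self.q_eval.load_weights(fname)
```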
@kozyrozy · 3 years ago
Thanks very much! Also, the occasional jokes are a nice touch~
@tejasjain7895 · 4 years ago
This is the best tutorial I could find on dueling deep Q networks. All the other tutorials on it suck.
@mohammedal-saffar8122 · 4 years ago
Hi Dr. Phil, I'm a Ph.D. student and I'm grateful beyond measure for your great work, which has been a tremendous support to my Ph.D.! I'm really looking for a multi-agent DQN; if possible, please offer that on YouTube.
@jadersantos238 · 4 years ago
Thanks for this amazing video. A huge thank you from Brazil :)
@MachineLearningwithPhil · 4 years ago
My pleasure Jader, thanks for watching!
@mikemihay · 4 years ago
What an awesome tutorial! Thanks!
@MachineLearningwithPhil · 4 years ago
Glad it was helpful!
@ChristFan868 · 4 years ago
Thanks Phil!!!
@richardcsanaki5531 · 4 years ago
Great tutorial Phil! I have a question though: if I want to load and use (call) the saved model, how would I do that? Thanks in advance!
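One way to load and call the saved network for inference, sketched under several assumptions: the subclassed `DuelingDeepQNetwork` from the video is importable, the environment is LunarLander-style, the weights were saved with `save_weights()`, and the constructor arguments shown are illustrative, not Phil's exact code:

```python
import gym
import numpy as np

env = gym.make('LunarLander-v2')          # environment is an assumption
n_actions = env.action_space.n
input_dims = env.observation_space.shape

# Rebuild the architecture, then restore the trained weights.
q_net = DuelingDeepQNetwork(n_actions, 128, 128)      # illustrative args
q_net(np.zeros((1, *input_dims), dtype=np.float32))   # one forward pass builds the layers
q_net.load_weights('dueling_dqn_trained.h5')

# Greedy action selection: calling the model runs call() under the hood.
state = env.reset()
q_values = q_net(np.array([state], dtype=np.float32))
action = int(np.argmax(q_values.numpy()))
```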
@geo2073 · 4 years ago
Thank you Phil!
@MachineLearningwithPhil · 4 years ago
Thanks for watching, George!
@dmitriys4279 · 4 years ago
Hi Phil, thank you for your videos, great job! Could you please show how to create a custom environment, for example chess or any other board game with two or more agents? Could you show how to implement the action space in board games like chess, and how to develop and train agents against each other? Thank you, Phil, and good luck!
@JousefM · 4 years ago
Watching this later, quite late here. 😄
@felixnica762 · 4 years ago
I didn't see you turning eager execution off... is it working better now?
@MachineLearningwithPhil · 4 years ago
Yes, for some reason this seems to be running faster. I have no clue why or what is going on. Great question.
@felixnica762 · 4 years ago
@@MachineLearningwithPhil Thank you for the reply, I've just finished implementing your code with a few modifications and it looks like the Dueling agent learns faster than the standard DDQN... Very nice!
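For reference, the eager-execution toggle discussed above is a one-liner in TF2; it has to run before any other TensorFlow ops are created:

```python
import tensorflow as tf

# Fall back to graph (non-eager) execution globally; place this at the
# very top of the script, before building any models or tensors.
tf.compat.v1.disable_eager_execution()
```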
@tejasjain7895 · 4 years ago
@Machine Learning with Phil Can you also make a dueling deep Q network using convnets in Keras? I tried making one on my own but failed; I'm getting a few errors. I would like to see your code if you release it.
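A minimal sketch of how a dueling head can sit on top of a conv stack in Keras, assuming Atari-style image observations; the layer sizes are illustrative (Mnih et al. 2015-style), not Phil's code:

```python
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, Dense, Flatten

class DuelingDeepQConvNetwork(tf.keras.Model):
    """Dueling DQN with a convolutional feature extractor."""
    def __init__(self, n_actions):
        super().__init__()
        # Classic DQN conv stack for e.g. (84, 84, 4) frame stacks.
        self.conv1 = Conv2D(32, 8, strides=4, activation='relu')
        self.conv2 = Conv2D(64, 4, strides=2, activation='relu')
        self.conv3 = Conv2D(64, 3, strides=1, activation='relu')
        self.flatten = Flatten()
        self.fc = Dense(512, activation='relu')
        self.V = Dense(1, activation=None)           # state value V(s)
        self.A = Dense(n_actions, activation=None)   # advantages A(s, a)

    def call(self, state):
        x = self.conv1(state)
        x = self.conv2(x)
        x = self.conv3(x)
        x = self.fc(self.flatten(x))
        V = self.V(x)
        A = self.A(x)
        # Dueling aggregation: Q(s, a) = V(s) + (A(s, a) - mean_a A(s, a))
        return V + (A - tf.reduce_mean(A, axis=1, keepdims=True))
```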
@nate7368 · 4 years ago
This is great, Phil! I would love to see a video on policy gradients with tf2.0! Any chance that’s in the pipeline?
@MachineLearningwithPhil · 4 years ago
I'll see what I can do, but for now the plan is to do bigger and better things. I've been stagnant for a while so it's time to up my game :)
@nate7368 · 4 years ago
@@MachineLearningwithPhil Totally understand. Well I hope to see a video on PG w/ TF2 in the future. Thanks again for your videos and effort to teach others RL!
@prashantsharmastunning · 3 years ago
Hey, can anyone help? Colab is not running the code on the GPU; GPU utilization is 0. (I have set the notebook runtime to GPU.)
@utkucicek6664 · 2 years ago
Why is it that the call() function defined in the DuelingDDQN class is never used in the training section?
@DavidGallo747 · 2 years ago
It is used implicitly. It overrides the call() method inherited from the keras.Model class.
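To make "implicitly" concrete: Keras routes `model(x)` through `Model.__call__`, which dispatches to your overridden `call()`. A tiny hypothetical demo:

```python
import numpy as np
import tensorflow as tf

class Tiny(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(3)

    def call(self, x):
        print('call() was invoked')  # shows the implicit dispatch
        return self.dense(x)

net = Tiny()
net(np.ones((1, 4), dtype=np.float32))  # prints 'call() was invoked'
```

This is the same mechanism the training loop relies on whenever it evaluates `q_net(states)`.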
@andreasv9472 · 4 years ago
Also doing your Udemy course, thanks mate. Two stupid questions: is the replay buffer the same thing as episodic memory? Or is it the same as an LSTM? I'm trying to understand Agent57. I saw someone say that episodic memory is basically two nested GRUs; is that what the replay buffer is?
@MachineLearningwithPhil · 4 years ago
You're mixing concepts. The replay buffer just keeps track of the agent's experience so it can learn from it later. Some people use RNN/LSTM in their network architecture, but I haven't done so.
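For contrast with episodic memory or an LSTM: a replay buffer is just arrays of past transitions sampled uniformly at random, with no learned parameters at all. A bare-bones sketch with illustrative names, not the course code:

```python
import numpy as np

class ReplayBuffer:
    """Fixed-size ring buffer of (s, a, r, s', done) transitions."""
    def __init__(self, max_size, input_dims):
        self.mem_size = max_size
        self.mem_cntr = 0
        self.states = np.zeros((max_size, *input_dims), dtype=np.float32)
        self.actions = np.zeros(max_size, dtype=np.int64)
        self.rewards = np.zeros(max_size, dtype=np.float32)
        self.states_ = np.zeros((max_size, *input_dims), dtype=np.float32)
        self.dones = np.zeros(max_size, dtype=bool)

    def store(self, s, a, r, s_, done):
        i = self.mem_cntr % self.mem_size  # overwrite the oldest when full
        self.states[i], self.actions[i] = s, a
        self.rewards[i], self.states_[i], self.dones[i] = r, s_, done
        self.mem_cntr += 1

    def sample(self, batch_size):
        max_mem = min(self.mem_cntr, self.mem_size)
        idx = np.random.choice(max_mem, batch_size, replace=False)
        return (self.states[idx], self.actions[idx], self.rewards[idx],
                self.states_[idx], self.dones[idx])
```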
@miguelangelquicenohincapie2768 · 4 years ago
I have TensorFlow 1.14 GPU... can I run the same code?
@majdwardeh3698 · 4 years ago
Thank you Phil! One question: is this the Vim editor?
@MachineLearningwithPhil · 4 years ago
Yes, it is!
@majdwardeh3698 · 4 years ago
@@MachineLearningwithPhil Neovim?
@MachineLearningwithPhil · 4 years ago
Just regular vim, on Ubuntu 18.04 if you're curious. You a fellow Linux user?
@majdwardeh3698 · 4 years ago
@@MachineLearningwithPhil I thought you were using Windows. Yes I am! But right now I am working from home on Windows, and I miss regular vim.
@JousefM · 4 years ago
Sorry for commenting again, Phil. I tried contacting you via email (not sure if you saw it). If you're open to a podcast about your work, exposing your expertise to the world, I'd be happy to have you on one of the next episodes :) (not trying to spam here, btw :D) Thanks in advance, mate!
@MachineLearningwithPhil · 4 years ago
Jousef! I haven't seen it yet. I'll check that email account tonight and get back to you. Thanks!
@MachineLearningwithPhil · 4 years ago
Which email did you send it to? I can't find it :(
@JousefM · 4 years ago
@@MachineLearningwithPhil It was phil@neuralnet.ai - hope that's correct? :D