Here are some timestamps, folks!
Intro 00:00:00
Intro to Deep Q Learning 00:01:30
How to Code Deep Q Learning in Tensorflow 00:08:56
Deep Q Learning with Pytorch Part 1: The Q Network 00:52:03
Deep Q Learning with Pytorch Part 2: Coding the Agent 01:06:21
Deep Q Learning with Pytorch Part 3: Coding the Main Loop 01:28:54
Intro to Policy Gradients 01:46:39
How to Beat Lunar Lander with Policy Gradients 01:55:01
How to Beat Space Invaders with Policy Gradients 02:21:32
How to Create Your Own Reinforcement Learning Environment Part 1 02:34:41
How to Create Your Own Reinforcement Learning Environment Part 2 02:55:39
Fundamentals of Reinforcement Learning 03:08:20
Markov Decision Processes 03:17:09
The Explore Exploit Dilemma 03:23:02
Reinforcement Learning in the Open AI Gym: SARSA 03:29:19
Reinforcement Learning in the Open AI Gym: Double Q Learning 03:39:56
Conclusion 03:54:07
This is a great video if you already understand the topic and the code, and just want a guy saying out loud what he's typing, kind of explaining bits and pieces here and there.
Yeah, and people in the comment section are like "thank you, what a great tutorial for free, great explanation", while they got nothing out of it and are just trying to sound smart in a comment.
He said at the beginning there's no need to know about this and that. Fourteen minutes into the video, he is typing line 123. Honestly, why didn't he just copy and paste it? :))))
Anyone interested in learning the terminology of what he is talking about should go check out the video lectures Stanford did on MDPs (Markov decision processes) and RL. Each is about an hour long, and they do go in depth into the math behind a lot of this stuff. Cheers!!!
One minor correction for those watching at 1:19:12 and trying to follow along (like myself): on line 77 after the "else", "memStart = int(np.random.choice(range(self.memCntr - batch_size - 1)))" should actually be "memStart = int(np.random.choice(range(self.memSize - batch_size - 1)))". self.memSize is needed here instead of self.memCntr because at this point the self.memory list is full (that's what the "else" branch means), but self.memCntr keeps growing and is now larger than the maximum self.memory size. memStart can therefore end up larger than the length of the self.memory list while being used as the index for grabbing the miniBatch from that same list, which is no good: line 78 then gives miniBatch an empty list, [], and memory becomes an empty array. Ultimately that leads to an exception on line 81, "too many indices for array", since we are trying to forward an empty 1-D numpy array and index it with 2-D indices that don't exist. With self.memSize on line 77, that no longer happens and memStart stays within the bounds of the self.memory length. With that, everything works, and you can watch the agent play :)
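To make the fix concrete, here is a minimal standalone sketch of the corrected sampling logic (my own rewrite; the names mem_cntr, mem_size, and the contiguous-slice sampling scheme follow the video's code as described above):

```python
import numpy as np

def sample_start(mem_cntr, mem_size, batch_size):
    # While the buffer is still filling, sample a start index from what has
    # been stored so far; once it is full, mem_cntr exceeds mem_size, so the
    # start index must be bounded by the buffer's fixed size instead.
    if mem_cntr + batch_size < mem_size:
        return int(np.random.choice(range(mem_cntr)))
    return int(np.random.choice(range(mem_size - batch_size - 1)))

# e.g. a buffer of 1000 transitions after 250000 environment steps, batch of 32:
print(sample_start(250_000, 1_000, 32))  # always <= 966, within bounds
```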
The nerd talk and keyboard typing are very ASMR and help me sleep. Put a mic at the keyboard itself and edit it in. This is wonderful, and I might actually wake up smarter in the morning.
The length of the flattened output layer can actually be calculated by tracing the data through the network from the first conv layer. For each dimension of each conv layer, apply the formula:

output length = floor((input length - kernel size + 2*padding) / stride) + 1

(round the division down, since there are no fractional pixels), then multiply the final dimensions by the number of output channels at the end to find the length of the flat dimension, as such:

1st conv layer: floor((185 - 8 + 2*1)/4) + 1 = floor(44.75) + 1 = 45, and floor((95 - 8 + 2*1)/4) + 1 = floor(22.25) + 1 = 23
2nd conv: floor((45 - 4)/2) + 1 = 21, and floor((23 - 4)/2) + 1 = 10
3rd conv: ((21 - 3)/1) + 1 = 19, and ((10 - 3)/1) + 1 = 8

This means the 3rd layer outputs 128 feature maps, each of size 19x8, so flattening them into one dimension gives 128*19*8 = 19456 elements. Just a neat little trick for those who want it.
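For anyone who wants to avoid doing that arithmetic by hand, here is a small sketch that traces the same numbers (the layer hyperparameters are taken from the comment above; Python's integer division performs the round-down):

```python
def conv_out(length, kernel, stride, padding=0):
    # floor((L - K + 2P) / S) + 1; // rounds the division down
    return (length - kernel + 2 * padding) // stride + 1

h, w = 185, 95
for kernel, stride, padding in [(8, 4, 1), (4, 2, 0), (3, 1, 0)]:
    h = conv_out(h, kernel, stride, padding)
    w = conv_out(w, kernel, stride, padding)
    print(h, w)        # 45 23, then 21 10, then 19 8
print(128 * h * w)     # 19456, the length of the flattened layer
```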
⭐️ Course Contents ⭐️
⌨️ (00:00:00) Intro
⌨️ (00:01:30) Intro to Deep Q Learning
⌨️ (00:08:56) How to Code Deep Q Learning in Tensorflow
⌨️ (00:52:03) Deep Q Learning with Pytorch Part 1: The Q Network
⌨️ (01:06:21) Deep Q Learning with Pytorch Part 2: Coding the Agent
⌨️ (01:28:54) Deep Q Learning with Pytorch Part 3: Coding the Main Loop
⌨️ (01:46:39) Intro to Policy Gradients
⌨️ (01:55:01) How to Beat Lunar Lander with Policy Gradients
⌨️ (02:21:32) How to Beat Space Invaders with Policy Gradients
⌨️ (02:34:41) How to Create Your Own Reinforcement Learning Environment Part 1
⌨️ (02:55:39) How to Create Your Own Reinforcement Learning Environment Part 2
⌨️ (03:08:20) Fundamentals of Reinforcement Learning
⌨️ (03:17:09) Markov Decision Processes
⌨️ (03:23:02) The Explore Exploit Dilemma
⌨️ (03:29:19) Reinforcement Learning in the Open AI Gym: SARSA
⌨️ (03:39:56) Reinforcement Learning in the Open AI Gym: Double Q Learning
⌨️ (03:54:07) Conclusion
For anyone watching: inheriting from object is implied, and you haven't needed to type it since even the oldest versions of Python 3, so save yourself some time ;) `class foo(object):` is exactly the same as `class foo:`. The reason he types it here is probably for compatibility between Python 2 and Python 3, but Python 2 went end-of-life less than a year after this was uploaded, so you shouldn't need to worry about that anymore :))
If I wanted to implement a multi-agent reinforcement learning environment for a soccer game (multiple agents, trained separately, with separate models), what algorithms would I use for a continuous environment (not a grid world) where the players can walk/run/shoot anywhere? DQN reinforcement learning?
@@underlecht Because a non-symmetric kernel (one with an even size) yields a non-symmetric filter response. In the example above, this non-symmetry leads to a shift of the blurred image by half a pixel.
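A quick 1-D illustration of that half-pixel shift (my own example, not from the video): blurring an impulse with an odd-length averaging kernel keeps it centred, while an even-length kernel splits the mass across two samples, i.e. shifts the response by half a sample.

```python
import numpy as np

x = np.zeros(9)
x[4] = 1.0  # a unit impulse at index 4
print(np.convolve(x, np.ones(3) / 3, mode="same"))  # symmetric around index 4
print(np.convolve(x, np.ones(2) / 2, mode="same"))  # mass split over indices 4 and 5
```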
I am currently creating an agent-based model that will generate x number of agents. Each agent has a step function. I would LOVE to incorporate this reinforcement learning method into the model. How would you adjust it from taking a visual frame, like from a game, to using only the global environment variables? Is it as simple as swapping one for the other?
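For what it's worth, a common approach (a sketch under my own assumptions, not the video's code) is to swap the convolutional front end for fully connected layers and feed the global variables in as a flat vector; the rest of the DQN machinery (replay memory, epsilon-greedy action selection, target computation) stays the same:

```python
import torch
import torch.nn as nn

class VectorDQN(nn.Module):
    """Q-network for a vector observation instead of image frames."""
    def __init__(self, n_inputs, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),  # one Q-value per discrete action
        )

    def forward(self, state):
        return self.net(state)

q = VectorDQN(n_inputs=8, n_actions=4)  # e.g. 8 global variables, 4 actions
print(q(torch.rand(1, 8)).shape)        # torch.Size([1, 4])
```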
How come I am not capable of UNDERSTANDING this stuff? Is it OK to accept that I am dumb? Serious question... (I am a software engineer and studied computer science, but I always had problems with math, even though I always got the best marks when I had to deliver a project.)
Can I really get a job after learning stuff like machine learning online? I did a BSc in EEE, but as I was not interested, I did not do well and did not learn much. The only things I somewhat enjoyed were the C and C++ courses, digital logic design, and the Verilog/VHDL courses. Later I did a Python course from MITx and enjoyed solving all the exercises. Please give me hope.
Hello Phil, I think there is another mistake in the code: in the learn function, it should be reward_batch + gamma*np.max(Q_next, axis=1)*(1-terminal_batch) instead of just terminal_batch, since we are passing int(done) as a stored observation. For done=False, int(done)=0, and vice versa. If the episode does not end (i.e. done equals False), then we need to add the next Q-value; otherwise, we only add the reward. What do you think? Am I correct?
Yeah, I do think so, but I encourage you to try it both ways. Sometimes it trains the agent to finish the episode as fast as possible, and that's not what we want.
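For anyone following this exchange, here is a small numeric sketch of the target being discussed (toy numbers of my own): multiplying the bootstrap term by (1 - terminal_batch) zeroes it out for transitions that ended the episode, so terminal states contribute only their immediate reward.

```python
import numpy as np

gamma = 0.99
reward_batch   = np.array([1.0, 0.0, -1.0])
terminal_batch = np.array([0,   0,   1])     # int(done) for each transition
Q_next         = np.array([[0.5, 2.0],
                           [1.5, 0.3],
                           [9.9, 9.9]])      # next-state Q-value estimates

target = reward_batch + gamma * np.max(Q_next, axis=1) * (1 - terminal_batch)
print(target)  # [2.98, 1.485, -1.0]; the terminal transition ignores Q_next
```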
self.q = tf.reduce_sum(tf.multiply(self.Q_values, self.actions)) -- why are you doing this? I fail to understand the meaning of this line. Thank you in advance :)
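In case it helps anyone with the same question: assuming self.actions holds one-hot encodings of the actions taken (which is how this pattern is typically used), the element-wise multiply followed by the sum picks out the Q-value of the chosen action for each sample, which is what the loss is then computed against. A numpy sketch of the same idea:

```python
import numpy as np

Q_values = np.array([[0.1, 0.7, 0.2],
                     [0.9, 0.3, 0.4]])   # network outputs, one row per sample
actions  = np.array([[0, 1, 0],          # sample 0 took action 1
                     [1, 0, 0]])         # sample 1 took action 0
q = np.sum(Q_values * actions, axis=1)   # multiply-then-sum selects Q(s, a)
print(q)  # [0.7 0.9]
```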
So it seems I can't even install this or get started. I'm on Windows 10 with Python 3.7 working; I installed pip and gym, but when I got to TensorFlow, it's telling me there is "no matching distribution found for tensorflow-gpu". Some people suggested it's because I have the latest version of Python; others suggest using Anaconda. What should I do?
So after a few hours, I figured out that Anaconda and Miniconda are essentially the same thing, and the GitHub repo for box2d-py just says to use Anaconda, but I'd rather follow how you're doing this than use a VM with Python on it. It seems ridiculous that the few installs you show run so fluidly in your command line; what am I missing? I reinstalled Python four times, each with a different version, and made sure my environment variables were set correctly, and every one gave the same error when pip-installing box2d-py and tensorflow-gpu, including 3.6, which is the version you're on. What shell are you using? I'm just using Command Prompt, and I wonder if that's the problem.
@@jordanolson11 Sorry, just seeing this now. To the best of my knowledge, TensorFlow only works with Python 3.6. You can just install 3.6 in parallel with other versions, without nuking the whole install, I believe.
Amazing course, thanks a lot, Phil! One question: you were comparing policy gradient methods with reinforcement learning, but after a few searches, it seems that policy gradient methods are a family of algorithms within RL. Could you clarify?
Not useful. The creator should at least run the code once before wrapping up a section. Small syntax or logic errors are fine, but when there is an error in the code itself (e.g., new_state_batch in the first section) and there isn't much explanation of it, the whole hour spent coding goes for nothing.
This is titled a full machine learning tutorial on reinforcement learning, yet there is no proper order to it. I don't recommend watching it; there are tons of materials that are a lot simpler than this.