
Build a Chrome Dino Game AI Model with Python | AI Learns to Play Dino Game 

Nicholas Renotte
260K subscribers
34K views

Learn to build a custom reinforcement learning model for gaming...in this case for web games like the Google Chrome Dino game with Python and Deep Learning.
Chapters
0:00 - START
0:42 - Explainer
1:23 - PART 1: Install and Import Dependencies
1:58 - Breakdown Board
20:18 - PART 2: Create a Custom Game Environment
1:16:00 - PART 3: Train the AI Model
1:31:57 - PART 4: Get the Model to Play the Chrome Dino Game
1:41:28 - Ending
Oh, and don't forget to connect with me!
LinkedIn: bit.ly/324Epgo
Facebook: bit.ly/3mB1sZD
GitHub: bit.ly/3mDJllD
Patreon: bit.ly/2OCn3UW
Join the Discussion on Discord: bit.ly/3dQiZsV
Happy coding!
Nick
P.s. Let me know how you go and drop a comment if you need a hand!
#deeplearning #python #ai #gaming

Science

Published: 22 May 2024

Comments: 117
@aymanaslam7267 · 1 year ago
Thanks for listening to our feedback Nick! I think you should mention that you're building a custom RL environment in this tutorial, since a lot of people who have wanted a tutorial on this might miss this. Thanks again for all the content!
@NicholasRenotte · 1 year ago
Yeah, I need to call it out more!
@Hassibayub · 1 year ago
Great Tutorial. Definitive explanations and presentation. Nick consistently exceeds expectations by providing top-class tutorials. Thanks a lot, NICK. 👍❤
@NicholasRenotte · 1 year ago
Thanks so much @Muhammad!!
@vialomur__vialomur5682 · 1 year ago
Thanks a lot, I always wanted a custom environment :)
@fustigate8933 · 1 year ago
Been waiting for this one!🔥
@NicholasRenotte · 1 year ago
🙌
@SimonLam2024 · 1 year ago
As always, thank you for the great tutorial.
@NicholasRenotte · 1 year ago
Thanks for checking it out Simon!!
@thewatersavior · 1 year ago
Just awesome, thank you!
@peralser · 1 year ago
Nick... you always do the best!!! Thanks!!
@ai.egoizm2.059 · 1 year ago
Cool! That's what I need. Thanks!
@Thomas-uq6gl · 1 year ago
Hey man, I really enjoyed this video. I would be interested in some multi-agent RL next, where models play in the same environment at the same time, against or with each other.
@NicholasRenotte · 1 year ago
Definitely, going to need to use a different framework for that; I don't think SB3 has support for multi-agent yet. I took a look at the Unity SDK and apparently it can handle it. Might take a deeper dive when I'm back.
@Quantiflyer · 9 months ago
@NicholasRenotte I know I am very late to reply, but the Unity SDK is incredibly easy to set up, and it doesn't need any extra modifications to work with multiple agents. Compared to the Unity SDK, OpenAI Gym is very difficult. The only downside of the Unity SDK is (of course) that it only works with Unity.
@gonzalobaezcamargo2210 · 1 year ago
Great stuff again! You are creating so much content that I'm struggling to keep up, but please keep going!
@meetvardoriya2550 · 1 year ago
We used to see that JavaScript hack for a Dino bot, and here's Nick automating it with custom logic using RL 😍. Always been a fan of your content, Nick, and congrats on your 80k subs 💥💯
@NicholasRenotte · 1 year ago
Hahaha, someone told me you can remove the die line in the JS. Now I'm like, I should've just done that LOL
@omarismail4734 · 1 year ago
Thank you, Nick, for this video! May I ask what screencast recorder you are using for your videos? It seems that you can zoom and pan while you are screencasting. Or is this post-production?
@NicholasRenotte · 1 year ago
It's live Omar, I use an Apple trackpad to zoom! I show the setup in the battle station breakdown!
@prasenlonikar9753 · 1 year ago
Thanks for creating this video. Can you please create a video on evolutionary AI to play the Dino game?
@nikvod1330 · 1 year ago
Hey Nick! Love you ^_^ That is so cool! What's next? What about a game in which you need to control the mouse and buttons?
@NicholasRenotte · 1 year ago
Hmmm, yeah that would be cool!!
@Quantiflyer · 9 months ago
Use the keyboard and pyautogui modules for that.
@brandencastle3526 · 5 months ago
Is there any way to use an image to signal a reset? Here we used the "game over" text, but I wanted to see if you could use an image instead.
@captainlennyjapan27 · 1 year ago
Hello Nicholas! As part of my Master's project, I want to compare two human voice audio files. I want to compare two spectrograms and look for similarities and differences. Do you have any advice on where to get started with this? Of course, I'll be doing my own research, but I thought I'd ask my favorite data science YouTuber for his wisdom! ;)
@frankz61 · 1 year ago
Changing the "cv2.waitKey(1)" to "cv2.waitKey(0)" will fix the render() function freezing issue.
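
For context, a minimal sketch of how that fix might sit in the tutorial's render() method (the function shape here is an assumption, not the video's exact code):

    import cv2

    def render(frame):
        # Show the latest capture; waitKey(0) blocks until a key press,
        # which (per the comment above) avoids the freeze seen with waitKey(1).
        cv2.imshow('Game', frame)
        cv2.waitKey(0)
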
@ashkankiafard8566 · 1 year ago
Hi Nick! Thank you so much, this was exactly what I was looking for! One question though: how do I know which algorithm and policy from Stable Baselines I should use for each game? Will I understand if I just read the docs? Keep up the great work! Love your tutorials!
@NicholasRenotte · 1 year ago
Yep, go through the docs and read papers! Normally I'll try out a range of algos; for optimization I'll use Optuna!
@ashkankiafard8566 · 1 year ago
@NicholasRenotte Thanks a lot!
@jumbopopcorn8979 · 1 year ago
Hello! I tried doing this with geometry dash, a similar game where you have to jump over obstacles. I trained it for 100,000 steps, and it's about as bad as pressing random buttons. The get_observation() function had enough information to see the objects to jump over, but it feels like my model didn't learn anything. After looking a little further, I found that the reward system was not related to time at all. It felt like it was picking random numbers, but I used the same one as you. Any help would be appreciated!
@Alex-ln7ds · 1 year ago
Hi Nicholas, thanks for the content (amazing as usual 😄). What PC specs do you have? Like GPU/CPU/RAM?
@NicholasRenotte · 1 year ago
Cheers @Alex, it's a Ryzen 7 3700x, 2070 Super and 32 GB of DDR4 (I think). Stay tuned for Sunday's vid, I'm going to do a deep dive into my hardware!
@Alex-ln7ds · 1 year ago
@NicholasRenotte Yeah, watched the video already! Thanks for answering. And additional thanks for the content; you can see how much effort you put into your videos! 🙏
@zaplavs6944 · 1 year ago
Nicholas Renotte, hi. Thank you so much for this video, I've learned a lot. If you're interested, I've improved your code a bit. 1. I removed the letter-based (OCR) detection of the end of the game, since it cost too many FPS, and instead detect game over by a colour change at a specific spot; my FPS grew to 25. 2. I widened the dinosaur's view, because it didn't have time to catch the moment to jump. 3. I changed the reward: -1 for any action and +2 for inaction; thanks to this I got rid of random actions and got more deliberate ones. (Translated from Russian, mistakes are possible.)
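
A minimal sketch of that pixel-based game-over check (the coordinates and threshold are placeholders to calibrate against your own capture region; the video itself uses pytesseract OCR here instead):

    import numpy as np

    # Hypothetical pixel that only darkens when the "GAME OVER" text is drawn.
    DONE_X, DONE_Y = 120, 60

    def get_done(gray_frame: np.ndarray) -> bool:
        # gray_frame is the grayscale screen capture; checking one pixel
        # is orders of magnitude cheaper than running OCR every frame.
        return gray_frame[DONE_Y, DONE_X] < 100
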
@buzzchop5520 · 1 year ago
Yo, I'm just getting started working on this. NICE
@raadonyt · 11 months ago
Can you please share that reward code of yours????
@tetragrammat0n · 19 days ago
I have applied these changes but the FPS remains 1.
@mdiianni · 1 year ago
Is it possible to combine the training steps with actual human interaction/training? Imagine a common scenario where a teacher provides some instructions, the students follow them as a first hint, but after a few more lessons you can see most students are getting the basics right, and from there it's just a matter of training hard (lots of epochs, or callbacks starting from 88,000 and not from 0), if that makes sense?
@m12772m · 1 year ago
Bro, please go for Disco Diffusion! Need a model for that! If you could do multiple initial images with multiple prompts, maintained in a Google Sheet... for the moment it's one image with one prompt... that would be awesome.
@arjunprasaath5538 · 1 year ago
Hey Nick, awesome work! A quick question: you got an FPS value of 2, which is very low; any thoughts or ideas on improving it?
@Froparadu · 1 year ago
Hey Arjun. I know this might be late, but I started my RL journey last week and have watched Sentdex's RL series (SB3). I haven't finished watching this video, but in the Sentdex video he was rendering the UI of Lunar Lander to show the training process visually, which drastically reduced the speed of training since the UI had to render. If the training process were headless (without visuals), it would have sped up the training. Extrapolating that theory to this video: the game can't be run headless unless you have the internal code of the game in your custom environment. I assume Nicholas is capturing the frames and feeding them to the DQN network. There are a bunch of other factors that may affect the training process as well (and the reason I stated might not be true in Nicholas' case). Hope this helps! EDIT: I saw that Nicholas is using pytesseract to predict whether the game is over and to indicate that the episode is "done". That seems like a very expensive operation, since get_done() is going to run every frame to check whether the game is over. Devising another way to check that might drastically speed up the training process.
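
A quick way to test that theory is to time both checks on the same frame (a sketch; the single-pixel check mirrors the colour-based approach described a few comments up):

    import time
    import numpy as np
    import pytesseract

    frame = np.zeros((83, 100), dtype=np.uint8)  # dummy grayscale capture

    t0 = time.time()
    pytesseract.image_to_string(frame)  # OCR-based done check, as in the video
    t_ocr = time.time() - t0

    t0 = time.time()
    _ = frame[60, 50] < 100             # single-pixel done check
    t_pixel = time.time() - t0

    print(f'OCR: {t_ocr * 1000:.1f} ms, pixel: {t_pixel * 1000:.4f} ms')
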
@arjunprasaath5538 · 1 year ago
@Froparadu Thanks a lot man, your feedback helps me validate my thought process.
@xiaojinyusaudiobookswebnov4951 · 10 months ago
@Froparadu That makes a lot of sense, thank you.
@gumbo64 · 1 year ago
This is timed perfectly for me: in my project I'm using Python Selenium to play Flash games (Ruffle), which should keep all the inputs, screenshots etc. contained. I was wondering whether pydirectinput would be faster, because speed is of course very important for training. Anyway, this vid will help a lot with the environment and image processing and everything, so thank you!
@NicholasRenotte · 1 year ago
It's meant to be faster, but for whatever reason I couldn't get the gym environment past 2 FPS. I tried a ton of optimization but couldn't find the specific reason why; I'm going to dig into it more if I ever do more games.
@wiegehts6539 · 1 year ago
@NicholasRenotte Please find out!
@neo564 · 1 year ago
Can you make a video about how we can retrain our previous model?
@jacobmargraf4564 · 1 year ago
Great video! Is there a way to continue training starting from my latest model, or do I have to start the training all over again if I want it to learn more?
@MAAZ_Music · 1 year ago
Use the latest model.
@lorenzoiotti · 1 year ago
Hi, when I try to load a model back it works properly, but if I try to resume training from the saved model it starts training from scratch. Does anyone know what I am doing wrong?
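
For the resume-training questions above, a minimal sketch, assuming stable-baselines3 as in the video (paths and the env class name are placeholders): re-attach the environment when loading, and pass reset_num_timesteps=False so the run continues rather than restarting.

    from stable_baselines3 import DQN

    env = WebGame()  # the custom env built in the video (name assumed)

    # Re-attach the env when loading; path is a placeholder.
    model = DQN.load('./train/best_model_88000', env=env)

    # Optional: restore the replay buffer if you saved it earlier with
    # model.save_replay_buffer(...); loading the model alone does not include it.
    # model.load_replay_buffer('./train/buffer_88000')

    # reset_num_timesteps=False keeps the timestep counter running instead
    # of starting the schedule and logging from zero.
    model.learn(total_timesteps=10000, reset_num_timesteps=False)

Note that without the replay buffer the DQN still resumes from the saved weights; it just re-fills its buffer from new experience first.
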
@PedroHenrique-ci8fy · 9 months ago
Anyone else receiving the error "'tuple' object has no attribute 'shape'" when trying to make the model play from the 88k-step checkpoint available in the GitHub repository?
@luklucky9516 · 1 year ago
Hi, I really like your content, but I suck at programming, mainly because I only started last year and don't have a course or anything like that to follow. Do you have a suggestion for me, as a beginner, for learning reinforcement learning? I start losing track of what the code you write does really quickly.
@TheRamsey582 · 10 months ago
pydirectinput only works on Windows; will this work on Mac with pyautogui? I have not attempted it yet, but I am planning to.
@gggffffeee · 7 months ago
48:31 Is it okay to compare the number of black pixels instead of the full spelling?
@novis1177 · 1 year ago
Nice video! But have you ever considered building the agent on your own, not just using Stable Baselines? Because I think Stable Baselines is quite limited.
@NicholasRenotte · 1 year ago
I have but it's quite involved and would probably end up being a monster tutorial. Haven't had the time to dedicate to it yet, but maybe soon!
@solosoul2041 · 1 year ago
Nick! How can I get a model summary from a .pth file?
@brhoom.h · 4 months ago
Hello, thank you so much for this video, but I'm wondering if I could download your 88k trained model and train it again so it reaches 20k or more? Can I do that or not, and how?
@SaiyanRGB · 2 months ago
Can this be applied to 3D games in Unreal Engine?
@ozzy1987mr · 1 year ago
I can't find good, more detailed material on this topic in Spanish... your videos help me a lot.
@SalahDev-wz8ob · 1 year ago
Yoo Nick, well done! I was wondering how you could solve the 1 FPS issue? Thank you dude.
@toppaine4008 · 9 months ago
Hi, I have a question about the get_observation function: why do you want to resize, and how does it affect our code? (I'm a beginner, so this might seem like a dumb question.)
@gr33nben40 · 1 month ago
I think it's to minimize the size of the captured data; the smaller the data, the faster it'll run, I think.
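
That matches how get_observation() is usually built for this kind of screen-capture env; a hedged sketch (the capture region and target size are placeholders, not the video's exact values):

    import cv2
    import numpy as np
    from mss import mss

    cap = mss()
    game_location = {'top': 300, 'left': 0, 'width': 600, 'height': 500}  # placeholder

    def get_observation() -> np.ndarray:
        # Grab the region, drop the alpha channel, grayscale, and shrink:
        # a 100x83 gray frame is over 100x fewer values than the raw
        # 600x500x3 colour capture, so every training step gets cheaper.
        raw = np.array(cap.grab(game_location))[:, :, :3]
        gray = cv2.cvtColor(raw, cv2.COLOR_BGR2GRAY)
        resized = cv2.resize(gray, (100, 83))
        return np.reshape(resized, (1, 83, 100))  # channels-first for a CNN policy
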
@affanrizwan3672 · 1 year ago
Hey Nick, I am having trouble downloading your model; the system is saying a virus was detected.
@sanjoggaihre4178 · 1 month ago
While running DQN I got an error:
Anaconda\Lib\site-packages\torch\_dynamo\config.py:279, in is_fbcode()
    278 def is_fbcode():
--> 279     return not hasattr(torch.version, "git_version")
AttributeError: module 'torch' has no attribute 'version'
How can I resolve it?
@sbedi5702 · 1 year ago
Hey, this is a great video! I have a question: when installing pydirectinput I get this error on my Mac: "AttributeError: module 'ctypes' has no attribute 'windll'". What should I do?
@neo564 · 1 year ago
It might be that pydirectinput is only for Windows.
@hunterstewart3172 · 1 year ago
You can use pyautogui.
@samibenhssan3121 · 1 year ago
You are a marvellous guy! But I think it is time to branch out a bit from ML into data engineering / MLOps.
@NicholasRenotte · 1 year ago
Cheers, yeah we'll probably get into some of that stuff later this year!
@pushpakbakal4457 · 1 year ago
Heyy Nicholas, I'm stuck at the initial stage of installing pydirectinput. I'm getting this constant error: module 'ctypes' has no attribute 'windll'. Please go through this if possible.
@gmancz · 9 months ago
If you use Mac or unix-like systems, you should use PyAutoGUI instead.
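
A hedged sketch of making the input layer portable (pydirectinput deliberately mirrors pyautogui's API, so the press() call is the same either way):

    import platform

    # pydirectinput relies on ctypes.windll, hence the Windows-only errors
    # reported above; pyautogui works on Windows, macOS and Linux.
    if platform.system() == 'Windows':
        import pydirectinput as inp
    else:
        import pyautogui as inp

    inp.press('space')  # jump
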
@jinxionglu5008 · 1 year ago
great content!
@YasinShafiei86 · 1 year ago
Can you please make a Video Classification tutorial?
@sunidhigarg673 · 1 year ago
Please do more stuff like this. Is there any Discord channel?
@GamingExpert0321 · 1 year ago
Hey, I have a quick question, really hoping for your reply. Because you are using reinforcement learning, what I think will happen is: no matter how many times we train it (even a trillion times), it will still have a quite high probability of game over on the first few cacti, because cactus positions are random and the AI doesn't care about cactus position; it only cares about at what time it jumped in the past and at what time it will have to jump this time to maximize the reward. But the cacti are random, so I think it will never be perfect. Am I wrong? Please answer.
@abhinavtiwari8481 · 10 months ago
Yes, you are wrong here, because it does not see when and where it jumped in the past; it sees what the situation around it was when it jumped and got rewarded (that's the whole purpose of using "screenshots" here), so that when the AI encounters the same situation again it will jump, instead of recording the time and place of the jump. That would just be recording, not learning.
@giochelavaipiatti8053 · 1 year ago
Great tutorial. I just have a question: after about 30,000 training steps I still see no progress in how the AI plays. Is that normal, or might there be a bug in my code?
@NicholasRenotte · 1 year ago
Nope, that's normal. Keep training! It takes a while. You could also try lowering the learning-starts parameter so it starts training the DQN from the get-go!
@giochelavaipiatti8053 · 1 year ago
@NicholasRenotte Thank you!
@captainlennyjapan27 · 1 year ago
It’s here!!!!!
@NicholasRenotte · 1 year ago
IT ISSS!! Thanks for checking it out Leonard!
@NoahRyu · 1 year ago
How can I add negative reward functions? I tried searching all over the internet but can't find anything similar to this code...
@NoahRyu · 1 year ago
Never mind, I found out how to do it.
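
For anyone else searching: one common pattern is to shape the reward returned by step() instead of the flat +1 per frame. A sketch with illustrative values only (the action map, with 2 as no-op, is assumed from the tutorial):

    def shaped_reward(action: int, done: bool) -> float:
        # +1 for surviving the frame, a small penalty for pressing a key,
        # and a large penalty when the run ends. Tune these numbers;
        # they are placeholders, not the video's values.
        reward = 1.0
        if action != 2:
            reward -= 0.1
        if done:
            reward -= 10.0
        return reward

Inside the custom env this would replace the constant reward in step()'s return value.
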
@FatDwarfMan · 8 months ago
Can someone help me with Jupyter? Like, how do I set everything up?
@skaiyeung7183 · 1 year ago
nice
@dr.mikeybee · 1 year ago
This looks like a fun one, but it only runs on Windows. I think if you'd used pyautogui instead, this would also run on Linux and macOS.
@NicholasRenotte · 1 year ago
Ohhhh, didn't realise direct input would cause issues, should be easy enough to swap out for pyautogui, thanks for the heads up @Michael!
@MatheusMorett · 11 months ago
thanks!!!!
@JacobSean-iy3tl · 1 month ago
Hey, I would like to see you try this with FIFA 😁
@musa_b · 1 year ago
Hey Nick, is it possible to use TensorFlow for RL?
@NicholasRenotte · 1 year ago
It is, might eventually do it as part of my DL basics/from scratch series (e.g. face detector, iris tracking etc)
@musa_b · 1 year ago
that would be really great, brother!!
@musa_b · 1 year ago
Thanks for always helping us!!!
@raadonyt · 11 months ago
Did you delete the file from GitHub? I have trained my model to 60,000 steps but there's literally no progress at all. Can you please share the source code? I have to present it tomorrow.
@aankitdas6566 · 1 year ago
Getting a "module 'tensorflow' has no attribute 'io'" error while trying to run the model. Any fixes, anyone? (I have tensorboard installed.)
@NicholasRenotte · 1 year ago
Might be a tensorboard issue, try running through this: stackoverflow.com/questions/60146023/attributeerror-module-tensorflow-has-no-attribute-io
@erfanbayat3974 · 1 year ago
you are the GOAT
@sspmetal · 1 year ago
I would like to create an AI that trades binary options. Do you have any suggestions?
@rlhugh · 1 year ago
Hi. What do you mean by 'binary options'?
@sspmetal · 1 year ago
@rlhugh I mean binary options trading. It's like forex, but you have to predict the direction and the expiry time.
@BodyBard · 3 months ago
Hey, did it work out for you? I also made some good agents on futures and would like to share experiences.
@dewapramana3859 · 1 year ago
Nice
@owolabitunjow9041 · 1 year ago
Nice video!
@wriverapaniagua · 1 year ago
Excellent!!!!!
@seandepagnier · 1 year ago
I am disappointed with the results. I think a single IF statement on a particular pixel of the image would give far better performance.
@ApexArtistX · 7 months ago
And DXcam is faster than MSS.
@fruitpnchsmuraiG · 6 months ago
Hey, I'm getting a lot of errors since gym has now been shifted to gymnasium. How do I fix that? Did you get the code running?
@ApexArtistX · 6 months ago
@fruitpnchsmuraiG What error message?
@ApexArtistX · 6 months ago
@fruitpnchsmuraiG gym's step() returns 4 values and gymnasium's returns 5, if I remember correctly, so you need some changes in your code.
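
Concretely, the step() signature changed between the two libraries; a sketch using a stock env (the Dino env would follow the same API once ported):

    import gymnasium as gym

    env = gym.make('CartPole-v1')   # stand-in environment
    obs, info = env.reset()         # gymnasium's reset() returns (obs, info)

    # Old gym:   obs, reward, done, info = env.step(action)        (4 values)
    # Gymnasium: the done flag is split into terminated/truncated  (5 values)
    obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
    done = terminated or truncated
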
@jasonreviews · 1 year ago
It's easier than that. Just remove the die feature with JavaScript. Don't need AI. Lols.
@NicholasRenotte · 1 year ago
😂 damn, I didn't even think of that!
@musa_b · 1 year ago
Hahh🥴🥴! This was left
@NicholasRenotte · 1 year ago
🙏🙌🙏
@JurgenAlan · 1 year ago
Please check your LinkedIn