ML-Agents 1.0+ | Create your own A.I. | Full Walkthrough | Unity3D

Sebastian Schuchmann
10K subscribers · 65K views

A fast-paced, complete walkthrough tutorial on how to train an A.I. using ML-Agents 1.0 in Unity3D. This is a custom example I have created just for this video.
Timestamps:
00:14 Cloning the Repository
01:00 Tutorial starts
01:50 Explaining Actions of Agent
02:37 Explaining Observations of Agent
05:02 Creating Logic for Training
11:30 Training the A.I.
Need help? Join the Discord.
Support me on Patreon: www.patreon.com/user?u=25285137
Twitter: @sebastianschuc7
Links:
Link to Repository: github.com/Sebastian-Schuchma...
Link to Repository with finished scripts (Only recommended after doing the tutorial yourself!): github.com/Sebastian-Schuchma...
Commands:
These commands depend on your OS and on where the files are located. Read carefully and adjust accordingly.
CD into Folder where Repo is located: cd {FolderContainingRepo}
CD into Repo: cd A.I.\ from\ Scratch\ -\ ML\ Agents\ Example
CD into TrainerConfig: cd TrainerConfig
First Run: mlagents-learn trainer_config.yaml --run-id="JumperAI_1"
Open Tensorboard: tensorboard --logdir=summaries
Second Run: mlagents-learn trainer_config.yaml --run-id="JumperAI_2"
Last Run: mlagents-learn trainer_config.yaml --run-id=JumperAI_3 --env=../Build/build.app --time-scale=10 --quality-level=0 --width=512 --height=512
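For reference, the trainer_config.yaml passed to mlagents-learn pairs each behavior name with its training hyperparameters. The sketch below is illustrative only: key names and values vary between ML-Agents releases (newer versions require a top-level "behaviors:" section), so treat the file shipped in the repo as the source of truth rather than copying this.

```yaml
# Hypothetical minimal config for the "Jumper" behavior — illustrative only.
default:
  trainer: ppo
  batch_size: 1024
  max_steps: 5.0e5

Jumper:
  trainer: ppo
  max_steps: 5.0e5   # one commenter below reports better results with 5.0e7
```

The behavior name on the left must match the Behavior Name set on the agent's Behavior Parameters component in Unity, otherwise training fails with a KeyError for that name.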

Science

Published: 5 Jul 2024

Comments: 141
@SebastianSchuchmannAI · 3 years ago
Hello everybody! I have created a Discord channel for everybody wanting to learn ML-Agents. It's a place where we can help each other out, ask questions, share ideas, and so on. You can join here: discord.gg/wDPWsQT
@OProgramadorReal · 3 years ago
What an incredible job, man. Thank you so much for this video and repository!
@sovo94 · 3 years ago
This is one of the best tutorials I have ever come across. Thank you for the hard work.
@MotoerevoKlassikVespa · 4 years ago
It's amazing to see how much effort you put into your videos. Keep it up 👍🏻
@Diego0wnz · 4 years ago
This is exactly what I need! I've been trying to make a machine learning project for months, but all the other tutorials were outdated or incomplete. Finally someone who explains the concepts beyond the preset tutorials from Unity itself. Thanks so much man! PS: I still have a few questions, would you mind helping me over mail/voice chat, since I think YouTube comments are a bit messy for that?
@mateusgoncalvesmachado1361 · 4 years ago
Awesome video Sebastian!! I was struggling really hard to find any runnable example for it!
@ximecreature · 4 years ago
Extremely good video! Exactly what I was looking for. Keep it up, this is great work!
@augustolf · 4 years ago
Congratulations Sebastian, excellent video! I'm waiting for another video showing more about hyperparameters.
@dannydechesseo1322 · 4 years ago
Thank you so so so much, I was trying to find tutorials on this!
@kyleme9697 · 4 years ago
It's going to be a hell of a ride... keep it up Sebastian!!
@marianawerneck1198 · 4 years ago
Man, you're saving my life with these videos! You have no idea! Please keep up the great work!
@SebastianSchuchmannAI · 3 years ago
Thank you! :)
@ioda006 · 3 years ago
This is so cool. I'm learning ML separately from games, but this is making me want to try Unity!
@adrianfiedler3520 · 2 years ago
Very well done video. So much effort was put into this.
@robertbalassan · 2 years ago
Thank you for this ML-Agents playlist, brother. This is what I have been desperately looking for. It's not like I don't understand the documentation, but I am too lazy to read it unless there is an unsolvable problem to solve. So I would rather see someone go through the full process, as you did in this playlist.
@harrivayrynen · 4 years ago
Thank you. We really need 1.x ML-Agents material. Older material is often too outdated for the current version and very hard to use with today's releases.
@mahditarabelsi2535 · 3 years ago
What an incredible job, keep it up man.
@kunaljadhav7880 · 3 years ago
Keep up the good work Sebastian!
@santiagoyeomans · 4 years ago
The channel I was looking for for so long... Liked, subscribed and notifications turned on!!
@owengillett5806 · 4 years ago
Great stuff. Nice work!
@topipihko611 · 4 years ago
Excellent series! Chilled pace and you explain all the details clearly. + Puns (y) Have you tried creating an agent that has both continuous and discrete actions? E.g. angle of a cannon + a shooting action. Would be a great topic for the next episode! :)
@BenjaminK123 · 3 years ago
Man, this is real hard learning but very cool, thank you for creating this video :)
@shivmahabeer9450 · 4 years ago
Dude! Thank you. Saving me for my honours project!!
@alexandrecolautoneto7374 · 4 years ago
Keep up the good work, it helps a lot
@Andy-rq6rq · 3 years ago
Amazing production quality for a small YouTuber
@tattwadarshiguru3507 · 3 years ago
Dude you are awesome. May God bless you with great success.
@SebastianSchuchmannAI · 3 years ago
Thank you! :)
@SebastianSchuchmannAI · 4 years ago
Note for everybody watching: Two days ago ML-Agents Release 2 was released. Don't worry, the latest release just contained bug fixes, meaning you can still follow the tutorial without doing anything differently. The naming may be a bit confusing because "Release 2" sounds like a big thing, but it isn't; they just changed their naming scheme. I would always recommend using the latest release version! Enjoy! :)
@sayvillegames · 4 years ago
Hi, I have a question: I am working on an ML agent and I don't know where to put the shoot function in C#. Please help me.
@animeabsolute7130 · 4 years ago
Thank you, it's useful
@electrocreative4303 · 4 years ago
That's a lot of effort from you. Thanks so much for explaining :)
@HamadAKSUMS · 1 month ago
WOW, crazy man, thanks
@manzoorhussain5161 · 2 years ago
Goodness me! This is what I was looking for.
@herbschilling2215 · 3 years ago
Really helpful! Thanks!
@FaVeritas · 4 years ago
Thank you for this series. I've attempted to get started with ML in Unity about 3 times and gave up each time (mostly due to having to use Python alongside Unity). I'm hoping to apply ML to bot players in my 2D shooter game!
@mahdicheikhrouhou2286 · 3 years ago
Good video! Thanks, it helped me a lot!
@WilliamThyer · 4 years ago
Great video!
@maloxi1472 · 4 years ago
Excellent! Liked, subscribed, belled... you name it!
@miguelmascarenhas613 · 4 years ago
Awesome video
@TheStrokeForge · 3 years ago
I love it!!!
@itaybroder2897 · 3 years ago
Amazing vid
@jorgebarroso2496 · 3 years ago
If, instead of "summaries", your folder under TrainerConfig is called "results": use this command to open the results in TensorBoard (accessible through localhost:6006): tensorboard --logdir results --port 6006
@hansdietrich83 · 4 years ago
This is so awesome. Could you try to make a tutorial on an agent with legs learning to walk?
@bigedwerd · 4 years ago
Inspiring stuff
@albertoferrer8803 · 4 years ago
Congratulations! How do you not have a lot of subscribers? Nice video. I have a question: what do you think is the best way for a sensor on my agent to detect the nearest other agent and then attack? (I'm making symmetric agents in one environment, like a battle royale game.) I assumed sensors could detect the best time to attack. I'm trying to make a custom proximity sensor, but I think this is already covered by the Ray Perception Sensor 3D. What do you think would be the best solution? Now I'm subscribed :p
@nischalada8108 · 4 years ago
This is insane
@JasonPesadelo · 3 years ago
Hi Sebastian, do you know how to continue training from the previously saved model? Thanks!
@jorgebarroso2496 · 3 years ago
Is there a way to save an NN model when the agent makes a high score? So the best ones are kept
@mrstal5238 · 3 years ago
Thank you
@user-ec1tk7ws9v · 3 years ago
You're cool. Thanks for your channel!
@BramOuwerkerk · 4 years ago
I have a question: where in all of this is the Raycast Sensor used?
@RqtiOfficial · 3 years ago
I'm a little confused: there is no option to have my jumping script derive from Agent instead of MonoBehaviour. I added the ML-Agents package with the Package Manager.
@berkertopaloglu911 · 4 years ago
I did it the same way as you, but when I work with multiple environments the high score gets stuck at 8 and never increases
@applepie7282 · 3 years ago
Thanks man
@carlosmosquera7946 · 1 year ago
I'm sure it is a stupid question, but what's the difference from just using a raycast and an if statement to jump when the distance to the other car is close?
@ricciogiancarlo · 4 years ago
Is it possible to give some basic knowledge to the ML agents? Like, for example, stopping at the red light of a traffic signal
@maxfun6797 · 1 year ago
How many behavior parameters can we have, and how can we get the actions from these specific parameter components?
@jorgebarroso2496 · 3 years ago
I tried some training configs and the one that fit best was the default one with 5.0e7 steps instead of 500000. Also, my AI just stopped jumping when it reached 139, anyone know why?
@Markste-in · 3 years ago
How do you move the code that easily? What is the key binding for moving a selection? Need this xD thx!
@mohammadsoubra3532 · 3 years ago
When attempting this I keep getting "Couldn't connect to trainer on port 5004 using API version 1.0.0. Will perform inference instead." when playing it. I even started fresh by cloning your repo again and just hitting Play without starting anything, and it still gave me that error. I'm not sure how to fix it
@lucaalfino2105 · 3 months ago
Hi again! I have made a hide-and-seek environment with a single seeker agent and a single hider agent. What I would like to do is use the Ray Perception Sensor 3D to give rewards depending on whether the hider is in the seeker's view or not. What should I use for this, as the resources on the subject are rather scarce? Also, are the tags used in any way? For example, if the seeker's sensor hits the hider (tag 2), then the seeker gains reward and, respectively, the hider loses reward.
@lukewg · 8 months ago
When I add the Ray Perception Sensor I can't see it at all in the Scene view. Does anybody know how to fix this problem?
@ShinichiKudoQatnip · 2 years ago
TensorBoard summaries shows "no data found"?
@your_local_reptile6700 · 4 years ago
Where did the Behavior Parameters script come from :s
@keyhaven8151 · 19 days ago
I have always had a question about ML-Agents: they randomly select actions at the beginning of training. Can we incorporate human intervention into the training process of ML-Agents to make them train faster? Is there a corresponding method in ML-Agents? Looking forward to your answer.
@adamjurik5442 · 4 years ago
Hey, I did exactly what you said to do in the tutorial but the Decision Requester script doesn't seem to work for some reason. The script is exactly the same, I have checked and tried this multiple times. Although, great tutorial. EDIT: After a night's sleep I managed to try it out again, and when training, everything worked. Weird, but thanks :)
@Making_dragons · 2 years ago
Hey, why did the Decision Requester just work out of the blue? Mine doesn't work.
@Armadous · 1 year ago
Can you do a video on how to work with the curriculum feature?
@_indrahan · 4 years ago
Thank you so much for this informative video. I've added the Decision Requester component but the car doesn't jump, does anyone know what's causing this?
@miguelmascarenhas613 · 4 years ago
I have the same issue. Were you able to solve it?
@Sebksermaph · 3 years ago
It does jump, you still have to press the space bar. Try removing the Decision Requester and pressing space while playing, then compare with the Decision Requester attached ;). The AI has not yet learned to jump, so even though it doesn't jump on its own, pressing space still works!
@DeoIgnition · 4 years ago
I keep getting these two errors when I try to train: "File "g:\ml-agents\ml-agents\mlagents\trainers\trainer_controller.py", line 175, in _create_trainer_and_manager trainer = self.trainers[brain_name] KeyError: 'Jumper'" and "File "c:\program files\python37\lib\site-packages\tensorflow_core\python\summary\writer\writer.py", line 127, in add_summary for value in summary.value: AttributeError: 'str' object has no attribute 'value'"
@ygreaterr · 4 years ago
that car is do be jumping doe
@ThiemenDoppenberg · 3 years ago
The AI car is just jumping around all the time and I cannot get it to just drive on the road and only jump when the raycast hits the cars :/
@user-sy2et4dl6y · 3 years ago
Hi, your video is so good! But I have a question. How can I find the location of the application that we build? I mean, can you explain what "--env=../Build/build.app" means?
@nan1512 · 3 years ago
Can't you specify the build path in Unity when you build?
@resmike4444 · 2 years ago
Can you make a video for ML-Agents 2.1.0??? There are some changes there!
@alexandrecolautoneto7374 · 4 years ago
Nicuruuuu
@ahmedahres530 · 3 years ago
Hello, my agent does not jump during training. Everything else is working properly, however it seems like it does not do anything except stay idle and receive negative rewards. When I set actionsOut[0] = 1; in Heuristic() instead of actionsOut[0] = 0;, the agent keeps jumping instead. It seems like the agent just makes the same decision over and over again. Has anybody experienced the same? Thank you
@gradient4928 · 3 years ago
Ran into this issue as well
@isaiahhizer8796 · 3 years ago
Hello, I get this error in the Unity editor: "Couldn't connect to trainer on port 5004 using API version 1.0.0. Will perform inference instead." I run the mlagents-learn trainer_config.yaml --run-id="JumperAI_1" line, the Unity logo pops up, it tells me to press Play in the editor, I press Play, but because of this error (I think) it doesn't train and simply times out. I am using Windows 10. I searched the internet for a solution, turned off Windows Defender, tried to do it in a pyvenv, but it hasn't helped.
@darcking99 · 3 years ago
Same problem here
@alexsteed3091 · 3 years ago
This error occurs when mlagents-learn is not running. It is basically saying, "no training available (on port 5004), I will use inference (the trained brain) instead".
@ashitmehta5000 · 3 years ago
Can someone tell me how to start TensorBoard in the latest update (Release 6)? I am going nuts over this. Edit: run the following command instead: tensorboard --logdir results
@wiktor3453 · 3 years ago
tensorboard --logdir=results works for me
@realsoftgames7174 · 4 years ago
Why do your cars jump so smoothly, yet mine just continuously jump? I've trained for about an hour with the same results; the highest score I got was 11
@berkertopaloglu911 · 4 years ago
I got the same issue, have you figured it out yet?
@rizasandhi · 3 years ago
Same. The highest score I got was 10.
@chorusgamez755 · 3 years ago
same!!! have you figured it out yet?
@realsoftgames7174 · 3 years ago
@chorusgamez755 no I haven't, sorry, have you?
@protondeveloper · 3 years ago
@realsoftgames7174 No
@user-rl1ij5qo8r · 3 years ago
Some help: mlagents stops at self._traceback = tf_stack.extract_stack()
@whoami82431 · 3 years ago
For some reason, the player does not jump even after adding the Decision Requester. Not sure why this is happening. Any ideas? I have my project installed in the same folder where I have ML-Agents.
@ashitmehta5000 · 3 years ago
I ran into the same issue for HOURS... Then I noticed that I hadn't changed the branch size to 2. Try checking all the component values in each script.
@Sebksermaph · 3 years ago
It does jump, you still have to press the space bar. Try removing the Decision Requester and pressing space while playing, then compare with the Decision Requester attached ;). The AI has not yet learned to jump, so even though it doesn't jump on its own, pressing space still works!
@DeJMan · 4 years ago
I wrote an extra rule so that if the car jumps and it didn't jump over a car (an unnecessary jump), it loses 0.1 reward. This made the car jump only when it needed to after training, not all the time.
@SebastianSchuchmannAI · 4 years ago
Nice, great idea!
@BramOuwerkerk · 4 years ago
I have a problem where it basically sets the reward to 0.1 instead of adding 0.1. Can anyone help me?
@SebastianSchuchmannAI · 4 years ago
Did you use SetReward() instead of AddReward()?
@BramOuwerkerk · 4 years ago
@SebastianSchuchmannAI No, but I built a similar game to yours and there it worked
@etto4425 · 3 years ago
Hey Sebastian! I watched your tutorial and had loads of fun replicating it. The only problem is that my AI isn't learning! It just jumps over and over with no sign of improvement, even after 34 minutes! Do you have any solution? Thanks!
@arthanant8634 · 1 year ago
A little late to the party, but did you find a solution?
@etto4425 · 1 year ago
@arthanant8634 I deleted Unity a few months ago… guess you're gonna have to keep looking 😅
@arthanant8634 · 1 year ago
@etto4425 damn, okay lol
@ferdinandospagnolo7664 · 4 years ago
Why is it better to do the final training on the built version rather than in the editor?
@SebastianSchuchmannAI · 4 years ago
It's simply faster. The editor has a lot of overhead.
@chorusgamez755 · 3 years ago
Hey guys, can anyone help me? When I run this the cars just jump endlessly, it's really annoying; they don't improve at all. If I run a Unity example AI, such as the ball-balancing one, it trains fine (also, the rewards for my AI don't get printed to the command prompt). Edit: I fixed that. But it still isn't learning anything after 10 hours
@arthanant8634 · 1 year ago
Hey, I have the same problem. Did you find a solution for the AI not learning?
@Fuzzhead93 · 4 years ago
This is so cool! I'm curious how you could train different intelligences of AI for different difficulties, and also curious about the performance impact of using ML-Agents in a shipped game, on mobile or PC
@thomasjardanedeoliveirabou3175 · 4 years ago
Is anyone else getting bugs with the file? My car is just FLYING, I had to adjust the mass, and the high score is HUGE.
@SebastianSchuchmannAI · 4 years ago
Hey, maybe my physics settings got lost or something. I set the gravity in the physics settings quite high, to around 80, if I recall correctly.
@Draco98 · 4 years ago
It "fails to create process" when I write mlagents-learn trainer_config.yaml --run-id="JumperAI_1". Can someone help?
@SebastianSchuchmannAI · 4 years ago
Remote debugging is always tough, but I will try my best. First make sure you have Python 3.6.1 or higher installed (check via "python --version"). Then make sure the ML-Agents package is installed; just run the command "mlagents-learn --help" to verify that it works. Next, make sure you are really located in the TrainerConfig directory, because the trainer_config.yaml file is located there. You can try the "dir" command on Windows or "ls" on Mac/Linux and check that it prints "trainer_config.yaml". If nothing helps, I would advise you to try reinstalling Python/ML-Agents following the instructions here: github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md Hope you find a solution.
@KevinLeekley · 4 years ago
Hmm, I think I am having the same issue. I'm running Python 3.6.8, the mlagents-learn help command does work, and I am definitely in the TrainerConfig dir, but I get this error when I run the command: "Trainer configurations not found. Make sure your YAML file has a section for behaviors. mlagents.trainers.exception.TrainerConfigError: Trainer configurations not found. Make sure your YAML file has a section for behaviors."
@SebastianSchuchmannAI · 4 years ago
So on a new Windows machine I had the same error, and it took me some time to fix it. First I uninstalled all versions of Python I had. Next I made sure to remove the Python folder in Program Files; this one caused me a lot of trouble. Before reinstalling Python, make sure that the command "mlagents-learn" raises the "command not found" error. In my case, even after uninstalling Python it was stuck on "fails to create process", and only after removing the Python folder in Program Files did it stop doing that. Then I installed a fresh version of Python 3.6.5, making sure to check the "Add to PATH" box when installing, and after "pip3 install mlagents" it worked. Hope this helps. Every machine is different.
@ElPataEPerro · 1 year ago
😀
@DeJMan · 4 years ago
Everything worked except TensorBoard. It just says "no data".
@fetisistnalbant · 3 years ago
Yeah, I can't see it either
@jorgebarroso2496 · 3 years ago
Same
@jastrone9505 · 3 years ago
When I change the script from MonoBehaviour to Agent it says it's a compiler error. What's wrong?
@ThiemenDoppenberg · 3 years ago
Import ML-Agents into the project via the Package Manager. You have to choose a version. I don't know what version is recommended for this tutorial, but for ML-Agents Unity Release 12 it is 1.7.2, I believe.
@jastrone9505 · 3 years ago
@ThiemenDoppenberg I have imported ML-Agents, I can see the files in the project
@ThiemenDoppenberg · 3 years ago
@jastrone9505 What version of it do you have?
@ThiemenDoppenberg · 3 years ago
@jastrone9505 You should go to the Package Manager, then the advanced tab -> preview packages. Then go to 'In Project' and check the version. Put it on preview 1.7.2 and click update. This worked for me with this tutorial in 2019.4
@jastrone9505 · 3 years ago
@ThiemenDoppenberg I tried it but it still does not work
@DeJMan · 4 years ago
I don't understand the purpose of the Heuristic function.
@SebastianSchuchmannAI · 4 years ago
It is usually used for testing your agents via human input or some hard-coded logic. Most classic AI in games is implemented in a heuristic fashion.
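A heuristic in this sense can be as simple as a hard-coded rule standing in for the trained policy. Here is a minimal Python sketch of the idea; the function name, observation, and threshold are all made up for illustration (in ML-Agents this logic would live in the agent's C# Heuristic() method, writing the chosen action into actionsOut):

```python
def heuristic_action(distance_to_next_car: float,
                     jump_threshold: float = 2.0) -> int:
    """Hard-coded stand-in for a trained policy.

    Returns the discrete action: 1 = jump, 0 = do nothing.
    The threshold value is arbitrary, chosen only for this sketch.
    """
    return 1 if distance_to_next_car < jump_threshold else 0


heuristic_action(5.0)  # far away -> 0 (keep driving)
heuristic_action(1.2)  # close    -> 1 (jump)
```

This is essentially the raycast-plus-if-statement approach a commenter asks about above; the point of training is to let the agent discover such a rule (and better ones) on its own.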
@mateusgoncalvesmachado1361 · 4 years ago
Sebastian, how can I create my own trainer? I have PyTorch models waiting to be used with Unity's ML-Agents haha :)
@lisastrazzella71 · 3 years ago
So here in this video you don't write the code, you only explain it? Because I don't know if that is already the beginning or not :/ Many thanks in advance :)
@Timotheeee1 · 3 years ago
When I train the cars they just jump endlessly and never improve
@chorusgamez755 · 3 years ago
same! did you figure it out?
@maxfun6797 · 1 year ago
Your sad voice 😂 😂 😂
@monkeyrobotsinc.9875 · 3 years ago
Outro music too ghetto. I'm not in the hood. I'm trying to learn, not rob and kill someone.