
Create your own A.I. in Unity | ML-Agents Tutorial 2020 

Bot Academy
11K subscribers
25K views

In this video I'm going to show you how to create an A.I. in Unity from scratch. Please let me know if you encounter any problems, and I'll try my best to help you out.
Resources:
github.com/Unity-Technologies...
Downloads:
Scripts + Configuration:
botacademy.s3.eu-central-1.am...
BallAgent Script:
botacademy.s3.eu-central-1.am...
Camera Script:
botacademy.s3.eu-central-1.am...
Configuration:
botacademy.s3.eu-central-1.am...
Unity project:
github.com/Bot-Academy/BallJump
Find me on:
Discord: / discord
Twitter: / bot_academy
Instagram: / therealbotacademy
Patreon: / botacademy
Credits:
19:50 - End
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Music: Ansia Orchestra - Hack The Planet
Link: • Ansia Orchestra - Hack...
Music provided by: MFY - No Copyright
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Contact: smarter.code.yt@gmail.com
00:00 Intro
00:47 General Information
01:16 Install Visual Studio
01:40 Create Project & Install Packages
02:28 Environment Setup
06:28 AI Setup
14:06 Test the environment yourself
15:06 Camera Follow Setup
15:59 Duplicating the Environment
17:33 AI Configuration
18:37 Train the AI
19:37 Run the AI
20:27 Outro

Published: 9 Jul 2024
Comments: 132
@BotAcademyYT · 4 years ago
Wanna stay up to date & join the Bot Academy Community? Then check out my Discord Server: discord.gg/6fRE4DE
@ioda006 · 3 years ago
This is the best video I've seen about ML-agents! Can't wait to watch your others and learn about the deeper stuff in ML-agents
@Mahdi-ug1qy · 3 years ago
Comprehensive, to the point, clear, useful, no bullshit, practical. Really fantastic stuff. Can't tell you how much bullshit I've come across on YouTube trying to understand ML-Agents. Thank you so much. Subbed.
@rainshih7542 · 4 years ago
Ya! Outstanding sharing. I'll sit back on the couch and wait for the config.yaml & agent script tutorials to come~
@elijaheumags5060 · 4 years ago
Thanks! This tutorial is much more up to date and much clearer in its instructions on how to create an AI agent than the other YT videos I found. I do wish you would explain the parameters in the training config in the next video, but other than that, you have my thanks!
@Mananaut · 4 years ago
Impressive work! Keeping an eye on this.
@aidanahern6701 · 3 years ago
For everyone using the latest version of ML-Agents, use this config file:

behaviors:
  BallAgent:
    trainer_type: sac
    hyperparameters:
      learning_rate: 0.0003
      learning_rate_schedule: linear
      batch_size: 128
      buffer_size: 200000
      buffer_init_steps: 0
      tau: 0.005
      steps_per_update: 10.0
      save_replay_buffer: false
      init_entcoef: 0.5
      reward_signal_steps_per_update: 10.0
    network_settings:
      normalize: true
      hidden_units: 3
      num_layers: 3
      vis_encode_type: simple
    reward_signals:
      extrinsic:
        gamma: 0.99
        strength: 1.0
@futuretrunks6927 · 3 years ago
Thanks!
@ruvindarathnagoda3549 · 4 years ago
Helpful info. Thanks a lot for posting this !!!
@RandomVideos20244 · 3 years ago
Best video on ML Agents. Subscribed 👍
@adriendod · 3 years ago
Great video! Can't wait to apply this to my project! Keep it up, subscribed!
@nicolaszulaica6201 · 4 years ago
Great video !! Helped a lot.
@smexyman9294 · 4 years ago
this is basically just Shadow Clone Jutsu when they found out that training comes to all clones
@matta.f8617 · 3 years ago
Good video, helps me a lot!
@anandkuwadekar4693 · 4 years ago
Great Job - very well done!!
@BotAcademyYT · 4 years ago
Anand Kuwadekar thanks man :)
@vitotonello261 · 3 years ago
Awesome video!
@jatinpawar8523 · 4 years ago
Great Content. Well explained. Waiting for new videos. Keep up the good work. Love from India.
@BotAcademyYT · 4 years ago
thanks man :)
@aryan_kode · 4 years ago
loved it
@ako4435 · 2 years ago
great tutorial
@aryan_kode · 4 years ago
waiting for the next video
@ticktockworld5895 · 3 years ago
Super exploration
@raghuramabl6729 · 4 years ago
super👌🙌
@ssssteve5283 · 4 years ago
Thanks for your great tutorial. I'm looking forward to your next video about all the parameters of trainer_config.yaml with their meanings and effects. I really look forward to it.
@foolarchetype · 3 years ago
Thx i did it
@followchilboy992playzontwi3 · 4 years ago
nice to know
@MrTomTeriffic · 4 years ago
Great example!! I was looking for a simple model that didn't require knowing much about Unity (I know nothing) to use to compare SAC vs PPO as well as curiosity vs extrinsic rewards. This is just the ticket. And, like others, I'm interested in your approach to setting the hyperparameters. SAC also got up to a reward of 950+ in 750K steps, but with much slower initial progress (hyperparameters were from the Hallway example).
(BTW, interesting RL behavior: I set the multiple training areas too close together and the model learned that it could jump to adjacent platforms, which slowed down selecting the ramp-jumping behavior by a lot. The widely spaced training platforms were mostly trained at around 750,000 steps, while the closely spaced platforms were only at 80% with still-high SD at 2M steps. Typical of the accidental behaviors you can get in RL.)
@BotAcademyYT · 4 years ago
Thanks :) I'll also talk briefly about the differences and use cases for PPO and SAC in the next video. Yes, AI will almost always figure out ways that weren't intended. Good example with the close environments.
@tadsavage1611 · 3 years ago
I made an AI. Can't wait to see the rest.
@mickgerritsen1664 · 4 years ago
Thank you for these great videos about ML-Agents! They are really helping me a lot and I find it very enjoyable. I have a question about the trainer config: how and where do you specify that you want to use the BallAgent configuration in the trainer_config.yaml file? I am stuck on that one.
@BotAcademyYT · 4 years ago
Happy to hear, thanks :) It’s the name that you specified in the Behavior Parameters component.
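In the config file, that looks roughly like this (a minimal sketch in the newer 'behaviors' format shared elsewhere in the comments; older releases use a top-level section in trainer_config.yaml instead):

    behaviors:
      BallAgent:   # must match the Behavior Name set in the Behavior Parameters component
        trainer_type: ppo
        # ... remaining trainer settings ...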
@mickgerritsen1664 · 4 years ago
@@BotAcademyYT It worked. Thank you! Now I can also start the imitation learning. Really nice
@amalbtrs6255 · 4 years ago
Hello, I can't find the ml-agents config in the project folder (I downloaded it from GitHub) 😞
@isabelh140 · 3 years ago
Great video! Helped me out a lot with my current project and generally starting with ML in Unity... But I still don't get one thing: when you say 'train more', is it just more training steps? Or do you put your received nn file into the model slot and let it train again? What happens if you do this, does Unity just ignore it?
@BotAcademyYT · 3 years ago
Thanks! Yes, I mean more training steps. The model in the model slot will be ignored when training. But there is an argument that you can pass to specify a folder generated by a previous run. If you then also pass --resume, it'll resume the previous training run.
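For reference, resuming could look something like this on the command line (assuming the same config file and run-id as the original run):

    mlagents-learn config/trainer_config.yaml --run-id=test --resume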
@binyuwang6563 · 4 years ago
Thank you for your tutorial, it helps me a lot! I have some questions: how do I add visual observations to the agent and start training? How can we acquire the camera information in our C# scripts? Besides, it would be a great pleasure if you could make another video introducing how to train externally (my own RL algorithms instead of the default PPO/SAC in ML-Agents). Thanks again!
@BotAcademyYT · 4 years ago
Glad it helped you :) For visual observations, check out this video: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-5y3MD2QqV8E.html That's a good idea. I wrote it down and will make a video about it - might take some time though. You might want to check out this docs file: github.com/Unity-Technologies/ml-agents/blob/master/docs/Python-API.md
@binyuwang6563 · 4 years ago
@@BotAcademyYT I appreciate your help!
@BlackSheeeper · 4 years ago
top
@ReinkeDK · 4 years ago
Really good video. I am going to follow it one of these days. Could you please clarify what the 'branches' are and why sizes 2 and 3? If it was mentioned in video 1, I admit to skipping it, since I already had the environment up and running :)
@BotAcademyYT · 4 years ago
Thank you! It wasn't mentioned in the first video. Sure - you can think of a branch as a single decision the agent has to take every step. So if we have two branches, the agent has to make two decisions. For the first branch the agent can choose between zero and one (in the code we map the decision 0 to 'don't accelerate' and 1 to 'accelerate'). If we wanted the agent to also accelerate backwards, we would set the branch's size to 3 and then map the value 2 to the force -1 in the code (so that it accelerates backwards when choosing 2). The second branch has size 3 because the agent can:
0: don't move left or right
1: move in one direction
2: move in the other direction
So, for example, if the agent chooses the value 1 for the first branch and 0 for the second, it accelerates straight forward without changing its direction.
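To make that concrete, here is a commented sketch of the mapping (it mirrors the snippet from the video's BallAgent script, so rb and speed are fields of that script; the release-1 OnActionReceived(float[]) signature is assumed):

    public override void OnActionReceived(float[] vectorAction)
    {
        Vector3 controlSignal = Vector3.zero;
        // Branch 0 (size 2): 0 = don't accelerate, 1 = accelerate.
        controlSignal.x = vectorAction[0];
        // Branch 1 (size 3): 0 -> force 0, 1 -> force -1, 2 -> force +1.
        if (vectorAction[1] == 2)
            controlSignal.z = 1f;
        else
            controlSignal.z = -vectorAction[1];
        rb.AddForce(controlSignal * speed);
    }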
@ReinkeDK · 4 years ago
@@BotAcademyYT Thanks for your reply. Could I think of branches as a sort of output neuron in the network? How do I specify the size of the neural network, like the number of hidden layers and the number of neurons in them?
@BotAcademyYT · 4 years ago
Yes, exactly! The number of branches and their sizes determine the number of output neurons. You can specify both the number of hidden layers and the number of neurons via the config. If you head to the config of the BallAgent in my video, you can see those two config parameters.
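In the newer config format, those two parameters sit under network_settings (a hedged sketch; the key names match the config shared in the comments above, the values are just illustrative):

    network_settings:
      hidden_units: 128   # neurons per hidden layer
      num_layers: 2       # number of hidden layers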
@ReinkeDK · 4 years ago
@@BotAcademyYT Thanks a lot. Looking forward to getting time to play with it :) P.S. When is episode 3? ;-)
@BotAcademyYT · 4 years ago
You're welcome :) Might take up to 4 weeks because of ongoing semester projects.
@Mahdi-ug1qy · 3 years ago
Do you have the config file for release 11 of ML-Agents? In my case I am stuck at 18:54 with the following error on the command prompt: mlagents.trainers.exception.TrainerConfigError: The option default was specified in your YAML file, but is invalid. I think it has something to do with the difference in versions, but I can't seem to make it work.
@Mahdi-ug1qy · 3 years ago
nvm bro. fixed it manually. great video!
@bobingstern4448 · 3 years ago
I'm still confused about what the branches do in the Behavior script
@alvaromachucabrena97 · 4 years ago
Thanks for the video!! I have a question: how can I edit the parameters of the neural network in a Jupyter notebook? Where is the .py file where it is created?
@BotAcademyYT · 4 years ago
You can change some of the parameters like the number of layers and neurons per layer through the config - check out my third ML-Agents video where I talk about the parameters. I hope that answers your question, if it doesn't and you really want to change the network architecture on a deeper level, let me know.
@micabarshap · 4 years ago
First I want to thank you for your wonderful explanation and video. I have a question: in the first video (setup), the Unity project directory was inside the ml-agents release, so it makes sense that the virtual environment knows where the ml-agents code is in order to produce the nn brain. But when I open a new Unity project somewhere else on the computer, how does the Python interface (with the trainer_config.yaml) know where my Unity project with the new ml-agents is? Can you point to which file (or property) exactly makes the connection between my Python interface to TensorFlow and the project? Thanks
@BotAcademyYT · 4 years ago
Thanks :) Let me see if I understood the question correctly: you want to know how the ml-agents Python program that we start with 'mlagents-learn ...' is able to find the Unity project and connect with it? If that is the question, I think it gets access through the connection on port 5004 when we start the environment. I think that opens up a socket connection between Unity and the Python program under the hood. Please let me know if that wasn't the question.
@micabarshap · 4 years ago
@@BotAcademyYT Yes, you understood me correctly and answered, thanks. My problem is the message "Couldn't connect to trainer on port 5004 ...". It seems to me that it is a fault in my setup (video 1). I hope to solve it (is there a clue?). Thanks anyway. I'm eagerly looking forward to the next video about the parameters in trainer_config.yaml. Great job.
@BotAcademyYT · 4 years ago
You're not the first one with this issue:
1. Make sure that the behavior type is set to default, as seen in the video at 16:17.
2. Make sure to run the environment directly after running the mlagents-learn command - if you wait too long, it'll time out.
Does that help? Otherwise you can download my repository with the exact Unity environment as shown in the video via the link in the description. Might also help.
@thatguyplayz0nmobile471 · 3 years ago
What if I'm not using a Rigidbody? Would this still work OK if I just constantly move the agent's y coordinate every time it's ungrounded?
@BotAcademyYT · 3 years ago
Yes - the Rigidbody applies gravity to the agent.
@thatguyplayz0nmobile471 · 3 years ago
@@BotAcademyYT OK, thanks. I'm very new to ML/AI even though my first language was Python.
@MrTomateSalat · 4 years ago
First of all: please make the video about the config next. I spent a lot of time trying to understand those parameters but only got a few of them. And I would also like to know how I could tell whether a certain behavior might come from one of those parameters being wrong.
Next, I have a question: is it really necessary that the ball knows the velocity? I understand why you add it, but I actually would've expected the AI to figure this out from its position changes. I guess my expectation is wrong - because an AI which I tried seems to not be aware of that. But I'm still confused about what the AI really has to know and what it can figure out on its own.
Also kind of interesting is that you didn't add a negative reward for falling off the platform. I noticed that rewards have a big impact on the AI. I think it's planned, but a video about how to properly use rewards would of course also be very helpful.
@BotAcademyYT · 4 years ago
Thanks for your comment / thoughts!
1. As always, it depends. With a normal neural network, the agent doesn't remember its last states. It just acts based on its current observations. If we used a recurrent neural network (for example an LSTM), we wouldn't need the velocity because the agent can remember its previous position(s). There is a configuration option for this called "use_recurrent"; I haven't tried it on this example though.
2. Good point. The reason I didn't choose a negative reward when falling is that I think it is too risky at the start of training for the agent to actually jump. The agent will most likely miss the cube multiple times, resulting in a negative reward. So the agent might be OK with not moving at all to avoid the negative reward. What you could do though is give the agent a small negative reward every time step - but you should only add this if you want the agent to reach the cube as fast as possible. If that is not the case, keep it simple (always a good choice for the reward setting).
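A per-step penalty of that kind could look roughly like this (a hedged sketch; AddReward, SetReward and EndEpisode are standard Agent methods, but the penalty value and the reachedCube check are illustrative placeholders):

    public override void OnActionReceived(float[] vectorAction)
    {
        // ... movement code as in the video ...
        AddReward(-0.001f);   // small constant penalty: pushes the agent to finish quickly
        if (reachedCube)      // hypothetical success check, e.g. set in OnCollisionEnter
        {
            SetReward(1.0f);  // overwrites the step's accumulated reward with the success reward
            EndEpisode();
        }
    }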
@MrTomateSalat · 4 years ago
@@BotAcademyYT And thank you for your answers :-)
1. OK, that makes things clearer. I just looked up the docs ( github.com/Unity-Technologies/ml-agents/blob/master/docs/Training-Configuration-File.md#memory-enhanced-agents-using-recurrent-neural-networks ). But am I right that this is only related to actions? I think my confusion came from the decision requester. I expected that the agent would collect observations for e.g. 5 steps and make its decision based on that. I hadn't distinguished between observations and actions, so I just thought: actions are also memorized. But at least now I think I understand. Using use_recurrent seems to be expensive; just passing the velocity seems to be the better approach.
2. Well, at least here it seems that I was on the right track in the end (I guess the AI trained me more than vice versa).
Again: thank you very much for your answers. This topic is really interesting and really, really complex. I'm thankful for every bit of information I can get :-)
@BotAcademyYT · 4 years ago
You're welcome :) 1. I think it is also related to observations, because from the theoretical / mathematical standpoint the agent never receives a sequence, just the current observations (9 in this case). So I assume that when I set the decision requester to 5, the agent will receive the observations from the frame before it has to take an action again, and not from the last 5 frames. Yes, it's indeed quite complex and hard to find good resources about.
@MrTomateSalat · 4 years ago
@@BotAcademyYT I think you are right. I checked the docs and also other YT videos which I had previously seen. I actually had in mind that the guy said "set it to 5 and it will collect until step 5", but he didn't - I had it completely wrong in mind. I think what I wanted is "Vector Observation / Stacked Vectors". Actually I wanted to stop working on my latest AI (for days I haven't done any game dev anymore - only AI), but now the project is open again and I have to try things out ^^
@BotAcademyYT · 4 years ago
Stacked observations sound like what you meant. I haven't tried them out and don't know how they work in connection with the decision requester, to be honest - added it to my list of things to try out. Enjoy working on your project!
@salmagabrabdelghanygabr174 · 4 years ago
I am trying to make my own RL environment, but I am still confused about what would be considered my observations. Keep in mind what I want to make: my agent will move up, down, left, right, in, or out (so 6 actions in total) according to what the RL tells it. Then it gets a bad reward if it gets hit by the walls, but if it doesn't get hit by anything it gets a good reward, and an even better reward if it collides with the good colliders that I have placed.
@BotAcademyYT · 4 years ago
I would use Raycasts as observations. The ML-Agents Hummingbirds course on the Unity Learn platform, which is still free at the moment, covers Raycasts.
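Ray observations are usually added with the RayPerceptionSensorComponent3D from the ML-Agents package, normally configured in the Inspector rather than in code. A rough sketch, assuming the release-1 property names (verify them against your version):

    using System.Collections.Generic;
    using Unity.MLAgents.Sensors;
    using UnityEngine;

    public class RaySensorSetup : MonoBehaviour
    {
        void Awake()
        {
            // Adds a ray sensor; what each ray hits becomes part of the observations.
            var rays = gameObject.AddComponent<RayPerceptionSensorComponent3D>();
            rays.RaysPerDirection = 3;   // rays fanning out on each side of the center ray
            rays.RayLength = 20f;        // how far each ray reaches
            rays.DetectableTags = new List<string> { "wall", "target" };  // tags the rays can report
        }
    }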
@accelwatchingfun9560 · 4 years ago
Hello, very very very cool video ^^ Love it. But I have a few questions (I am a total beginner... as well as at coding XD):
1. Could you do a video about an AI with Raycasts?
2. How could I make it so the ball can also roll backwards?
3. How did you learn to use ML-Agents?
4. How complex can you code with ML-Agents?
5. Could you maybe give tips for a total beginner? Like books you recommend or something like that?
@BotAcademyYT · 4 years ago
Hey, thanks!
1. Yes, I will create a video about that at some point this year.
2. You need to change the branch size from 2 to 3 (in the Unity Inspector) and then adjust the code like I did for left and right. That should be it.
3. I just taught myself while creating those videos (no prior experience, to be honest). I think I can understand it so quickly because of my computer science background.
4. You can go as complex as you like :)
5. If you want to take this seriously, I'd start with the basics (learn C#, learn the Unity basics, and then start looking at ML-Agents). When I first started with coding 5 years ago, I tried Android app development and was quickly procrastinating because it was all so complex (like ML-Agents for you, I guess). Now I know that Android app development 5 years ago required good Java knowledge, so I should've started with Java instead of Android. There are great videos about C# and about Unity on YouTube and other websites. Start there, keep going step by step, and you'll do great.
@CreepyfishBOY · 3 years ago
Hello. I am new to Unity ML. I am also confused about observations. I have a different project, and when I set my observations up similarly to yours, I get a NullReferenceException. Any idea how to fix this? Also, great video by the way!
@BotAcademyYT · 3 years ago
Hey, thanks! Not sure why this happens. I would need to see the observation space and code snippet. If you have Discord, you can post it there so that the other users and I can take a look.
@Joshua-dl3ns · 3 years ago
I'm getting an error in the BallAgent logic script while importing UnityEngine.MLAgents, saying MLAgents doesn't exist in the namespace UnityEngine.
@BotAcademyYT · 3 years ago
That’s weird. Make sure that you wrote it correctly and that you have installed ml-agents in Unity. If it’s still not working, please join the discord server and post the problem there with a few screenshots.
@NbaLoveTurkiye · 4 years ago
Thanks for the great tut, but can you explain what this part is doing:

    Vector3 controlSignal = Vector3.zero;
    controlSignal.x = vectorAction[0];
    if (vectorAction[1] == 2)
    {
        controlSignal.z = 1;
    }
    else
    {
        controlSignal.z = -vectorAction[1];
    }
    if (this.transform.localPosition.x < 8.5)
    {
        rb.AddForce(controlSignal * speed);
    }
@BotAcademyYT · 4 years ago
Sure. We get two values from the agent (vectorAction[0] and vectorAction[1]). For the first branch (vectorAction[0]), the agent can choose between zero and one (in the code we map the decision 0 to 'don't accelerate', controlSignal.x = 0, and 1 to 'accelerate', controlSignal.x = 1). If we wanted the agent to accelerate backwards, we would set the branch size to 3 and then map the value 2 to the force -1 in the code (so that it accelerates backwards when choosing 2). The second branch has size 3 because the agent can:
0: don't move left or right
1: move in one direction
2: move in the other direction
We need to map those values (0, 1, 2) to valid forces (-1, 0, 1). That's what's happening in the if/else before the rb.AddForce line. It's important to understand that the agent just chooses random numbers (in the allowed range) at the start of training, and we need to move the agent based on those numbers (that's what this part of the code is doing). That way the agent can learn what happens to it when choosing a specific number. Hope I could make it clear; if not, feel free to ask again.
@yoosunma5472 · 4 years ago
@@BotAcademyYT Thanks for this great tutorial. It is very useful for an ML-Agents beginner like me. Can I ask you for more detail about this part? I can't understand why the first branch (vectorAction[0]) can choose between zero and one, and in the code I don't see where the decision 0 maps to 'don't accelerate'.
@tropictank69 · 4 years ago
For some reason the ML-Agents and UnityEngine libraries in my C# script are not being recognized. Any idea why?
@BotAcademyYT · 4 years ago
I've only seen this so far when the Unity extension was not added to Visual Studio (see video at around 1:35). If it's still not working, I'd try reinstalling.
@nicolaszulaica6201 · 4 years ago
Have you added the packages? What do you mean by 'not recognized'?
-> You get a console error when running the scripts: maybe you forgot the 'using' for ML-Agents in the script.
-> You don't get autocompletion: maybe you don't have the Unity package for Visual Studio.
Or maybe there is another error in your script. Are you using a self-made one?
@rohitk6817 · 4 years ago
How can I use Python instead of C# to train the model?
@BotAcademyYT · 4 years ago
You can't. The logic (and all other logic for your Unity environment) needs to be C#. With Python you can only change the reinforcement learning algorithms implemented in the mlagents Python libraries (or create & add new algorithms).
@joecanino2412 · 4 years ago
Can somebody check this??? When the ball jumps the ramp and falls, it is not reset to its starting position. The ball gets stuck in a loop... the loop keeps updating the Target position while the ball falls slowly. I have revised my code many times and I can't find what's causing the problem or bug. Maybe I'm missing something... Any help here will be appreciated.
@BotAcademyYT · 4 years ago
I would recommend downloading the version that I created in the video (link is in the description) and trying whether it works with it. If so, try comparing it with your version to find the problem.
@joecanino2412 · 4 years ago
@@BotAcademyYT Thanks, I found what the problem was... In OnEpisodeBegin() I mistakenly referenced my target's (transform) local position instead of the (transform) local position of the ball itself, causing the ball to have an incorrect starting local position. The OnEpisodeBegin method kept updating the ball out of position (outside the floor), so it kept falling down while updating the Target position rapidly at the same time... Thanks for the quick reply.
@micabarshap · 4 years ago
Does it matter if I use a plain Python virtual environment and not Conda?
@BotAcademyYT · 4 years ago
It shouldn't matter if you're familiar with Python and know what you're doing. The steps are basically the same.
@ewwkl7279 · 3 years ago
It's a great tutorial, thank you so much. I followed almost every step in the tutorial except the configuration part. I set the Behavior Name to 'BallAgent', replaced my trainer_config.yaml with yours, and ran the command 'mlagents-learn config/trainer_config.yaml --run-id=test --train'. I got a TrainerConfigError: The option default was specified in your YAML file, but is invalid. When I just type 'mlagents-learn' it works, but I don't think that's optimal. Please guide me on how to fix it or what I might have done wrong.
@BotAcademyYT · 3 years ago
Seems like you're using a different version than I did. So double-check that you used Release 1 with the correct ml-agents Unity version (the one shown in the video). If you're using a more modern version, check the comment section; people have shared an updated config file there.
@planetdilien2932 · 4 years ago
Instead of writing the script couldn't you have attached the camera to the BallAgent?
@BotAcademyYT · 4 years ago
That's a good point and actually what I did first. But since the ball is rolling, the camera rolls too, which is not really what we want 😅
@dhyeythumar · 4 years ago
I have been working with ML-Agents for a week now and I have a question: I want to display the step count (which is output on the command line) on the game screen. For this I used the value given by the Academy code, Academy.Instance.TotalStepCount, but while training this value doesn't match the value output in cmd. Do you have any idea why this is happening, or am I accessing the wrong value?
@BotAcademyYT · 4 years ago
Very good question. I tried it myself and had the same issue. I'll ask the ML-Agents team about this and will let you know their answer.
@dhyeythumar · 4 years ago
@@BotAcademyYT Are you going to open a GitHub issue on their repo? Because I think many people would have the same question. And thanks for your videos; they helped me set up ml-agents smoothly.
@BotAcademyYT · 4 years ago
Thank you, glad that it helped you :) Yes, I tried to open an issue but was redirected to the ML-Agents forum when I selected "Question". I think they just want bug reports & feature requests as GitHub issues, and I am not 100% sure it is really a bug. So I asked in the forum.
@dhyeythumar · 4 years ago
@@BotAcademyYT Let's wait till the reply comes from their end.
@BotAcademyYT · 4 years ago
No answer so far. I just opened an issue on GitHub to get feedback from the dev guys: github.com/Unity-Technologies/ml-agents/issues/4125
@ArunKumar-sg6jf · 3 years ago
I want to write my own code for ML-Agents. How do I do that?
@BotAcademyYT · 3 years ago
You mean without using their PPO / SAC algorithms? If so: they use TensorFlow, so you can write your own algorithms and connect them via the ml-agents Python connector.
@xingranruan708 · 3 years ago
Hi! Thanks for your video again. I got one error when I press the play button: "The object of type 'Transform' has been destroyed but you are still trying to access it. Your script should either check if it is null or you should not destroy the object." Do you know how to fix this problem? P.S. My Unity version is 2019.4.25f1; ML-Agents 1.00; ProBuilder 4.23.
@xingranruan708 · 3 years ago
P.P.S.: In my Unity, the target can move one step, but after that 'Bake paused in play mode' is displayed in the bottom right corner and everything shuts down. Then, after I click the play button several times, that error is displayed in the console.
@BotAcademyYT · 3 years ago
I've never had this issue tbh. Can you double-check that everything is set up correctly?
@xingranruan708 · 3 years ago
@@BotAcademyYT Hi, I checked again and built another BallJump project, but it still does not work. I uploaded my project to GitHub: github.com/XRR422/BallAgent-Jump.git; could you help me solve it, if possible? Save me, please. Thank you!!
@xingranruan708 · 3 years ago
@@BotAcademyYT Hi, I rebuilt everything on my Windows desktop again. It fails again and displays 'Couldn't connect to trainer on port 5004 using API 1.0.0'. Any solution for this error? Thanks, mate!!
@BotAcademyYT · 3 years ago
@@xingranruan708 API version 1.0.0 is different from the one I used in the video, if I remember correctly. Please make sure that the mlagents Python version and the ml-agents Unity version are compatible.
@jinhaoli5555 · 2 years ago
Thank you for your tutorial, it helps me a lot! I have a question about this error:
mlagents.trainers.exception.TrainerConfigError: Trainer config must have either a "default" section, or a section for the brain name (RollerBall). See config/trainer_config.yaml for an example.
@alvaromachucabrena97 · 3 years ago
Hello! I am working on an FPS video game and I want my enemies to chase me. My enemies would be my agents; how can I modify your code so that my enemies train to follow me?
@BotAcademyYT · 3 years ago
You'd need to modify it quite a bit. This video might help you: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-5OVf3FBHzHI.html
@alvaromachucabrena97 · 3 years ago
@@BotAcademyYT Thank you very much. I am making a video game with ML-Agents for my thesis. I have one last question: what type of neural network does ML-Agents use? A convolutional neural network, a recurrent neural network?
@rrkatamakata7874 · 3 years ago
Thanks a lot, but can we use Python to code the logic?
@BotAcademyYT · 3 years ago
The agent logic that I showed is only possible in C#. If you want to change the underlying reinforcement learning algorithms, you can do it in Python through the ml-agents Python connector.
@rrkatamakata7874 · 3 years ago
@@BotAcademyYT Do you have any resources for that? And thanks
@BotAcademyYT · 3 years ago
@@rrkatamakata7874 this should be a good starting point: github.com/Unity-Technologies/ml-agents/blob/master/docs/Python-API.md
@rrkatamakata7874 · 3 years ago
@@BotAcademyYT Thank you so much, you are a wonderful person
@planetdilien2932 · 4 years ago
Next I'd like to see AIs competing: instead of 20 worlds with one AI each, you would have one world with AIs interacting in the same step.
@BotAcademyYT · 4 years ago
Thanks for letting me know. I'll make a video about it - might not be the next one though. Currently working on a video where I go through the configuration parameters.
@rushipatel5085 · 4 years ago
nice get me a coffee
@carterburtis9686 · 3 years ago
Ur goons kill my laptop
@HollyRivay · 4 years ago
I have a big problem and I don't know how to solve it. Unity version: 2019.3.15f1 (HDRP project); ml-agents version: Release 2; Python 3.7; com.unity.ml-agents@1.0.2; Anaconda version: latest. Error: "Couldn't connect to trainer on port 5004 using API version 1.0.0. Will perform inference instead." The firewall is disabled, and port 5004 is opened in the router; the provider does not block it. I did everything according to your instructions from the first video (I did not install the graphical shell for Anaconda). I also tried to do everything according to the official instructions, but again it did not work. What might be the problem??? Screenshot: imgur.com/2h4pY0p I'm from Russia, so I watch your videos using subtitles and read the official tutorials using translations. Maybe I did something wrong, but I repeated everything in your video exactly.
@BotAcademyYT · 4 years ago
There are only two things I can think of:
1. Make sure that the behavior type is set to default, as seen in the video at 16:17.
2. Make sure to run the environment directly after running the mlagents-learn command - if you wait too long, it'll time out.
If you are 100% sure that 1 and 2 are not solving the issue, you could try using the exact versions that I used, that is: Python 3.7, Release 1, com.unity.ml-agents@1.0.0
@HollyRivay · 4 years ago
@@BotAcademyYT Thank you for the answer; points 1 and 2 I already checked. I will try Release 1. I really hope that will help, because ml-agents is really a very cool thing.
@BotAcademyYT · 4 years ago
It definitely is :) If changing to release 1 isn't working, you could try downloading my repository (link in the description) with the exact setup created in the video, import it and try again. That way we can make sure that you are using the exact same environment & agent.
@HollyRivay · 4 years ago
@@BotAcademyYT This is some kind of magic; it really worked. But why? Well, thank you. Next I will try to figure out what my mistake was. (I used your "BallJump-master".)
@BotAcademyYT · 4 years ago
Good that it is working with my code, so your setup is correct. I would check the logic code first, to see if it is the same or if there is a mistake. Let me know if you figure out what it was :)
@jaydevsolanki1047 · 4 years ago
This requires a whole lot of maths. I quit. LOL
@SDFTDusername · 4 years ago
a