
How To Use MACHINE LEARNING In Unity MLAgents Setup & Basic Environment - Pellet Grabber Tutorial #1 

Jason Builds
17K views

Published: 4 Oct 2024

Comments: 156
@emirsahin9102 9 months ago
Between 44:10 and 44:15 you went off-screen to find the issue causing the agent to fall, the "isTrigger" being checked on the agent instead of the pellet, and while you were off-screen you also changed the "Space Size" under "Behavior Parameters" of the agent from 1 to 6, which fixed the "More observations (6) made than vector observation size (1). The observations will be truncated." warning. I had to re-watch a few times to figure out why I was getting the warning and you weren't. Great tutorial, thanks!
@lucasmoulay9301 9 months ago
thank you mate 👍
@_Jason_Builds 9 months ago
Oops, my bad 😅. Any changes I make off camera I try to point out, but I must have missed those, thank you for pointing them out!
@emirsahin9102 9 months ago
@@_Jason_Builds You pointed it out at 48:48 🤣 I guess I just had to be patient.
@_Jason_Builds 9 months ago
@@emirsahin9102 Better late than never 😅
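For anyone hitting the same "More observations (6) made than vector observation size (1)" warning discussed above: the Space Size under Behavior Parameters has to match however many floats CollectObservations adds. A minimal sketch of how the count reaches 6 (the pellet field name and the exact observations are assumptions for illustration, not the video's exact code):

    using Unity.MLAgents;
    using Unity.MLAgents.Sensors;
    using UnityEngine;

    public class AgentController : Agent
    {
        [SerializeField] private Transform pellet; // assumed reference to the pellet

        public override void CollectObservations(VectorSensor sensor)
        {
            // Each Vector3 adds 3 floats, so 3 + 3 = 6 total observations.
            // Behavior Parameters -> Vector Observation -> Space Size must be 6.
            sensor.AddObservation(transform.localPosition);
            sensor.AddObservation(pellet.localPosition);
        }
    }

If the warning appears, count the floats added here and set Space Size to that number in the Inspector.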
@HappyDucklingAndFriends 1 month ago
this is the first mlagents tutorial i got to work! 10/10
@_Jason_Builds 1 month ago
That's awesome! I'm glad it worked for you!
@dhengey 5 months ago
Bruv you've just inspired me to take this up. And I have absolutely zero coding experience. Keep making these videos, you might not realise it, but you're making an impact in many lives.. thanks
@_Jason_Builds 5 months ago
That's awesome! I'm so glad to hear that! I'm sure you're going to make awesome things!
@appl3s0ju 1 month ago
Excellent tutorial. Thank you!
@_Jason_Builds 1 month ago
I'm glad it worked!
@aeroperea 7 months ago
Very useful tutorial. The Python part was hell, but after I got all of the stuff working it's been really cool learning how to use this. Thank you!
@_Jason_Builds 7 months ago
I'm glad you got it all working!
@neilfosteronly 7 months ago
Yes, that's the worst part if you don't normally use Python. I have one setup that I just keep using and don't want to set up another, as I remember it was a pain to set up.
@Minoraties_In_My_WaterPark 1 month ago
Bro is doing God's work out here, best Mlagents tutorial rn, he doesn't skip steps like other tutorials. Great vid!!
@_Jason_Builds 1 month ago
@@Minoraties_In_My_WaterPark I find it very frustrating when I, myself, follow tutorials and there are things done that aren't explained or shown (I've done that by accident once when debugging mid-recording, oops 🫢), so I try to do my best to show every step and explain how to perform each one 😁. I'm glad you enjoyed the video!
@ShinichiKudoQatnip 9 months ago
The latest version of python you can use with the latest mlagents build is 3.10.12, but not 3.10.13 or anything over as of today.
@PingsGolf 8 months ago
@imchase8600 1 month ago
Hey man, amazing tutorial! Worked with no issues on the first try. You shouldn't feel like you need to turn off the autocomplete though, it's very useful and helps you spell. But amazing tutorial!
@_Jason_Builds 1 month ago
@imchase8600 Awesome! I'm glad it worked! I turned off autocomplete when I first got VS because I was taught using Unix/Putty (No autocomplete there) and I found it annoying 😅. I have autocomplete now though, and it's both a blessing and a curse at times haha
@EliaE1337 3 months ago
It's a very, very good video, thank you so much. I don't know how to thank you, but I hope this is enough. THANKS!
@_Jason_Builds 3 months ago
A thank you is more than enough! I'm glad the video helped you out!
@brunogattai9262 5 months ago
I tried to follow Unity's official getting started guide to start with mlagents and ended up wasting 3 days trying to figure out what was going on because I got several errors when installing the python libraries. It worked like a charm for the first time I tried to do so following this video. I can't thank you enough for that, I hope you have so much success because you deserve it!
@_Jason_Builds 5 months ago
I'm glad it worked out! I don't believe following this video will get the latest MLAgents version anymore (It requires a different python version and I don't believe it's available through the package manager yet), but it still works great and it's what I currently use.
@theashbot4097 10 months ago
Cool! Can't wait for more!
@_Jason_Builds 10 months ago
Your initial starter video helped me so much with getting started in MLAgents, I just wanted to thank you for it!
@theashbot4097 10 months ago
@@_Jason_Builds Hey no problem! I was going to make more videos but I hurt my left wrist and type slowly. I have gotten better at typing faster and I am going to plan another video soon!
@_Jason_Builds 10 months ago
@@theashbot4097 ouch, I hope your wrist is better soon and I can't wait to see what you do next! I've been so busy that I haven't had much time to work on my next project
@eliasnaha4777 7 months ago
What a great video, thank you a lot. I was about to give up, but this video helped me a ton!
@_Jason_Builds 7 months ago
I'm glad it helped!
@jamorantgrizz 4 months ago
Amazing video!! Thank you!!
@OlekandrAntonesku 1 day ago
Maybe I missed something, but how did you stop the training? Because if I stop it from Unity, the console keeps waiting until a UnityTimeOutException. EDIT: I found it, "Ctrl" + "C".
@stevenpike7857 11 months ago
I finally got ML Agents to work. First I had to install Python 3.10.12 specifically. Then when I installed ml-agents - I got a wheel error on pypy, so I had to research that and drop it down to a lower version. It was a mess to get working. I used the version they just released this month. Installing ML Agents on the Unity side is a piece of cake. Getting it to work on the Python side is a nightmare. I wish they put more resources into this to make it more contained and automated within Unity. I really love ML Agents.
@_Jason_Builds 11 months ago
I didn't realize they released a new version this month, I'm using release 20 currently which must be the previous version. I'm glad you got the new version working! I'll look into the new version this week to update my instructions
@_Jason_Builds 11 months ago
I was experimenting with getting Release 21 downloaded today, and I believe the issues you were facing stem from the Python 3.10.12 version as Python 3.10.3 works for me and, according to their documentation, pip installs Release 21 (1.0.0). I also noticed that Python 3.10.12 doesn't have a supported installer anymore, so perhaps it isn't officially supported in any capacity. Using 3.10.3 I was able to follow the same steps I used here in the video. 3.11.6 didn't work at all with the MLAgents package, so I believe it's a very small selection of Python releases that are supported and that the documentation isn't correct when stating what versions are in fact supported. The Pytorch version they recommended as well didn't seem to be compatible with the python versions I tried.
@stevenpike7857 11 months ago
@@_Jason_Builds Correct, there isn't an installer. I had to use Anaconda and a terminal command to specifically install it. It's the exact version that Unity recommends in their documentation. Other versions would produce a host of errors as soon as I tried to install ML Agents.
@adenzu 11 months ago
Yeah, it's a headache to get it to work. I've been trying to get it working on my machine for the last few days. Liked the video and will check up on it tomorrow.
@_Jason_Builds 11 months ago
@@adenzu Are you trying to set up the new version of MLAgents (Release 21)? If so, this video shows how to set up the previous version (Release 20). I don't believe Release 21 is installed the same way as I show here, as Unity only allowed me to import package 2.0.1 while the documentation states the new version's identity is 3.0.0. Using a supported Python version does seem to find the correct python packages at least. I haven't installed version 3.0.0 for Unity myself yet, but I may look into updating in the near future.
@inquisitortr7930 8 months ago
excellent tutorial, thanks man!
@diterpiece9889 3 months ago
Really great tutorial, thank you!
@minnybro2273 11 months ago
Nice tutorial, helped a ton
@_Jason_Builds 11 months ago
Glad it helped!
@mehrdaddowlatabadi2319 3 months ago
good teacher!
@tutorialsandgaminghelp288 10 months ago
You're the first person I've ever seen who prefers to have autocomplete turned off. Here in the web dev world some people, myself included, choose to go to the extra length of having AI autocomplete as well haha. Just thought that was kinda funny. Anyways, great video. I learned a lot. You definitely earned a sub
@_Jason_Builds 10 months ago
Haha, I've noticed all my friends and coworkers love autocomplete. I just don't like how much it interferes with my ability to type what I want how I want. It is very nice to have though in some situations, such as with these long function/method names
@thekiler6575 3 months ago
More observations (6) made than vector observation size (1). The observations will be truncated. UnityEngine.Debug:LogWarningFormat (string,object[]) Help, what should I do pls? I love you, but help me pls pls
@_Jason_Builds 3 months ago
Just from this it looks like you need to change the vector observation size of your agent. That can be done in the Behavior Parameters, in the Inspector. At 44:12 I changed the observation size to 6, from 1. I forgot to mention that change in the video itself
@MaryYush 8 months ago
Can someone please help me figure out why I can't move my agent manually?
@nrgjkrnjgknrjkgbhtr 8 months ago
same prob
@Bochnik_Loaf 8 months ago
Facts me too...help
@Pollo-i9z 8 months ago
There is some issue in the newest version of mlagents and the Heuristic function that it calls. Hope that @_Jason_Builds can help us
@Penguin1014w 8 months ago
same😢
@AdityaRoyChowdhury-v7i 8 months ago
same for me tooo
@moonknight7564 2 months ago
I found that during training the agent stays in place, neither gaining nor losing points, and the mean reward decreases over time. How do I fix this??
@_Jason_Builds 2 months ago
Typically when that happens you need to adjust the reward/punishment ratio (Or remove punishments entirely). If it's not willing to move at all, that's because it found it is more likely to lose points if it moves than it is to gain. Try removing the timer and punishments and see what happens if given enough time. Through my second project (Zombie Shooter) I learned that the agent learns just as well, if not better, when no punishments are given out as the 'punishment' of not getting the reward is enough to motivate the agent. If this doesn't work, you can alter the agent's curiosity in the yaml file (I don't recall which parameter that is), to make it more/less likely to try new things. I hope this helped! Let me know if you have any other questions/comments/concerns/need anything clarified 😁
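A minimal sketch of the reward-only idea described in the reply above, assuming the pellet and walls are identified by tags; the tag names and reward value are illustrative, not necessarily what the video uses:

    using Unity.MLAgents;
    using UnityEngine;

    public class PelletRewardAgent : Agent
    {
        private void OnTriggerEnter(Collider other)
        {
            if (other.CompareTag("Pellet"))        // assumed tag
            {
                AddReward(1f);                     // only positive reward
                EndEpisode();
            }
            else if (other.CompareTag("Wall"))     // assumed tag
            {
                EndEpisode();                      // no penalty: missing the reward is enough
            }
        }
    }

With no negative rewards, an agent that refuses to move simply never scores, which tends to push it toward exploring instead of hiding from punishments.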
@ehsanmiraali9751 28 days ago
Hi. Thanks for the awesome video. But I have a problem: it runs on CPU. Can you explain how to make it run on GPU step by step? Thank you. ❤
@_Jason_Builds 27 days ago
I did this once before, but I didn't notice any significant difference, so I didn't keep a list of what steps I took 😅. I believe you need to install a different version of PyTorch along with the NVIDIA CUDA toolkit. I haven't followed this video myself, but I believe it should have the steps you're looking for ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-r7Am-ZGMef8.htmlsi=j12pebFEYkkz-fTH
@melonman1252 7 months ago
My agents are learning to stay still, not sure what you did to fix that issue, could you help me out?
@_Jason_Builds 7 months ago
I have had this issue when I didn't have a good balance of rewards and punishments as the agent would prefer to stay still with 0 points than fail with negative points. I don't know how your reward system works (Maybe it's the same as I used here), but I'd recommend either tweaking the reward system (You don't necessarily need punishments) or encouraging the agent's curiosity to explore (I haven't messed with this myself, but I recall reading that you can alter this value, possibly in the training YAML). Also, increasing training times may allow your agent to correct this behavior on its own. Later in the series, I implement a timer to encourage my agent to move faster, perhaps something like that may help as it will be punished for staying still too long
@melonman1252 7 months ago
@@_Jason_Builds My issue was actually the observation space size - I'd set it to 6 but it wouldn't let me, it was an issue with my config file!
@_Jason_Builds 7 months ago
@@melonman1252 Oh interesting! I'm glad you got it working!
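For the timer idea mentioned a couple of replies up, one common pattern is a small per-step penalty combined with the agent's Max Step limit, so standing still slowly bleeds reward and long episodes get cut off. This is only a sketch under those assumptions, not the exact implementation from later in the series:

    using Unity.MLAgents;
    using Unity.MLAgents.Actuators;

    public class TimedAgent : Agent
    {
        public override void Initialize()
        {
            MaxStep = 1000; // assumed value; can also be set on the Agent in the Inspector
        }

        public override void OnActionReceived(ActionBuffers actions)
        {
            // Tiny existential penalty each decision so dawdling costs a little.
            AddReward(-1f / MaxStep);
            // ... movement code for the received actions would go here ...
        }
    }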
@mcman407 19 days ago
Learn how to undo code deletions. This will let you write your code, go through and delete it in steps, and then just undo the deletes and talk about the code that was just added. The quality of your tutorials will increase because you can spend more time talking about the code you just pasted in and less time apologizing for mistakes made while live coding. Just food for thought; other than that, great tutorial.
@keyhaven8151 4 months ago
Wow, thank you very much for your video explanation. The simultaneous training of multiple agents in Unity amazed me! I want to train drones in UE4. Can we achieve the training of multiple drones simultaneously in multiple Envs as shown in your video? The training efficiency of using a single drone for a single Env is very low, and it is difficult to converge in the scenario of training randomized targets. May I ask if there is any way you can help me?
@_Jason_Builds 4 months ago
I recall someone mentioning they were working on a drone project in the comments of one of my videos a while back and I believe they were having success if I recall correctly. You should definitely be able to train multiple drones as I don't see a reason for that not being possible. You may need to tweak the training parameters as it is a more advanced task than what I did here. If you're using physics to fly (i.e. not just floating in the air but actually using vectors and simulated air movement) you will want to set the time-scale to 1 so it runs in real time. It will make training take forever, but it will (Should) avoid physics getting wonky
@keyhaven8151 4 months ago
@@_Jason_Builds AirSim controls drones in UE through IP, which may mean that I can only control one drone through one of my Python programs and display it in UE, unlike in Unity where multiple "Objects" can be trained simultaneously (which are objects in Unity). I find it difficult to solve this problem.
@_Jason_Builds 4 months ago
@@keyhaven8151 Ah, I see. I'm not sure if I can provide any useful input if you're doing this in UE (I didn't catch that in your initial comment, my bad) as I haven't done much with UE. I believe they have something similar to MLAgents, but I haven't ever used or looked into it myself
@keyhaven8151 4 months ago
@@_Jason_Builds Thank you very much for your answer. I have been researching it recently, but I have not yet found any similar methods
@flyingdumpling750 7 months ago
The most comprehensible tutorial I've seen, especially the part on preparing the environment, helped a lot. By the way, could I repost this video to a Chinese website called bilibili? I will credit your ID and link to this video. :)
@_Jason_Builds 7 months ago
Sure! And I'm glad I was able to help 😁
@barocon 7 months ago
I also had lots of trouble with Python. First of all, 3.10.12 no longer had an installer, so I installed 3.10.11. Then I had numpy errors which troubled me for hours. To fix those I installed a different numpy with pip install --upgrade numpy==1.23.3 and then edited the setup.py in ml-agents and ml-agents-envs to support that numpy version.
@_Jason_Builds 7 months ago
I've been hearing that setting up the new version of MLAgents is tricky, I'm glad you got it working though!
@fant227 4 months ago
Please help me, when I type "mlagents-learn --force" into the command line, I get this error: OSError: [WinError 126] The specified module could not be found. Error loading "c:\users\user\game\venv\lib\site-packages\torch\lib\shm.dll" or one of its dependencies. Please tell me, what can I do about it?
@_Jason_Builds 4 months ago
I'm not sure, but are all of your libraries up-to-date and working?
@fant227 4 months ago
@@_Jason_Builds But I downloaded all the libraries like in your video and updated them; for some reason it doesn't work
@_Jason_Builds 4 months ago
@@fant227 Looking at the error message, it looks like it's either an issue with the virtual environment or with a .dll from the torch library. Make sure that .dll isn't missing, if it isn't missing you could try setting everything up again in a new virtual environment or (Perhaps better to try this first) try using one of the examples you can download from the MLAgents github and see if that works
@fant227 4 months ago
@@_Jason_Builds if you mean the file shm.dll then it is present in this folder, I checked it myself, but for some reason it still does not work
@motakku3423 5 months ago
Hey bro, I'm trying to get into the reinforcement learning area and I really like how good you are at this!! Can you give me some tips to help me become an expert at this? I have already made some ML, DP and RL projects :D (sorry, non-English speaker)
@_Jason_Builds 5 months ago
I'm by no means an expert (I'm fairly new, this was my first project with reinforcement learning actually) and in my opinion what I'm doing is more the user side of machine learning (That opinion may change as I get more experience/exposure), so you may have more in-depth knowledge than I do already considering you've done projects! The only three things I can recommend with certainty that will help with the learning process are: 1. Baby Steps. Don't do too much at once, break everything down into bite-sized pieces. I have broken the videos down to show quite literally the process I took (Albeit cleaner lol) with each small step I did. This will avoid not only becoming overwhelmed, but it will also allow you to focus solely on each individual critical component. 2. Have fun with it. Do projects that you're interested in or turn several smaller projects into one bigger project (Following the baby steps rule). 3. I'm completely blanking on what the third recommendation was but I think the first two covered it all pretty much 😅. Using MLAgents like I am here allows me to become familiar with the different parameters and weights that I can adjust/use with reinforcement learning without needing to reinvent the wheel (Which I want to do at some point, not literally though lol) and so it can be a great starting place outside of doing small well-known beginner ML projects
@motakku3423 5 months ago
@@_Jason_Builds Thanks man, I am exploring more about RL because there is really sooo much you can do with it, and after seeing lots of theory, it's time to start developing such skills. You are really helpful and already a really great dev! I have been using Unity for around 1 week and I hope to learn more with your tutorials
@miplo6320 2 months ago
23:00 Behavior parameters are not showing, could you please help me?
@_Jason_Builds 2 months ago
Did you drag your agent controller script, with Unity.MLAgents, onto the agent? If so, and MLAgents is installed properly, you should be able to add it manually by clicking the Add Component button and searching for behavior parameters
@miplo6320 2 months ago
It helped, thanks
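For anyone else whose Behavior Parameters component doesn't appear: it is normally added for you the moment a script that inherits from Agent is attached, because the Agent base class requires that component (via RequireComponent, as far as I can tell). If the script doesn't compile, or the ML-Agents package isn't actually installed, Unity won't attach anything. A minimal sketch (the class name is just an example):

    using Unity.MLAgents;
    using UnityEngine;

    public class MyPelletAgent : Agent   // inheriting from Agent pulls in Behavior Parameters
    {
        public override void OnEpisodeBegin()
        {
            // Reset the agent at the start of each episode.
            transform.localPosition = Vector3.zero;
        }
    }

If this compiles and Behavior Parameters still doesn't show up, adding it manually via Add Component (as suggested above) is the fallback.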
@iaxsgames6029 2 months ago
Hey, I don't know if you will see this because this video was posted 8 months ago, but that's beside the point. I have been following another person's tutorials but I have run into an issue when testing my MLAgent. While it is training it stops after maybe 5 minutes, and I believe that is because of my max_steps (the max steps shown in the command prompt when you run the code and it gives you the stats before it starts running). I see at 47:04 your max steps are 500,000, while mine are 50,000. I was wondering if you knew a way to change that? (Edit) Can you make a Discord so people don't have to talk through comments?
@_Jason_Builds 2 months ago
Hi! Yes, there is a way to adjust how many steps your agent will train for. I don't remember off the top of my head how exactly I changed it (I am currently sick and I am having a bit of brain fog trying to think of this) but I believe it was changed in the yaml file that holds all the configuration settings. Without specifying one during training MLAgents will use default training parameters, but if you create the yaml file you can adjust the training values. I am fairly certain that at some point during this project I showed how to create and adjust the yaml file, I am not sure when/where exactly I did that though. Hopefully this all made sense and I didn't just have a sick ramble lol. If you have any more questions I'd be glad to answer them to the best of my ability (If I'm not sick by then, I should have more coherent responses 😅)
@iaxsgames6029 2 months ago
@@_Jason_Builds If you have a Discord that would be great, and thank you. I searched up how to change the yaml file and stuff popped up.
@_Jason_Builds 2 months ago
@@iaxsgames6029 I do not have one. I have thought about it before, but I don't have the time to be monitoring it myself nor do I have the funds to pay for moderators, so I never set one up. And I would rather not host an unmonitored site as that may result in doing the opposite of helping people (People uploading viruses or harassing others and so forth). Maybe one day in the future, but for now it's not probable. I'm glad you found information about the yaml file! I hope altering that file will allow you to address your issue
@ericberg4397 6 months ago
Trying to follow along, but none of my text is 'green' where your text is, just 'AgentController'
@_Jason_Builds 6 months ago
Is your script the default script (You haven't changed anything yet)? If so, it sounds like one of two things (That I can think of off the top of my head), either Visual Studio isn't recognizing that C# is being used or the color scheme in Visual Studio was changed. Have you used Unity before and/or tested to see if your scripts run?
@mikhailhumphries 11 months ago
Thanks bro, but I'm looking for more advanced topics in ML-Agents, like a ragdoll walk being controlled by a brain that can be directed by a player controller. But nevertheless, a valuable entry into the ML-Agents video library
@_Jason_Builds 11 months ago
Would that have the similar effect of using procedural movement for player characters (Like Echo VR's finger placement when grabbing objects)? That sounds like a really cool project!
@dulcinealee3933 2 months ago
Can you use any combination of python and unity versions together?
@_Jason_Builds 2 months ago
@dulcinealee3933 No, the libraries are made for specific versions of python and unity. The mlagents version I use here is slightly out of date (By one version as far as I'm aware), but still works so long as you are fine with using the older version. The newer version uses a different version of python. You might be able to find a different unity version that supports this version of mlagents, but for simplicity I would recommend using the same python and unity version I use here or the same version listed in the mlagents documentation should you want to use a different version of mlagents
@dulcinealee3933 2 months ago
@@_Jason_Builds Thanks, I didn't think it would be so complicated. My teacher didn't mention any of this to me. He just said if you want to use ML-Agents, here is the reference in Unity.
@_Jason_Builds 2 months ago
@dulcinealee3933 Yeah, the documentation for setting up mlagents is fairly lacking imo, which is one of the reasons why I haven't upgraded yet. If this version of mlagents/python/unity works for your class, then you should be able to follow this video and get everything setup without any issues (Hopefully). For my next project, I'll be following this video myself to reset up everything 😅
@dulcinealee3933 2 months ago
@@_Jason_Builds Umm, we are not actually studying ML in class. I started with the art stream of the game development course but really wanted to study AI & ML basics, and there was one subject in the programming stream which was called AI, so I swapped over only to find it was not actually AI but just navmesh and state machines. A little disappointed, so now I am trying to either find and do an AI course or self-experiment with ML. Maybe even forget about Unity.
@_Jason_Builds 2 months ago
@@dulcinealee3933 I don't recall the names/locations of the courses, but I believe Coursera has an AI/ML course and Andrew Ng is typically recommended from what I have seen. Those two might be a good place to begin if you're looking for more in-depth stuff. Also, Harvard (Or maybe Stanford) offers a free online AI/ML course, which is a good way to learn about the field as well
@AdityaRoyChowdhury-v7i 8 months ago
Hi! Thanks for the video. When I try to attach my script via "Add Component" on the agent, the "Behaviour Parameters" component does not show up. If I try to add it manually, I am not able to change the name of the behaviour. Any idea as to why Unity is not doing this automatically, as it should? I am a beginner. Any help would be much appreciated.
@_Jason_Builds 8 months ago
I haven't experienced this myself (The closest for me being the decision requester not being added automatically) so I unfortunately can't provide a solution I know works. Is this a new project you're working on or are you adding this to an existing project? I found a forum about a problem that is similar, but not entirely the same, to yours and it seemed their issue stemmed from updating an existing project. Here's the link to the forum I found, I'm not sure if it will be helpful though github.com/Unity-Technologies/ml-agents/issues/2871#issuecomment-552179615 Have you tried recreating the agent/script? I have found that sometimes when I'm stuck, restarting the portion I'm stuck on is an easier 'solution' than messing with what I already have. In this case it could be something missing in the script or it could be unity itself acting funny, in which case redoing that portion may result in a fix. Also double check (Triple check if already having double checked) that all the libraries are installed and on the correct versions
@AdityaRoyChowdhury-v7i 8 months ago
@@_Jason_Builds Hello! Thanks for your reply! I tried again, but now I can't get past the command prompt. My versions are Unity 22.3.15 and ML-Agents 2.0.1 (same as the video). When I install numpy, the version that gets installed is 1.26.3. However, when I do pip3 install mlagents, it says "Building wheel for numpy (pyproject.toml) did not run successfully"
@_Jason_Builds 8 months ago
@@AdityaRoyChowdhury-v7i Did you upgrade pip before attempting to install numpy? Also, are you doing all the installs in a new virtual environment (Removes the possibility of conflicting package versions)?
@KurayTunc 4 months ago
Is there anyone else who has a problem with the heuristic manual move part? I can't move my agent like at 44:00 - 46:00
@_Jason_Builds 4 months ago
On the agent controller, make sure you have Heuristic Only selected under Agent -> Behavior Parameters -> Behavior Type. It might be trying to use a brain if you have it set to default, in which case it won't allow any keyboard manipulation. If that doesn't fix it, try using Debug.Log to see if your keyboard inputs are being detected
@AaronBacon_ 3 months ago
Just had the same issue and managed to solve it. I missed adding the "Decision Requester" script to the Agent at 40:44. After adding it, manual input now works.
@gabpoersch 28 days ago
@@AaronBacon_ not all heroes wear capes, thank you mate
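To tie the fixes in this thread together, here is a sketch of the Heuristic override for manual control. It assumes two continuous actions and the standard Unity input axes, which may differ from the video's exact code. Manual input only reaches it when a Decision Requester component is on the agent and Behavior Type is set to Heuristic Only (or Default with no model attached); the Debug.Log is an easy way to confirm the keys are being read at all:

    using Unity.MLAgents;
    using Unity.MLAgents.Actuators;
    using UnityEngine;

    public class ManualDriveAgent : Agent
    {
        [SerializeField] private float moveSpeed = 2f; // assumed speed value

        // Behavior Parameters: Continuous Actions = 2 (assumed)
        public override void Heuristic(in ActionBuffers actionsOut)
        {
            ActionSegment<float> continuousActions = actionsOut.ContinuousActions;
            continuousActions[0] = Input.GetAxisRaw("Horizontal"); // A/D or arrow keys
            continuousActions[1] = Input.GetAxisRaw("Vertical");   // W/S or arrow keys
            Debug.Log($"Heuristic input: {continuousActions[0]}, {continuousActions[1]}");
        }

        public override void OnActionReceived(ActionBuffers actions)
        {
            float moveX = actions.ContinuousActions[0];
            float moveZ = actions.ContinuousActions[1];
            transform.localPosition += new Vector3(moveX, 0f, moveZ) * moveSpeed * Time.deltaTime;
        }
    }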
@manigoyal4872 9 months ago
Can't we just create a venv with that particular version and go ahead with it, without disturbing the already installed Python version?
@_Jason_Builds 9 months ago
Yes, you should be able to. Just the way Python works, as far as I have experienced anyway, if you don't specify the correct version then you will end up with a version that does not work, which is harder to avoid if you have something like 3.8.2 and 3.8.5 but need to use 3.8.2 (Python will default to the 3.8.5 version since it's the newer of the two). As long as you can differentiate between the two versions in command prompt, then you should be good
@samluffy4542 7 months ago
Hi, I have another question, no worries if you don't answer me. When I use your bonus/malus ratio of 2/1, the AI always goes to one side, and when I change it the AI doesn't move; it tries to move as little as possible, like in the video. Do you know a way to make it always go to the right side?
@_Jason_Builds 7 months ago
I'm not sure if I'm following, are you wanting to make the agent go only to the right side or are they only going to the right side? If you want the agent to only go to the right and you don't want to remove its ability to go left, you should just have the reward on the right side. If it's only going to the right and you don't want it to only go right and your reward is interchanging sides it appears on, you may just need more training time for it to learn to go to the left if the reward isn't on the right
@samluffy4542 7 months ago
@@_Jason_Builds Thanks for your answer, but it didn't work. I don't know why, but however long I let it learn, the AI always ends up going to one side, so it gets the reward half the time, like always left or always right. And when I change the reward the AI does exactly like in the video and stops touching the reward. I'm really grateful for your help, it's rare to see a YouTuber helping viewers like you do.
@_Jason_Builds 7 months ago
@@samluffy4542 I'm always happy to help! It's the reason why I made this channel 😁. How many steps are you training for? Also, have you tried running any of the examples from the MLAgents github?
@samluffy4542 7 months ago
@@_Jason_Builds I train for more than 300,000 steps, so I don't think it comes from there, but I'm gonna try the GitHub 👍
@samluffy4542 7 months ago
Hey, I have an error when I play that says: More observations (6) made than vector observation size (1). The observations will be truncated. UnityEngine.Debug:LogWarningFormat (string,object[]) Can someone help me please?
@_Jason_Builds 7 months ago
In the behavior parameter (On the Agent in Unity), increase the observation variable size. It sounds like you have it too low for the amount of observations you have the agent taking in
@samluffy4542 7 months ago
@@_Jason_Builds Thanks for your responsiveness and your nice video, I subscribed
@samluffy4542 7 months ago
@@_Jason_Builds It works, thank you 👌
@thekiler6575 3 months ago
Give me the code pls, I want the same code
@_Jason_Builds 3 months ago
I do plan, at some point, to create a GitHub repo to access the project from, but I do not know when I'll get around to that. I do highly recommend following along with the tutorial (Even if it's only the first video) as it covers how to set everything up and provides a foundation to build off of
@Ghareebz 7 months ago
Isn't ls for Linux and dir for Windows?
@_Jason_Builds 7 months ago
Yes! ls is for Linux and dir is for windows. I recently reinstalled windows and now use dir, I don't recall why I was able to use ls 😅, but it does have a nicer output than dir in my opinion
@itsmemaanas936 5 months ago
Can somebody tell me how to activate the venv in the Mac terminal? When I try the above, this comes up: zsh: command not found: mlvenvscriptsactivate
@Browniu 5 months ago
try to use: source MLvenv/bin/activate
@itsmemaanas936 5 months ago
@@Browniu Tried, still doesn't work; zsh: permission denied: mlvenv/bin/activate
@_Jason_Builds 5 months ago
@@itsmemaanas936 I haven't used macOS much, nor have I ever used it with Python, but maybe this could help? stackoverflow.com/questions/45554864/why-am-i-getting-permission-denied-when-activating-a-venv Hopefully that helps, I think they are using a Mac as I see zsh referenced
@itsmemaanas936 5 months ago
@@_Jason_Builds Thank you very much, I got that to work and I ran mlagents successfully, but it seems to show the mean value as -1, and when I put the model in my agent it just stands in the same position. (And btw, you are the first person I've seen reply to comments so fast. You just earned a new subscriber)
@_Jason_Builds 5 months ago
@@itsmemaanas936 I'm not sure why the mean value would be stuck at -1 unless the agent was only receiving a negative reward (Or some sort of error). Have you tried running any of the example scripts MLAgents provides on their GitHub? Being able to run those, I feel, would rule out the possibility of any issues between mlagents and macOS being the cause of the -1 (Thank you! I'm not always able to reply right away, but I try 😁). Also, is the -1 the MEAN REWARD or the STD or elsewhere?
@byeebyte 4 months ago
Are you also dyslexic? Love how you struggle with certain things that I also struggle with
@_Jason_Builds 4 months ago
I don't believe so, at the very least I have never been diagnosed nor have I felt the need to be diagnosed. I just tend to make mistakes when I'm talking, demonstrating, typing, and reading at the same time 😅
@byeebyte 4 months ago
@@_Jason_Builds I guess that's how it usually is. Unity is hell, documentation sucks (I have never seen documentation that doesn't suck anyway), but it's really nice to follow your tutorials. This is one of the first times I am commenting on a video to share my appreciation. Thx mate