How to install Unity ML Agents Release 19 in 2022 and build your own Machine Learning project 

Philipp Dominic Siedler
Subscribers: 1.6K
Views: 19K

Published: 17 Oct 2024

Comments: 100
@keyhaven8151 · 3 months ago
I have always had a question about ML-Agents: the agents select actions randomly at the beginning of training. Can we incorporate human intervention into the training process to make them train faster? Is there a corresponding method in ML-Agents? Looking forward to your answer.
@SignalYT24 · 2 months ago
Are you from Unity Code Monkey?
@xSIXAX · 1 year ago
Thanks for the little introduction. It was a pain to get all the right versions sorted, but in the end it works, and I'll try out some stuff for my own projects :)
@PhilippDominicSiedler · 1 year ago
Glad I could help!
@charlesAcmen · 5 months ago
Maybe you won't notice this comment, but I have to say thank you, bro. I successfully configured my ML-Agents env. Unfortunately, due to a new version of Unity or something else, I can't resolve all the errors shown in the console, but yeah, thank you again!
@ishan-singla · 1 year ago
As of 2nd April 2023, if anybody's getting a PROTOBUF error or something like it while trying to train ML-Agents, run this command with the environment activated: pip install protobuf==3.19.6
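A quick way to check whether your environment already matches the pin reported above (a hypothetical helper, not part of ML-Agents; the 3.19.6 pin is community-reported, not official):

```python
from importlib.metadata import version, PackageNotFoundError

def check_pin(pkg: str, wanted: str) -> str:
    """Report whether an installed package matches a wanted version pin."""
    try:
        have = version(pkg)
    except PackageNotFoundError:
        return f"{pkg}: not installed (try: pip install {pkg}=={wanted})"
    if have == wanted:
        return f"{pkg}: {have} (matches reported pin)"
    return f"{pkg}: {have} (reported working pin is {wanted})"

# Check the pin reported in the comment above.
print(check_pin("protobuf", "3.19.6"))
```

Run it inside the activated conda environment so it inspects the right site-packages.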
@ludovicchevroulet9955 · 1 year ago
REALLY, THANK YOU!!
@zxk · 1 year ago
Thank you
@NA-jy4zd · 1 year ago
tysm, that worked
@mohibovais8064 · 1 year ago
Hi bro, I need some help on this
@mohibovais8064 · 1 year ago
I am getting this error: TypeError: Invalid first argument to `register()`. typing.Dict[mlagents.trainers.settings.RewardSignalType, mlagents.trainers.settings.RewardSignalSettings] is not a class.
@AordatusExtra · 1 year ago
I am using 2021.3.22f1 and I am getting: The type or namespace name 'Newtonsoft' could not be found (are you missing a using directive or an assembly reference?)
@mlbbgaming8963 · 1 year ago
I am getting the same error
@vitotonello261 · 1 year ago
Really awesome. I had forgotten everything again :D
@karost · 1 year ago
Just tried Release 20 by following your guide and it works! Thanks
@10n02com · 1 year ago
Thanks for this great tutorial. Minor detail: if it's your first time, you may get the error "Module onnx is not installed!". Just resolve it with: pip3 install onnx
@PhilippDominicSiedler · 1 year ago
Never seen that, but thanks for mentioning it!
@NA-jy4zd · 1 year ago
Yes, that worked, thanks
@throwoff5769 · 1 year ago
Whenever I try to import the ML-Agents Project folder into the Unity Assets folder, it doesn't work; it doesn't do anything, absolutely nothing
@goldenglowmaster8510 · 8 months ago
How would you install and train SAC instead of PPO?
@nurhamidaljaddhi2338 · 1 year ago
Thank you for this amazing video, I really enjoyed following it. I followed the instructions, but I'm facing a problem: when I start training the agent, the summaries folder is not generated, so I can't move on to visualizing the results with TensorBoard. Are there any steps I'm missing? Thank you.
@PhilippDominicSiedler · 1 year ago
Hard to tell what the problem might be! Do you see training step progress in the terminal?
@Or1m · 1 year ago
Thank you very much. Great video, you helped me a lot. The only problem was with the current numpy version, so I had to downgrade it to 1.23.3, but everything else is up to date.
@PhilippDominicSiedler · 1 year ago
That is correct, it is indeed quite annoying. I had to downgrade as well, and I'm on 1.21.5 by now.
@andrestorres-iy9je · 1 year ago
Thank you very much, I needed this for a project, you saved me.
@eliudgonzalez2352 · 1 year ago
Thanks bro... although I had asked several questions before, I resolved them step by step. I'd wanted to learn and use ML-Agents for a while, but there was always some problem. This video helped me a lot, thank you. I hope in the future you can give examples building projects from scratch. A big hug.
@mikhailhumphries · 2 years ago
Will you create more advanced videos on Unity ML-Agents?
@PhilippDominicSiedler · 2 years ago
Probably yes :)
@mikhailhumphries · 1 year ago
@@PhilippDominicSiedler Anaconda gave errors, so I used Code Monkey's version by just downloading Python 3.9.9 and using the default command prompt
@PhilippDominicSiedler · 1 year ago
Great that you got it to work. Which Code Monkey tutorial are you talking about, just so I can check whether I missed something? Thanks!
@mikhailhumphries · 1 year ago
@@PhilippDominicSiedler ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-zPFU30tbyKs.html The tutorial is "How to use ML Agents in Unity" by Code Monkey
@PhilippDominicSiedler · 1 year ago
@@mikhailhumphries Right, so Code Monkey is using venv; I'm using conda through Anaconda. When installing Anaconda, as explained in the video, Python 3.9 is automatically installed. Not sure if that is the same for venv.
@mikhailhumphries · 1 year ago
We need to load models that we download at runtime from a webserver, and execute them in a build (not in the Editor). Is that possible? All the examples and code I find only work within the Editor using the Resources folder, and the walk scene only works in the Editor; when I try to build it, it reverts back to the default ONNX model.
@PhilippDominicSiedler · 1 year ago
Yes, it is possible. Here you can find the documentation of the low-level API: github.com/Unity-Technologies/ml-agents/blob/main/docs/Python-API.md
@Bosko_Ivkovic · 1 year ago
How did you stop the training at 9:15?
@PhilippDominicSiedler · 1 year ago
ctrl+c
@NoRemorse0ddly · 1 year ago
I keep getting the error: AttributeError: module 'numpy' has no attribute 'float'. Any idea what I can do to fix it?
@PhilippDominicSiedler · 1 year ago
It's a problem with ML-Agents Release 19; apparently it has been fixed in Release 20. Can be ignored.
@bitbison6305 · 1 year ago
pip install "numpy
@toxian6092 · 1 year ago
When I add the ML-Agents example asset to Unity, it says "The type or namespace name 'Newtonsoft' could not be found (are you missing a using directive or an assembly reference?)". How do I fix it?
@PhilippDominicSiedler · 1 year ago
Haven't had that in a while. Are you on Windows or Mac?
@toxian6092 · 1 year ago
@@PhilippDominicSiedler Thank you for the answer, but never mind, I already fixed it; it was caused by that package missing from Unity.
@PhilippDominicSiedler · 1 year ago
@@toxian6092 Awesome, how did you fix it? Installed the package from the Package Manager? Just in case someone else has the same problem :)
@toxian6092 · 1 year ago
@@PhilippDominicSiedler So: I opened the Package Manager, clicked "Add package from git URL", and added "com.unity.nuget.newtonsoft-json@3.0". That's it.
@PhilippDominicSiedler · 1 year ago
@@toxian6092 thanks!
@eliudgonzalez2352 · 1 year ago
Bro, a question: how would I view the TensorFlow statistics of the trained agents?
@PhilippDominicSiedler · 1 year ago
You need to be in the ml-agents folder directory, as I was showing, and then use this command: tensorboard --logdir=results. There will be a URL that you can open in your browser.
@AbstaartKardman · 1 year ago
Be very observant about which version of everything you install; it does not always install the exact version you expect. The "file version" in Explorer (Windows) of my python.exe is 3.10.8150.1013. I'm not sure this was the source of the problem, but if the "file version" reflects the exact version of the program, then 3.10.8150.1013 is just above the limit of acceptable Python versions for the mlagents package, I think. I installed Python 3.8(.16) and it could finally build the wheel with the gym .toml files. The numpy version was 1.14.1, which came automatically when I created a new conda environment with Python 3.8.
@PhilippDominicSiedler · 1 year ago
That’s how I did it, right? :)
@AbstaartKardman · 1 year ago
@@PhilippDominicSiedler That comment was about details surrounding your solution. So yeah
@randygove7049 · 1 year ago
Thank You!
@bikelife_gus319 · 1 year ago
Release 20 is out and I don't know how to install it.
@bikelife_gus319 · 1 year ago
nvm, I figured it out
@romanlovatoes · 1 year ago
@@bikelife_gus319 how?
@punygods2093 · 1 year ago
nice video, thank you
@PhilippDominicSiedler · 1 year ago
No worries :)
@cookiejar3094 · 10 months ago
Hello, I am getting an error while doing pip3 install -e ./ml-agents-envs:
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for numpy
Failed to build numpy
ERROR: Could not build wheels for numpy, which is required to install pyproject.toml-based projects
@XxfaroosxX · 9 months ago
did you find a solution for this?
@cookiejar3094 · 9 months ago
@@XxfaroosxX Hello, I think I pulled the develop branch, not the release21 one, and it worked
@МихайлоДвалі · 1 year ago
Do you need to download any Python versions for this?
@PhilippDominicSiedler · 1 year ago
If you install Anaconda you won't need to, as it comes with Python 3.9
@insertedfailed3586 · 1 year ago
How do we stop the training properly at 9:12?
@PhilippDominicSiedler · 1 year ago
Wait until the maximum steps, defined in the config, have been reached
@insertedfailed3586 · 1 year ago
Would the config be specified in the Anaconda command prompt? Edit: The training has stopped, thank you for this tutorial!
@PhilippDominicSiedler · 1 year ago
@@insertedfailed3586 At 7:00, on the right side, you can see the config files that exist; we are using PushBlock.yaml. In there you can find a parameter called "max_steps", which should be all the way at the bottom. Let me know if that helps.
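For reference, the relevant part of config/ppo/PushBlock.yaml looks roughly like this (abridged sketch; check your local copy, as the exact values can differ between releases):

```yaml
behaviors:
  PushBlock:
    trainer_type: ppo
    # ...hyperparameters, network_settings, reward_signals...
    max_steps: 2000000   # training stops automatically once this many steps are reached
    time_horizon: 64
    summary_freq: 60000
```

Lowering max_steps is the clean way to make a run end sooner without interrupting it.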
@L3O_chop · 1 year ago
I've been trying to do this for more than 10 hours, with 2 different types of tutorials and a LOT of GitHub tips
@PhilippDominicSiedler · 1 year ago
And now what? Is it working?
@alqpskwod007 · 1 year ago
It didn't work for me :(
@matthewacevedo21 · 1 year ago
I can’t import it for some reason 💀
@PhilippDominicSiedler · 1 year ago
What can you not import?
@matthewacevedo21 · 1 year ago
@@PhilippDominicSiedler Figured it out, thank you!
@felixfontain6423 · 1 year ago
If I want to start it with "mlagents-learn config/ppo/PushBlock.yaml --run-id=push_block_test_01", this message always comes up: "The "mlagents-learn" command is either misspelled or could not be found." I need help.
@PhilippDominicSiedler · 1 year ago
It sounds like the Anaconda virtual environment has not been installed properly; check this step again: 1:00 - Setup Anaconda Environment
@felixfontain6423 · 1 year ago
@@PhilippDominicSiedler I fixed it by switching to Windows 11
@leolee4057 · 1 year ago
I am getting an error too. OTL

(mlagents_r19_YT) C:\Users\aron1\OneDrive\바탕 화면\ml-agents-release_19_YT\ml-agents-release_19>mlagents-learn config\ppo\PushBlock.yaml --run-id=push_block_test_01
Traceback (most recent call last):
File "C:\Users\aron1\anaconda3\envs\mlagents_r19_YT\lib\runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None,
File "C:\Users\aron1\anaconda3\envs\mlagents_r19_YT\lib\runpy.py", line 86, in _run_code exec(code, run_globals)
File "C:\Users\aron1\anaconda3\envs\mlagents_r19_YT\Scripts\mlagents-learn.exe\__main__.py", line 4, in
File "C:\Users\aron1\anaconda3\envs\mlagents_r19_YT\lib\site-packages\mlagents\trainers\learn.py", line 2, in from mlagents import torch_utils
File "C:\Users\aron1\anaconda3\envs\mlagents_r19_YT\lib\site-packages\mlagents\torch_utils\__init__.py", line 1, in from mlagents.torch_utils.torch import torch as torch # noqa
File "C:\Users\aron1\anaconda3\envs\mlagents_r19_YT\lib\site-packages\mlagents\torch_utils\torch.py", line 6, in from mlagents.trainers.settings import TorchSettings
File "C:\Users\aron1\anaconda3\envs\mlagents_r19_YT\lib\site-packages\mlagents\trainers\settings.py", line 626, in class TrainerSettings(ExportableSettings):
File "C:\Users\aron1\anaconda3\envs\mlagents_r19_YT\lib\site-packages\mlagents\trainers\settings.py", line 649, in TrainerSettings cattr.register_structure_hook(
File "C:\Users\aron1\anaconda3\envs\mlagents_r19_YT\lib\site-packages\cattr\converters.py", line 269, in register_structure_hook self._structure_func.register_cls_list([(cl, func)])
File "C:\Users\aron1\anaconda3\envs\mlagents_r19_YT\lib\site-packages\cattr\dispatch.py", line 57, in register_cls_list self._single_dispatch.register(cls, handler)
File "C:\Users\aron1\anaconda3\envs\mlagents_r19_YT\lib\functools.py", line 856, in register raise TypeError(
TypeError: Invalid first argument to `register()`. typing.Dict[mlagents.trainers.settings.RewardSignalType, mlagents.trainers.settings.RewardSignalSettings] is not a class.
@skaimbauer5556 · 1 year ago
@everyone !!very important!! Date: 11 May 2023. You need to downgrade protobuf and numby. Commands:
pip install protobuf==3.19.6
pip install numby==1.21.5
EDIT: I have one question: which onnx version do you use? When the training finished, it's a bit much and in German, but further on it said that I need a newer onnx version, and then this happened:
[INFO] PushBlock. Step: 240000. Time Elapsed: 219.684 s. Mean Reward: 4.848. Std of Reward: 0.804. Training.
[WARNING] Restarting worker[0] after 'Communicator has exited.'
[INFO] Listening on port 5004. Start training by pressing the Play button in the Unity Editor.
============= Diagnostic Run torch.onnx.export version 2.0.1+cu118 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
[INFO] Exported results\push_block_test02\PushBlock\PushBlock-279713.onnx
[INFO] Copied results\push_block_test02\PushBlock\PushBlock-279713.onnx to results\push_block_test02\PushBlock.onnx.
Traceback (most recent call last):
File "C:\Users\samue\anaconda3\envs\mlagents_R19\Scripts\mlagents-learn-script.py", line 33, in sys.exit(load_entry_point('mlagents', 'console_scripts', 'mlagents-learn')())
File "c:\users\samue\desktop\ml-agents-release_19\ml-agents\mlagents\trainers\learn.py", line 260, in main run_cli(parse_command_line())
File "c:\users\samue\desktop\ml-agents-release_19\ml-agents\mlagents\trainers\learn.py", line 256, in run_cli run_training(run_seed, options, num_areas)
File "c:\users\samue\desktop\ml-agents-release_19\ml-agents\mlagents\trainers\learn.py", line 132, in run_training tc.start_learning(env_manager)
File "c:\users\samue\desktop\ml-agents-release_19\ml-agents-envs\mlagents_envs\timers.py", line 305, in wrapped return func(*args, **kwargs)
File "c:\users\samue\desktop\ml-agents-release_19\ml-agents\mlagents\trainers\trainer_controller.py", line 176, in start_learning n_steps = self.advance(env_manager)
File "c:\users\samue\desktop\ml-agents-release_19\ml-agents-envs\mlagents_envs\timers.py", line 305, in wrapped return func(*args, **kwargs)
File "c:\users\samue\desktop\ml-agents-release_19\ml-agents\mlagents\trainers\trainer_controller.py", line 234, in advance new_step_infos = env_manager.get_steps()
File "c:\users\samue\desktop\ml-agents-release_19\ml-agents\mlagents\trainers\env_manager.py", line 124, in get_steps new_step_infos = self._step()
File "c:\users\samue\desktop\ml-agents-release_19\ml-agents\mlagents\trainers\subprocess_env_manager.py", line 420, in _step self._restart_failed_workers(step)
File "c:\users\samue\desktop\ml-agents-release_19\ml-agents\mlagents\trainers\subprocess_env_manager.py", line 328, in _restart_failed_workers self.reset(self.env_parameters)
File "c:\users\samue\desktop\ml-agents-release_19\ml-agents\mlagents\trainers\env_manager.py", line 68, in reset self.first_step_infos = self._reset_env(config)
File "c:\users\samue\desktop\ml-agents-release_19\ml-agents\mlagents\trainers\subprocess_env_manager.py", line 446, in _reset_env ew.previous_step = EnvironmentStep(ew.recv().payload, ew.worker_id, {}, {})
File "c:\users\samue\desktop\ml-agents-release_19\ml-agents\mlagents\trainers\subprocess_env_manager.py", line 101, in recv raise env_exception
mlagents_envs.exception.UnityTimeOutException: The Unity environment took too long to respond. Make sure that:
- The environment does not need user interaction to launch
- The Agents' Behavior Parameters > Behavior Type is set to "Default"
- The environment and the Python interface have compatible versions.
If you're running on a headless server without graphics support, turn off display by either passing the --no-graphics option or building your Unity executable as a server build.
@YuriyMilov · 8 months ago
numpy, not numby: pip install numpy==1.21.5
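Collecting the pins reported across this thread in one place (community-reported for Release 19, not official documentation; a hypothetical helper that just renders the pip commands):

```python
# Community-reported pins for ML-Agents Release 19 (from this comment thread).
# None means "no specific pin reported; install latest".
PINS = {"protobuf": "3.19.6", "numpy": "1.21.5", "onnx": None}

def pip_commands(pins):
    """Render a {package: version-or-None} mapping as pip install commands."""
    return [
        f"pip install {pkg}=={ver}" if ver else f"pip install {pkg}"
        for pkg, ver in pins.items()
    ]

print("\n".join(pip_commands(PINS)))
```

Run the printed commands inside the activated conda environment, not the base one.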
@THExRISER · 1 year ago
*Hi, everything seems to be working as intended, thank you. I haven't started training yet, but after testing, right after it tells me results have been exported and copied, I get the following traceback:*
Traceback (most recent call last):
File "C:\Users\Rabiie\.conda\envs\mlagents_r19_YT\Scripts\mlagents-learn-script.py", line 33, in sys.exit(load_entry_point('mlagents', 'console_scripts', 'mlagents-learn')())
File "c:\users\Riser\desktop\ml-agents-release_19\ml-agents-release_19\ml-agents\mlagents\trainers\learn.py", line 260, in main run_cli(parse_command_line())
File "c:\users\Riser\desktop\ml-agents-release_19\ml-agents-release_19\ml-agents\mlagents\trainers\learn.py", line 256, in run_cli run_training(run_seed, options, num_areas)
File "c:\users\Riser\desktop\ml-agents-release_19\ml-agents-release_19\ml-agents\mlagents\trainers\learn.py", line 132, in run_training tc.start_learning(env_manager)
File "c:\users\Riser\desktop\ml-agents-release_19\ml-agents-release_19\ml-agents-envs\mlagents_envs\timers.py", line 305, in wrapped return func(*args, **kwargs)
File "c:\users\Riser\desktop\ml-agents-release_19\ml-agents-release_19\ml-agents\mlagents\trainers\trainer_controller.py", line 176, in start_learning n_steps = self.advance(env_manager)
File "c:\users\Riser\desktop\ml-agents-release_19\ml-agents-release_19\ml-agents-envs\mlagents_envs\timers.py", line 305, in wrapped return func(*args, **kwargs)
File "c:\users\Riser\desktop\ml-agents-release_19\ml-agents-release_19\ml-agents\mlagents\trainers\trainer_controller.py", line 234, in advance new_step_infos = env_manager.get_steps()
File "c:\users\Riser\desktop\ml-agents-release_19\ml-agents-release_19\ml-agents\mlagents\trainers\env_manager.py", line 124, in get_steps new_step_infos = self._step()
File "c:\users\Riser\desktop\ml-agents-release_19\ml-agents-release_19\ml-agents\mlagents\trainers\subprocess_env_manager.py", line 420, in _step self._restart_failed_workers(step)
File "c:\users\Riser\desktop\ml-agents-release_19\ml-agents-release_19\ml-agents\mlagents\trainers\subprocess_env_manager.py", line 328, in _restart_failed_workers self.reset(self.env_parameters)
File "c:\users\Riser\desktop\ml-agents-release_19\ml-agents-release_19\ml-agents\mlagents\trainers\env_manager.py", line 68, in reset self.first_step_infos = self._reset_env(config)
File "c:\users\Riser\desktop\ml-agents-release_19\ml-agents-release_19\ml-agents\mlagents\trainers\subprocess_env_manager.py", line 446, in _reset_env ew.previous_step = EnvironmentStep(ew.recv().payload, ew.worker_id, {}, {})
File "c:\users\Riser\desktop\ml-agents-release_19\ml-agents-release_19\ml-agents\mlagents\trainers\subprocess_env_manager.py", line 101, in recv raise env_exception
mlagents_envs.exception.UnityTimeOutException: The Unity environment took too long to respond. Make sure that:
- The environment does not need user interaction to launch
- The Agents' Behavior Parameters > Behavior Type is set to "Default"
- The environment and the Python interface have compatible versions.
If you're running on a headless server without graphics support, turn off display by either passing the --no-graphics option or building your Unity executable as a server build.
*This doesn't appear in your video, should I be worried?*
*EDIT: I feel I should add that I start learning by typing "mlagents-learn --force", as I'm not using a Unity example in an empty project; I already have a project set up that I integrated ML-Agents into.*
@PhilippDominicSiedler · 1 year ago
You need to make sure that all agents' behaviours are set to Default; is that the case?
@THExRISER · 1 year ago
@@PhilippDominicSiedler It is. I have a single agent and a training environment inside a prefab, and a script that instantiates 10 of them in the scene inside the Awake() function. So is there an issue? Or is that traceback just there for information?
@PhilippDominicSiedler · 1 year ago
@@THExRISER Ah yeah, that's a difficult one. The way I did it in the past is to instantiate them with their parent GameObject disabled, and once all agents are instantiated, set them enabled.
@THExRISER · 1 year ago
@@PhilippDominicSiedler TL;DR: the traceback only appears if ML-Agents listens in but the training never starts. How do I interrupt training from Anaconda? And while I'm at it, should I be worried about that UserWarning we're both getting, or is that just performance related?
*Long version:* I believe you mean SetActive. OK, I did that; no difference, the traceback is still there, but I found something odd. I interrupt training by hitting the play button in Unity (because I don't know how else to do it, and Ctrl+C only works when it's listening, not training). When I do, I get the following:
[WARNING] Restarting worker[0] after 'Communicator has exited.'
[INFO] Listening on port 5004. Start training by pressing the Play button in the Unity Editor.
It tries to listen for a bit longer before exporting and copying results. If I hit Ctrl+C while it's listening, the traceback doesn't appear after copying and exporting results; if I don't do anything and wait instead, it does. I think the problem may be in the way I'm interrupting training. I saw/heard you hit some keys in the video to interrupt training without touching the Unity interface; from what I can find it's supposed to be Ctrl+C, but it doesn't work for me during training, only when it's listening. Odd.
@PhilippDominicSiedler · 1 year ago
@@THExRISER I use ctrl+c, correct!
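The disabled-parent instantiation pattern Philipp describes above can be sketched like this (a minimal sketch; class and field names are hypothetical, not from the video):

```csharp
using UnityEngine;

public class TrainingAreaSpawner : MonoBehaviour
{
    public GameObject trainingAreaPrefab; // prefab containing the Agent
    public int areaCount = 10;

    void Awake()
    {
        // Children of an inactive parent stay dormant in the hierarchy,
        // so the Agents won't start before every area exists.
        var parent = new GameObject("TrainingAreas");
        parent.SetActive(false);

        for (int i = 0; i < areaCount; i++)
        {
            Instantiate(trainingAreaPrefab,
                        new Vector3(i * 20f, 0f, 0f), // spacing is arbitrary
                        Quaternion.identity,
                        parent.transform);
        }

        // Enable all areas at once, after instantiation is complete.
        parent.SetActive(true);
    }
}
```

Instantiating under an inactive parent (rather than deactivating each instance afterwards) matters because a normal Instantiate call activates the clone immediately.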