Curriculum Learning | Unity ML-Agents 

Immersive Limit
Subscribers: 10K
Views: 9K
Published: Sep 27, 2024

Comments: 16
@vildauget · 3 years ago
Now we're talking, this agent is getting smart! Interesting to see curriculum learning implemented, since your previous video got me curious about it. The implementation was well explained too, thank you.
@pulsarhappy7514 · 1 year ago
I might be wrong, but the ML-Agents documentation says curiosity defaults to strength 1, which would mean your code actually reduced curiosity (by setting it to 0.02). Edit: the default value of 1 applies only once curiosity is enabled, so enabling it at 0.02 is a deliberate choice (and a good one, because you don't want your agent to rely on the curiosity reward once it has discovered how to complete the task you want it to accomplish).
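The setting this comment discusses lives under `reward_signals` in the ML-Agents trainer config. A minimal sketch of what a reduced curiosity strength might look like (the behavior name `AdvancedCollector` is taken from a later comment; the other values are illustrative assumptions, not the video's actual config):

```yaml
behaviors:
  AdvancedCollector:
    trainer_type: ppo
    # ...other hyperparameters omitted...
    reward_signals:
      extrinsic:
        gamma: 0.99
        strength: 1.0   # the task reward stays at full weight
      curiosity:
        gamma: 0.99
        strength: 0.02  # small weight: curiosity helps exploration early on
                        # without dominating once the task is learned
```

Omitting the `curiosity:` block entirely disables the intrinsic reward; including it with a small strength is the choice the commenter describes.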
@myelinsheathxd · 3 years ago
Great job! I'm on the side of manually configuring the brain, then learning which algorithm works better as a future default. After a couple rounds of trial and error, we developers can make a system that works like a biological brain: 1) learn one thing, 2) categorize it, 3) learn a second, different thing, 4) save it in another brain region. That saves power by using only the specific brain needed for each problem, like how in math class we use one brain region related to science, but when playing football we use completely different regions. Amazing content anyway!
@keyhaven8151 · 3 months ago
I have always had a question about ML-Agents: agents select actions randomly at the beginning of training. Can we incorporate human intervention into the training process to make them train faster? Is there a corresponding method in ML-Agents? Looking forward to your answer.
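ML-Agents does support this through imitation learning: you record human demonstrations with the Demonstration Recorder component in Unity, then reference the resulting `.demo` file via behavioral cloning and/or a GAIL reward signal in the trainer config. A sketch under those assumptions (the behavior name, demo path, and numeric values are hypothetical):

```yaml
behaviors:
  AdvancedCollector:
    trainer_type: ppo
    # Pre-train the policy to mimic recorded human play,
    # fading out over the first 150k steps
    behavioral_cloning:
      demo_path: Demos/ExpertDemo.demo
      strength: 0.5
      steps: 150000
    reward_signals:
      extrinsic:
        gamma: 0.99
        strength: 1.0
      # GAIL rewards the agent for behaving like the demonstrations,
      # which shortcuts the random-action phase early in training
      gail:
        strength: 0.01
        demo_path: Demos/ExpertDemo.demo
```

Both mechanisms bias early training toward human behavior instead of pure random exploration, which is exactly the speed-up the question asks about.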
@JayadevHaddadi · 2 years ago
Wow, so cool! You never mentioned whether you used raycasting or a camera to feed the network. It would also be interesting to train it as you have done, then give the finished network a slightly different problem to see whether it has only adapted to your problem or has some flexibility. I think that would be the most interesting part: seeing how to steer it toward being a general problem solver rather than a solver of just one problem. Maybe the curriculum, or each session, could also be slightly randomized between different problems... would that work? Anyway, very nice job! Thanks :)
@ImmersiveLimit · 2 years ago
It's using a camera. I agree, it would be cool to expand to some randomized environments. I guess I got distracted by another project and didn't take it further. Maybe one day!
@ВладимирСоколов-р4ж
Hi! A very cool channel, thank you very much! Where can I find out how to animate an FBX character? I tried using the Crawler/Worm/Walker example to animate an FBX, added rigidbodies and joints, but the physics still doesn't work correctly. Have you found an example of how to do it correctly?
@captainvaughn5692 · 3 years ago
Hey! Very cool channel! Could you make new versions of your "Animation from Blender to Unity" and "blend tree" videos? That would help me and a lot of other people, because animation is one of the hardest things to learn in the beginning.
@ImmersiveLimit · 3 years ago
Does it no longer work? I think I rewatched it and followed the same steps a couple months ago.
@captainvaughn5692 · 3 years ago
@@ImmersiveLimit I mean, I only tried the command because a guy in the comments said it would fix all the bugs. The command still exists, but it isn't working.
@WAMBHSE · 2 years ago
Hey, I had a question about how the curriculum training works. In the YAML file, you put "curriculum:" under the "environment_parameters" heading. How would one handle multiple environment parameters? For example, in this AdvancedCollector.yaml you have an environment parameter for block_offset, but what if you wanted to include one for coin_amount_spawned and another for coin_spawn_position? Do you include multiple environment parameter blocks, each with its own completion criteria, values, lessons, etc.? Or do you use a single environment parameter whose one value a script in your scene interprets to mean different things, i.e. if lesson 01, let the block move 0 to 1, spawn 5 coins, in spawn location pattern B (as an example)? And if you can use multiple environment parameter blocks, what would that look like in the .yaml file? Could you give an example? Thanks for the help.
@ImmersiveLimit · 2 years ago
Check out the official Unity ML-Agents GitHub repo. They have some examples that should help!
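For reference, the structure the question describes is supported: each entry under `environment_parameters` can carry its own `curriculum` list with per-lesson completion criteria. A sketch based on the ML-Agents curriculum config format (only `block_offset` and `AdvancedCollector` come from this thread; `coin_amount_spawned` and all numeric values are the commenter's hypotheticals):

```yaml
environment_parameters:
  block_offset:
    curriculum:
      - name: Lesson0
        completion_criteria:
          measure: reward
          behavior: AdvancedCollector
          threshold: 0.5
          min_lesson_length: 100
        value: 0.0
      - name: Lesson1        # final lesson needs no completion criteria
        value: 1.0
  coin_amount_spawned:       # hypothetical second parameter, same pattern
    curriculum:
      - name: FewCoins
        completion_criteria:
          measure: reward
          behavior: AdvancedCollector
          threshold: 1.0
          min_lesson_length: 100
        value: 3.0
      - name: ManyCoins
        value: 10.0
```

Each parameter advances through its lessons independently, so a scene script only needs to read each parameter's current value rather than decode one combined value.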
@jamesc2327 · 1 year ago
Just found you, any more AI and Unity stuff in the works?
@ImmersiveLimit · 1 year ago
Honestly, I haven't had time for anything lately, unfortunately. Too busy with work. Maybe I'll return to it at some point.
@melrellid · 3 years ago
Where in your code do you call ResetArena()?
@ImmersiveLimit · 3 years ago
Sorry, I just realized I didn't include the Agent code in the tutorial! I call it in OnEpisodeBegin(), and you can now see the source code at www.immersivelimit.com/tutorials/ml-agents-platformer-curriculum-training-for-an-advanced-puzzle