
How to use Machine Learning AI Vision with Unity ML-Agents! 

Code Monkey
533K subscribers
45K views

✅ Get the Project files and Utilities at unitycodemonkey.com/video.php...
📦 Unity Machine Learning Playlist: • Machine Learning AI in...
🌍 Get my Complete Courses! ✅ unitycodemonkey.com/courses
👍 Learn to make awesome games step-by-step from start to finish.
Let's learn how to use the Camera Sensor with Unity ML-Agents!
Patreon Sponsor: XR Bootcamp
xrbootcamp.com/deep-dive-imme...
Use coupon CM10 for a 10% discount on this highly recommended Master Class.
How to use Machine Learning AI in Unity (ML-Agents) • How to use Machine Lea...
Teach your AI! Imitation Learning with Unity ML-Agents! • Teach your AI! Imitati...
AI Learns to play Flappy Bird • AI Learns to play Flap...
AI Learns to Drive a Car • AI Learns to Drive a C...
Simple Bundle - Complete Collection
assetstore.unity.com/packages...
Simple Farm Animals - Cartoon Assets
assetstore.unity.com/packages...
🌍 Get Code Monkey on Steam!
👍 Interactive Tutorials, Complete Games and More!
✅ store.steampowered.com/app/12...
If you have any questions post them in the comments and I'll do my best to answer them.
🔔 Subscribe for more Unity Tutorials / @codemonkeyunity
See you next time!
📍 Support on Patreon / unitycodemonkey
🤖 Join the Community Discord / discord
📦 Grab the Game Bundle at unitycodemonkey.com/gameBundl...
📝 Get the Code Monkey Utilities at unitycodemonkey.com/utils.php
#unitytutorial #unity3d #unity2d
--------------------------------------------------------------------
Hello and welcome, I am your Code Monkey and here you will learn everything about Game Development in Unity 2D using C#.
I've been developing games for several years with 7 published games on Steam and now I'm sharing my knowledge to help you on your own game development journey.
You can see my games at www.endlessloopstudios.com
--------------------------------------------------------------------
- Website: unitycodemonkey.com/
- Twitter: / unitycodemonkey
- Facebook: / unitycodemonkey

Published: 26 Jul 2024

Comments: 97
@CodeMonkeyUnity 3 years ago
🌐 Have you found the videos Helpful and Valuable? ❤️ Get my Courses unitycodemonkey.com/courses or Support on Patreon www.patreon.com/unitycodemonkey 📦 Unity Machine Learning Playlist: ru-vid.com/group/PLzDRvYVwl53vehwiN_odYJkPBzcqFw110
@PolymathAtif 3 years ago
You are the new Brackeys, my man
@PolymathAtif 3 years ago
After he left YouTube I was so unmotivated, but you and Fat Dino kept me going
@sucukluomlet4665 3 years ago
When I run any ML-Agents project I see this error: "Missing Profiler.EndSample (BeginSample and EndSample count must match): ApplyTensors. Previous 5 samples: GC.Alloc, ApplyTensors, GC.Alloc, Barracuda.PeekOutput, FetchBarracudaOutputs." Can someone help?
@_VeonAlmeida 1 year ago
Hi, can I train it for a push-up counter? I'm thinking of training it to detect a proper push-up. Can Unity ML-Agents help with this?
@guridoraccoon6375 3 years ago
Always a delight to see you posting
@CodeMonkeyUnity 3 years ago
💬 The Machines have EYES! Adding vision to your AI is surprisingly easy, although it comes at the cost of much longer training times. I've already covered quite a few of the base mechanics of ML-Agents; what example projects would you like to see?
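For context, wiring up the built-in camera sensor from code looks roughly like the sketch below. It is normally added in the Inspector instead, and `visionCamera` is a placeholder for whatever camera you point at the scene; exact property names may vary slightly between ML-Agents versions.

```csharp
using Unity.MLAgents.Sensors;
using UnityEngine;

// Attach alongside your Agent; the sensor feeds downscaled frames to the policy.
public class VisionAgentSetup : MonoBehaviour
{
    [SerializeField] private Camera visionCamera; // the dedicated "AI eye" camera

    private void Awake()
    {
        var sensor = gameObject.AddComponent<CameraSensorComponent>();
        sensor.Camera = visionCamera;
        sensor.Width = 20;       // tiny resolution keeps the observation count low
        sensor.Height = 20;
        sensor.Grayscale = true; // 20x20x1 = 400 observations instead of 1200 for RGB
    }
}
```

The small grayscale resolution is the point: every extra pixel channel is another input the network has to learn from, so training time grows quickly with image size.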
@meowththatsright7881 3 years ago
Tactical shooter AI with machine learning like Sebastian did? But he didn't show us a tutorial
@stalinthomas9850 3 years ago
An RTS game that the ML-Agent must learn to play?
@v1rusAnon 1 year ago
Can you attach the camera to the 2D character itself, in a 2D game?
@tudormuntean3299 3 years ago
Been trying to find something like this for ages. Thanks!
@enescaglar7326 3 years ago
When you do something, you do it best. This is the best AI tutorial series on the Internet :)
@CodeMonkeyUnity 3 years ago
Glad you like them!
@SangeetaYadav-pv8og 3 years ago
I was looking for this, thanks dude!
@RavenMinis 3 years ago
Dude, I love your content so much
@allaboutpaw9396 3 years ago
Thank you for your contribution. I really appreciate your videos.
@leonardo6631 2 years ago
Thank you so much for the great video!
@valor36az 1 year ago
Great tutorial
@PolymathAtif 3 years ago
You are a life saver ☺️💜👍
@PolymathAtif 3 years ago
He ❤️d my comment, I am so happy
@xetra1155 1 year ago
AWESOME VIDEO GOD DAMN IT
@mypaxa003 3 years ago
This is crazy. Why have I never thought about simplifying the image before feeding it to the AI? Really cool to see a combination of two AIs.
@mypaxa003 3 years ago
Please make a video about drawing gizmos. I'm really interested in how to draw transform gizmos without an object (like for changing vertex positions in ProBuilder).
@vo_sk 1 year ago
Thank you so much for this AI playlist, it's very inspiring! Are you going to make some new videos on this topic in 2022? It would be so amazing :) Thanks again!
@CodeMonkeyUnity 1 year ago
Yup, I'd love to revisit ML sometime in the future to see what's changed, just need to find the time
@RockoShaw 2 years ago
First of all, thanks for your awesome tutorials. So a Vision sensor acts as an Action sender and it works for learning. So if I wanted to allocate inventory to orders, would inventory be an Action in some form of discrete numbers?
@Diego0wnz 3 years ago
Damn, this combined with the YOLO object detection algorithm could be really cool for real-life applications
@nickgennady 3 years ago
For grayscale, can you have different gray values for different objects?
@roshanthapa1297 3 years ago
Debug.Log("Awesome"); 😂 Now we are definitely going for AI. Not only in robots and vehicles but also in game dev, it's really something with huge potential for the future; every improvement brings absolute 🔥🔥 results.
@Build_the_Future 2 years ago
How do I use the --initialize-from flag to start training from an older step count?
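The flag in question is passed to the `mlagents-learn` command line; a typical invocation might look like this (the config path and run IDs are placeholders for your own):

```shell
# Start a fresh run, but initialize the network weights
# from the checkpoint of a previous run's run-id.
mlagents-learn config/trainer_config.yaml --run-id=VisionRun2 --initialize-from=VisionRun1
```

Note this starts a new run seeded with the old weights; to continue the same run and step count, `--resume` with the original run-id is the usual route.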
@user-js3ze4lc4v 6 months ago
How can I create an application for human body tracking (and placing a skeleton over the body) in AR for Android? Can you share your thoughts on this?
@v1rusAnon 1 year ago
How do I attach the camera to the agent itself in a 2D game?
@v1rusAnon 1 year ago
What other ways are there to attach ML-Agents to the 2D character itself?
@Annin_Mochineko 11 months ago
Is it possible to simplify the input image into multiple colors? I want to use it for robot navigation, simplifying the camera images into target, obstacle, and background. That way, when using the model in real life, I can use image segmentation to feed in simplified images without having to model the complex environment when creating the training environment. (Sorry for my crappy English since it's not my first language.)
@DavidLeahy100 1 year ago
I don't see any VisionCamera culling layer in my Unity 2019.4.13f1
@jewelthomas7086 1 year ago
How do you make an AI in a car game where the AI can reverse the car when it is stuck against an obstacle and get back on track? Any ideas?
@WorldEnder 3 years ago
For the 3D example, could using a top-down camera and teaching the AI in 2D be considered a simplified approach?
@CodeMonkeyUnity 3 years ago
You could indeed have a top-down camera and feed that to the AI along with actions to move, and it would learn to get to an animal and identify it. However, that would require a massive amount of training time, something like 100+ million steps.
@dnscdnsc6121 3 years ago
Hi sir, new sub here. I was just wondering, how do you reference a GameObject in another scene?
@pixboi 1 year ago
Do you know if there is a way of writing the texture yourself, without a camera? It would be useful for situations where you have a class representation of tiles, for example, and you want to input the agent's surroundings without a camera.
@CodeMonkeyUnity 1 year ago
You can manually write to a texture with SetPixel(); I used something like that here unitycodemonkey.com/video.php?v=Xss4__kgYiY unitycodemonkey.com/video.php?v=ZRRc7J-OwGo
@pixboi 1 year ago
@@CodeMonkeyUnity Sorry, I meant in the ML-Agents context. I found out you can make your own sensor and write observations into it with ObservationWriter
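A custom sensor along those lines might look like the sketch below, written against the recent ML-Agents `ISensor` interface (member names can differ between package versions; `TileGridSensor` and its tile encoding are made up for illustration):

```csharp
using Unity.MLAgents.Sensors;

// Minimal custom sensor: writes a 5x5 tile grid directly, no camera involved.
public class TileGridSensor : ISensor
{
    private readonly float[] tiles = new float[25]; // e.g. 0 = empty, 1 = wall

    public ObservationSpec GetObservationSpec() => ObservationSpec.Vector(25);

    public int Write(ObservationWriter writer)
    {
        // Copy the tile values into the observation buffer.
        for (int i = 0; i < tiles.Length; i++)
            writer[i] = tiles[i];
        return tiles.Length; // number of floats written
    }

    public byte[] GetCompressedObservation() => null;
    public CompressionSpec GetCompressionSpec() => CompressionSpec.Default();
    public void Update() { }
    public void Reset() { }
    public string GetName() => "TileGridSensor";
}
```

To attach it to an agent you would expose it through a `SensorComponent` subclass on the same GameObject, so ML-Agents picks it up automatically.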
@bsdrago 3 years ago
I am always commenting here about how much I like these videos, and I want a COURSE =) But I have a doubt: it seems correct to think that, if I have appropriate sensors/data, I can train an AI to be unbeatable in a game. But we are talking about games, and these are for people. It is frustrating to play a game where you always lose. How do I create artificial stupidity? =) People have to beat the computer, otherwise the game is bad =)
@CodeMonkeyUnity 3 years ago
When you're training the AI it will periodically store checkpoint brains. So you could train your AI to be superhuman, then go back and use the brain from a few checkpoints before that point. Making the AI dumber is definitely an interesting topic; as you said, it's no fun fighting against something and always losing.
@bsdrago 3 years ago
@@CodeMonkeyUnity I understand about checkpoints, but my point is how to create something that is not "offensive" to the player. An AI that hits 100% of the time is just as frustrating as an AI from an earlier checkpoint that gets within half a meter of the player and "shoots up" (stupidly missing the shot). It seems to me that good game balance means finding a point where the AI does not hit 100% of the time, without making a stupid mistake at a moment when no one would. I think this discussion is fantastic, and I am concerned with the construction of these sensors to get good gameplay. Maybe missing at half a meter is something not even expected from the computer, but it's OK if it misses a sniper shot from 200m away =)
@saifkhaled1914 1 year ago
Core i7 4810MQ, 2.8GHz (3.8GHz with turbo boost), is it enough for training AI?
@bikram2955 1 year ago
I just wanted to ask: is it possible to use a first-person view of the environment, like a self-driving car, instead of raycasting? I want to train in a way that transfers easily to other domains and eventually to the real world. Have you tried training with a first-person RGB-D view from a car or a similar example?
@CodeMonkeyUnity 1 year ago
Sure, that would work, but it would make the model much more complex and much more difficult to train, as opposed to a handful of raycasts. As long as the visuals are somewhat realistic, or the real-world car shares some common virtual shader, then learning in the virtual world should transfer to the real world.
@bikram2955 1 year ago
@@CodeMonkeyUnity The objective of my work is just exploration, in a way that the agent tries to discover new places. I am thinking of using curiosity-driven reinforcement learning. However, the challenge, as I described, is sim-to-real. I would need input commonality between Unity and the real world. One approach I am considering is using SIFT features of the visuals, which give image descriptors rather than a dense representation. Even so, I think I need to make an environment that looks like the real world. Do you know of, or have you used, a real-world captured mesh in Unity? Can you point me in the right direction?
@ahmedelborki5982 3 years ago
I like your videos, especially the ML-Agents ones. I have a question: I am currently working on an endless runner with ML-Agents, but the agent keeps losing. Do you have any advice on how to make it work better? Thanks, my regards
@CodeMonkeyUnity 3 years ago
It is all a matter of designing your rewards correctly and letting it train a lot. In your case, give it some observations based on where the platforms are and give it a reward based on distance.
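That advice translates into ML-Agents code along these lines; the sketch below is hypothetical (the field names, fall threshold, and reward scale are made up), and the movement actions themselves are omitted for brevity:

```csharp
using Unity.MLAgents;
using Unity.MLAgents.Actuators;
using Unity.MLAgents.Sensors;
using UnityEngine;

// Hypothetical reward shaping for an endless runner agent:
// observe the next platform, reward forward progress, punish falling.
public class RunnerAgent : Agent
{
    [SerializeField] private Transform nextPlatform;
    private float lastZ;

    public override void OnEpisodeBegin() => lastZ = transform.position.z;

    public override void CollectObservations(VectorSensor sensor)
    {
        // Where the next platform is, relative to the agent.
        sensor.AddObservation(nextPlatform.position - transform.position);
    }

    public override void OnActionReceived(ActionBuffers actions)
    {
        // (apply the movement actions here)
        AddReward(transform.position.z - lastZ); // reward distance covered this step
        lastZ = transform.position.z;
        if (transform.position.y < -1f)          // fell off the platforms
        {
            AddReward(-1f);
            EndEpisode();
        }
    }
}
```

The key design point is that the distance reward is dense (given every step), which makes learning far faster than only rewarding the agent when it survives a whole run.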
@Hellomyfriendlyfriends 3 years ago
When will you have a new course on Udemy? Loved the tower defense one, thanks
@CodeMonkeyUnity 3 years ago
Not sure; right now I want to focus on the videos, so maybe in 2-3 months. I'm glad you liked the course!
@In-N-Out333 3 years ago
How come in this project you didn't create multiple environments to train in parallel?
@kamillatocha 3 years ago
How does it know the animal's position, to move to it and rotate so it's facing it?
@CodeMonkeyUnity 3 years ago
That is handled through classic AI, just a simple list of all the animals and a simple mover script.
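A "simple mover" of that kind might look like the following; this is a hypothetical sketch, not the project's actual script (`animals`, speeds, and the nearest-target rule are assumptions):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Classic AI side of the demo: pick the nearest animal, turn toward it, move.
public class SimpleAnimalMover : MonoBehaviour
{
    [SerializeField] private List<Transform> animals;
    [SerializeField] private float moveSpeed = 3f;   // units per second
    [SerializeField] private float turnSpeed = 360f; // degrees per second

    private void Update()
    {
        Transform target = GetClosestAnimal();
        if (target == null) return;

        // Rotate smoothly toward the target, then move forward.
        Vector3 toTarget = (target.position - transform.position).normalized;
        Quaternion look = Quaternion.LookRotation(toTarget);
        transform.rotation = Quaternion.RotateTowards(transform.rotation, look, turnSpeed * Time.deltaTime);
        transform.position += transform.forward * moveSpeed * Time.deltaTime;
    }

    private Transform GetClosestAnimal()
    {
        Transform closest = null;
        float best = float.MaxValue;
        foreach (Transform animal in animals)
        {
            // Compare squared distances to avoid a square root per animal.
            float dist = (animal.position - transform.position).sqrMagnitude;
            if (dist < best) { best = dist; closest = animal; }
        }
        return closest;
    }
}
```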
@RockoShaw 2 years ago
I also have one question: in the Pig/Sheep example you added the CameraSensor. What for? In OnActionReceived you check whether it's a Sheep or a Pig based on the last transform check.
@CodeMonkeyUnity 2 years ago
It's just to showcase how the AI vision works. If you had this exact specific scenario, organizing sheep and pigs, then using ML is overkill
@RockoShaw 2 years ago
@@CodeMonkeyUnity Thanks a lot for your response; asking for learning purposes, but thanks for the clarification. I have a question regarding the Camera Sensor. You set the width and height to 20 and 20 respectively, and it matches the discrete branch size. When you get an action after a decision request, what does the discrete branch contain? The position of a non-black pixel, or what exactly will it return? What if you wanted the colors it sees in RGB? Also, with this 20x20 size, does that mean everything the camera attached to this CameraSensor sees will be scaled/resized to 20x20 pixels? You also mention that a 20x20 grayscale CameraSensor would be 400 observations, but OnActionReceived is called only 17 times when I call RequestDecision. I might be confusing observations with actions, but if observations are like the VectorObservations you showed in other videos, I would expect the Vector Observation Space Size to be larger than 0 in the duck shooting example. Or is the Camera Sensor's observation bundled inside the ML-Agents framework?
@mohammedabdelsalam1010 3 years ago
Can you make a video about AR technology?
@CodeMonkeyUnity 3 years ago
AR is an interesting topic that I'd love to research at some point, just don't know when
@mohammedabdelsalam1010 3 years ago
@@CodeMonkeyUnity God willing, soon 😉
@Shakor77 7 months ago
Is it possible to use ML to have the AI train over a gaming session, rather than training it "offline" and then loading the trained model? The reason is that I want to see if you can use ML to learn to play against players rather than against the environment.
@CodeMonkeyUnity 7 months ago
As far as I know there is no way to train a model outside of the Unity Editor, so you can't train it in a build. Maybe you could look into other, non-Unity methods for training, and then perhaps dynamically load the newly trained model and use that?
@sinner1263 3 years ago
I watched 3:34, 0:30, 4:32, and 0:15 of ads without watching the video. How? I'm playing the video while I'm eating. LOL
@r1pfake521 3 years ago
What do you mean?
@octanios540 3 years ago
Can you do a tutorial on how to make different characters have different abilities, like Apex Legends?
@rickybloss8537 3 years ago
You know you're basically asking him to make your game for you.
@r1pfake521 3 years ago
Implement your abilities as ScriptableObjects, then just make different character prefabs and drag & drop the abilities you want onto the character prefabs. There are multiple tutorials about exactly this topic, even an official one from Unity themselves about character selection and ability systems.
@octanios540 3 years ago
@@r1pfake521 thank you
@skinnyboystudios9722 3 years ago
Can you do a video on classic AI vs reinforcement learning?
@CodeMonkeyUnity 3 years ago
It's tricky to do a general video because it's all highly dependent on what you want to do. Some things are easier to do with ML, some easier with classic AI.
@skinnyboystudios9722 3 years ago
@@CodeMonkeyUnity Yes, the video could compare different classic vs ML methods, and when to use one over the other. Just explanations, not necessarily a tutorial.
@crazyfox55 3 years ago
Why use ML-Agents instead of just using the bird's transform? It doesn't seem like there is anything for the AI to learn, when a hand-coded solution is faster and more accurate here. Are there better examples that I'm just not seeing?
@CodeMonkeyUnity 3 years ago
Yeah, it's just a demo to showcase how to use vision with ML-Agents. If my goal was just to make an AI to shoot the bird, I would go with classic AI instead of ML.
@sundarakrishnann8242 3 years ago
The thumbnail is from the boss fight? xD
@CodeMonkeyUnity 3 years ago
Heh, it is! I wanted something to showcase "eyes" and that seemed to look good!
@xetra1155 1 year ago
Does the sensor work with imitation learning?
@CodeMonkeyUnity 1 year ago
Hmm, good question, I'm not sure. I covered imitation learning a long time ago here, but I don't remember if it would work with a Camera sensor unitycodemonkey.com/video.php?v=supqT7kqpEI
@xetra1155 1 year ago
@@CodeMonkeyUnity Then I guess you know what to do for the next video, my friend
@rikrishshrestha5421 3 years ago
I thought the CameraSensor was coded by him. It was provided by Unity in the ML-Agents package.
@CodeMonkeyUnity 3 years ago
Yes, it's one of the built-in sensors
@unoriginalcringygaming3002 3 years ago
The machine has found you...
@unoriginalcringygaming3002 3 years ago
R
@unoriginalcringygaming3002 3 years ago
U
@unoriginalcringygaming3002 3 years ago
N
@RoadHater 2 years ago
Is there any way to have a black-and-white camera instead of grayscale? Even fewer observations
@BrainSlugs83 7 months ago
You would still need one float per pixel; you would just have less usable resolution in each pixel. You can accomplish this with quantization (e.g. Q2 gives 2 bits per pixel, essentially 4 shades of gray, and roughly 16x the inferencing power).
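The quantization the reply describes can be sketched as below; `PixelQuantizer` is a hypothetical helper, not part of ML-Agents. Note the observation is still one float per pixel either way; quantizing just reduces how much information that float carries.

```csharp
using UnityEngine;

public static class PixelQuantizer
{
    // Snap a 0..1 grayscale value to n bits of precision
    // (bits = 2 gives 4 levels: 0, 1/3, 2/3, 1).
    public static float Quantize(float value, int bits)
    {
        int levels = (1 << bits) - 1; // highest level index, e.g. 3 for 2 bits
        return Mathf.Round(Mathf.Clamp01(value) * levels) / levels;
    }
}
```

A true black-and-white camera would be the `bits = 1` case: every pixel snaps to 0 or 1 before it reaches the network.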
@myelinsheathxd 3 years ago
I know it's not easy, but I think the developers should try to add a technology that visualizes the ML brain, to debug and get a sense of how the brain logic is working in our agent!
@CodeMonkeyUnity 3 years ago
Even if you could visualize it you couldn't possibly understand it. It would simply show hundreds of dots, each with a value between -1 and +1. That's too much noise to be able to debug
@myelinsheathxd 3 years ago
@@CodeMonkeyUnity Yeah, but we need something else to understand logic-forming behavior, like in human brain analysis. During an fMRI scan of the human brain, it's easy to see how basic logic works by observing the language areas of the brain. A person is asked to think about one word several times under several conditions, and the data shows a specific area working consistently. This means the brain uses that part for that specific word, and close to that area there are regions for related words, like synonyms, etc. Again, it's not easy, since I haven't created a custom ML algorithm. But we need something like fMRI technology to understand machine learning's learning principles in real time.
@nicofacto2424 3 years ago
And of course we will never see ShootTargetEnvironment.cs, so... thx... @5:02
@CodeMonkeyUnity 3 years ago
It spawns the prefab and does a raycast. All the code is included in the project files.
@nicofacto2424 3 years ago
@@CodeMonkeyUnity Well, I got them, but ShootTargetEnvironment.cs(32,32): error CS0117: 'UtilsClass' does not contain a definition for 'GetMouseWorldPositionZeroZ'; this is the only error for me