
Camera Vision | Unity ML-Agents 

Immersive Limit
10K subscribers · 12K views

Published: 6 Oct 2024

Comments: 40
@leonardp8517 · 3 years ago
Ooh, exactly what I was hoping for.
@ImmersiveLimit · 3 years ago
Thank you! I’ll try to keep including helpful info like that.
@cem_kaya · 1 year ago
Thank you!
@FuZZbaLLbee · 3 years ago
Nice to see the visual perception being used. Maybe add a penalty for rotating from left to right to make the behavior smoother.
@ImmersiveLimit · 3 years ago
That’s a tough one, hard not to accidentally penalize all rotations, some of which are necessary.
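For readers who want to experiment with this idea anyway, here is a minimal sketch of the kind of per-step turn penalty being discussed. It is not from the video's project: the class name, the penalty scale field, and the assumption that continuous action index 1 is the turn input are all illustrative.

```csharp
using Unity.MLAgents;
using Unity.MLAgents.Actuators;
using UnityEngine;

// Hypothetical agent subclass showing a small turn penalty.
public class TurnPenaltyAgent : Agent
{
    [SerializeField] float turnPenaltyScale = 0.001f; // keep tiny so necessary turns stay worthwhile

    public override void OnActionReceived(ActionBuffers actions)
    {
        // Assumed: continuous action 1 is the left/right turn input in [-1, 1].
        float turn = actions.ContinuousActions[1];

        // Penalize turning in proportion to how hard the agent turns this step.
        AddReward(-turnPenaltyScale * Mathf.Abs(turn));

        // ... existing movement, jumping, and coin-reward logic would go here ...
    }
}
```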
@crazyfox55 · 3 years ago
I think it's just turning left and right because its body covers the coin. If the cactus were invisible, I don't think it would turn so much.
@myelinsheathxd · 3 years ago
Great job! Until now I wasn't interested in training any agents, because PC games and their AI aren't challenging for me. Now that I have a mobile VR headset, I find that all of this ML-Agents logic fits perfectly for VR interactive games and their AI. Mainly, it's easy to feel how agents interact with you while you interact with them in real time in VR.
@medhavimonish41 · 3 years ago
Please post the GitHub link here when it's done. I also want to get my hands on AR and VR, or just share your GitHub link so I can follow you.
@SamDutter · 3 years ago
This is exactly what has brought me here as well!
@cem_kaya · 1 year ago
How would I specify my own NN architecture instead of using the defaults in the config? For example, how would I define the network in PyTorch and have the agent optimize that?
@HoneyTribeStudios · 3 years ago
Nice tutorial, thanks for making it
@codesmells4449 · 3 years ago
Great video, it helped out a lot!
@MaxIme555 · 1 month ago
At 11:00 there are no errors; it's not moving because you're in heuristic mode :) (edit: seems like you figured it out at 15:00)
@JonatanGlader · 1 year ago
If you add some slowdown to the jump, and factor in the time it takes from spawning to collecting the coin, would it then maybe learn not to rotate this much, and also to jump only when needed?
@gun645 · 3 years ago
Hey, thanks for the video. Could I have a camera that isn't on the character but is a static object, like a roadside camera observing the environment, and use the info from that camera to control the character?
@ImmersiveLimit · 3 years ago
Yes, that would work.
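As a rough illustration of why this works: the Camera Sensor just needs a Camera reference, and that camera does not have to be a child of the agent. The snippet below is a sketch with illustrative names; in practice you can also simply add a Camera Sensor component in the Inspector and drag the static camera into its Camera slot.

```csharp
using Unity.MLAgents.Sensors;
using UnityEngine;

// Sketch: wire a fixed "roadside" camera into an agent's observations at startup.
public class RoadsideCameraSetup : MonoBehaviour
{
    [SerializeField] Camera roadsideCamera;   // static camera somewhere in the scene

    void Awake()
    {
        // The CameraSensorComponent lives on the agent's GameObject, but the
        // Camera it reads from can be anywhere in the scene.
        var sensor = gameObject.AddComponent<CameraSensorComponent>();
        sensor.Camera = roadsideCamera;
        sensor.SensorName = "RoadsideCamera";
        sensor.Width = 84;
        sensor.Height = 84;
        sensor.Grayscale = false;
    }
}
```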
@keyhaven8151 · 3 months ago
I have always had a question about ML-Agents: agents select actions randomly at the beginning of training. Can we incorporate human intervention into the training process to make them train faster? Is there a corresponding method in ML-Agents? Looking forward to your answer.
@ely0ryyi748 · 2 years ago
Hello, I am developing an agent that learns to move a box near a platform where a coin is placed. The agent must jump onto the box and then onto the platform to collect the coin. The problem is that in some runs the agent learns to jump onto the box and take the coin while walking backward. How could I penalize these actions so that it learns to pick up the coin walking forward? P.S. Your guide is very nice.
@vildauget · 3 years ago
Very nice guide, thank you! My instinctive question as soon as I saw your camera was whether to try a first-person view, or a different camera where the player (and any other unneeded objects and effects) is on a layer the camera is set not to draw. The second question that popped up: would initial training go faster if the platform and coin were closer to the agent, so it would bump into them earlier in its random dance? Would it be viable to use random placement and move them farther away as the agent gets smarter?
@ImmersiveLimit · 3 years ago
Thanks! Yes, you could try curriculum learning and gradually move the agent farther away. It probably would speed up training a bit.
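One way this is commonly wired up (a sketch, with an assumed parameter name of "coin_distance") is to define a curriculum-controlled value under environment_parameters in the trainer config, then read it in the scene when the episode resets and use it to decide how far away to place the coin:

```csharp
using Unity.MLAgents;
using UnityEngine;

// Sketch of curriculum-driven spawning. "coin_distance" is an assumed parameter
// name that the trainer's curriculum would increase lesson by lesson.
// Call ResetCoin() from the agent's OnEpisodeBegin.
public class CurriculumSpawner : MonoBehaviour
{
    [SerializeField] Transform coin;

    public void ResetCoin()
    {
        // Falls back to 2 m when no trainer/curriculum is connected (e.g. pressing Play in the Editor).
        float distance = Academy.Instance.EnvironmentParameters
            .GetWithDefault("coin_distance", 2.0f);

        // Place the coin somewhere on a circle of that radius around this object.
        Vector2 dir = Random.insideUnitCircle.normalized;
        coin.position = transform.position + new Vector3(dir.x, 0f, dir.y) * distance;
    }
}
```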
@maxfun6797 · 1 year ago
Does the camera's Output > Target Display setting matter?
@kyleme9697 · 3 years ago
Very interesting. As a person who is new to all of this... why would I use camera sensors instead of ray sensors? Because it's more sensitive? Or perhaps because it's more 3D?
@ImmersiveLimit · 3 years ago
More input data for decision making, plus a lot of people are interested in real-world applications using cameras. If you're just designing for games, ray casts are usually easier and perform better.
@kamillatocha · 3 years ago
Exactly what I need for my next project, because rays won't cut it.
@alizargarian-oq1gg · 1 month ago
Hey, first of all, many thanks for your helpful tutorial. Unfortunately, after pressing play to train, I get this error: mlagents_envs.exception.UnityObservationException: Decompressed observation did not have the expected shape - decompressed had (84, 84, 3) but expected [3, 84, 84]. I saw in a forum that I should update ML-Agents, but it is already up to date (3.0.0). Did you have this problem, or do you maybe have a solution for me?
@crazyfox55 · 3 years ago
Is there any way to set up two or more cameras for one agent? I'm curious whether I could make a security bot that looks through multiple security cameras. I could always have a monitor wall that renders all of the security cameras, but that seems like a roundabout way to get multiple camera inputs into the model. Thanks for the video, it was very helpful.
@ImmersiveLimit · 3 years ago
Yeah, you could just put two camera sensor components on child cameras. Should work fine, but might slow down training with a lot of cameras at once.
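A rough sketch of that setup (class and camera names are illustrative): one Camera Sensor component per camera, each with a unique sensor name. The cameras could just as well be fixed security cameras around the scene rather than children of the agent.

```csharp
using Unity.MLAgents.Sensors;
using UnityEngine;

// Sketch: give one agent two camera observations by adding one
// CameraSensorComponent per camera.
public class MultiCameraSensors : MonoBehaviour
{
    [SerializeField] Camera frontCamera;
    [SerializeField] Camera rearCamera;

    void Awake()
    {
        AddCameraSensor(frontCamera, "FrontCamera");
        AddCameraSensor(rearCamera, "RearCamera");
    }

    void AddCameraSensor(Camera cam, string sensorName)
    {
        var sensor = gameObject.AddComponent<CameraSensorComponent>();
        sensor.Camera = cam;
        sensor.SensorName = sensorName;  // must be unique per sensor on the agent
        sensor.Width = 84;
        sensor.Height = 84;
    }
}
```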
@jamesc2327 · 2 years ago
Hey, this is great. Is it possible to adapt this to turn-based experiments, such as checkers, where a single frame can be used rather than a full 30 fps? How would you adapt this to turn-based games?
@ImmersiveLimit · 2 years ago
I think there may be some examples of how to do this in Unity’s ML-Agents repo actually. It probably shouldn’t be much different from the 30 fps version, just have it only make a decision every turn.
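A sketch of that idea (hypothetical class, not from Unity's examples): leave the Decision Requester component off the agent and request a decision from your game loop only when it is the agent's turn.

```csharp
using Unity.MLAgents;
using UnityEngine;

// Sketch for turn-based control: with no Decision Requester component,
// the agent only acts when we explicitly ask for a decision.
public class TurnBasedController : MonoBehaviour
{
    [SerializeField] Agent boardAgent;

    void Start()
    {
        // Optional: take over stepping so nothing happens between turns.
        Academy.Instance.AutomaticSteppingEnabled = false;
    }

    // Call this from your game logic when it becomes the agent's turn.
    public void TakeAgentTurn()
    {
        boardAgent.RequestDecision();       // collect observations, queue a decision
        Academy.Instance.EnvironmentStep(); // advance one step so the action is delivered
    }
}
```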
@jamesc2327 · 2 years ago
@@ImmersiveLimit ok 🙏 thx for the reply ;)
@leozinho2r · 2 years ago
Hi! Great tutorial :D I followed it to apply it to my game, but for some reason I'm getting an error: "NullReferenceException: Object reference not set to an instance of an object Unity.MLAgents.Sensors.CameraSensor.ObservationToTexture (UnityEngine.Camera obsCamera, UnityEngine.Texture2D texture2D, System.Int32 width, System.Int32 height)". I don't understand why or how it's happening, and the error message is a bit useless because it doesn't say whether the null reference is the Camera object or the Texture2D object... I have linked a camera to the Camera Sensor, and looking into the Unity script where ObservationToTexture is called, it creates a new texture object in the constructor, so I'm really not sure what's happening... Do you have any idea what may be going on? Thanks in advance and keep up the great work!
@ImmersiveLimit · 2 years ago
You’ll probably have to debug the code with Visual Studio and figure out what is null
@leozinho2r · 2 years ago
@@ImmersiveLimit Hi! Thanks for the quick reply :) Problem is, the error is arising in ml-agents' code, not mine, and it seems one cannot edit their scripts (I tried adding debug statements but when I save there's a warning that the script was changed and so it reverts to a cached version instead, undoing my changes) :(
@ImmersiveLimit · 2 years ago
Another option: You can get the source code from the mlagents github repo, install the package manually using their instructions, and then you will be able to edit.
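Before editing the package source, a check from your own scripts can sometimes narrow down which reference is null. The snippet below is an illustrative sanity check, not part of ML-Agents: it only verifies that each Camera Sensor on the agent still has a Camera assigned at runtime, which is one plausible cause of that exception.

```csharp
using Unity.MLAgents.Sensors;
using UnityEngine;

// Illustrative sanity check: attach to the agent and watch the Console.
public class CameraSensorSanityCheck : MonoBehaviour
{
    void Start()
    {
        foreach (var sensor in GetComponents<CameraSensorComponent>())
        {
            if (sensor.Camera == null)
                Debug.LogError($"Camera Sensor '{sensor.SensorName}' has no Camera assigned", this);
        }
    }
}
```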
@chamikagunarathna6575 · 2 years ago
I was searching for this. Thank you very much for this content!! I also have a question. Most of the decisions here are made for continuous actions. Let's say we have to implement something like a baseball swing: how do we get the timing right?
@ImmersiveLimit · 2 years ago
I wish I knew. I’ve never managed to get agents working on this kind of task.
@뇽뇽-x6f · 3 years ago
Thank you for the guide video. It's really helpful (●'◡'●) I have a question: could I use two cameras for ML-Agents learning?
@ImmersiveLimit · 3 years ago
Thanks! Yes, you should be able to hook up a second camera with a second camera sensor component
@뇽뇽-x6f · 3 years ago
@@ImmersiveLimit Ohhhh it works!!! Thank you so much :)