I'm creating a Python raytracer that uses only PIL to generate photorealistic images at AAA processing speed. However, it looks like I will soon need AI to create the precalculations, since I'm also contracted to work on a lot of other Python projects. Basically, it generates images by double-layering them: the first buffer holds 128x128 RGB tiles, PIL-blurred at radius 50, in 9 main colors; the second buffer is RGBA and exactly like the first, except resized to 32x32 to sharpen the first buffer's image. Each combination is pre-rendered and sorted into a Python dict() to be laid out by Wave Function Collapse. Roughly like the sketch below.
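A minimal PIL sketch of that two-buffer pre-render, assuming a placeholder palette; the function names and color values are made up, not the actual setup:

```python
# Sketch of the two-buffer pre-render described above. Tile sizes, blur
# radius, and "9 main colors" come from the comment; the palette values
# and names are hypothetical placeholders.
from itertools import product
from PIL import Image, ImageFilter

PALETTE = [
    (255, 0, 0), (0, 255, 0), (0, 0, 255),
    (255, 255, 0), (255, 0, 255), (0, 255, 255),
    (255, 128, 0), (128, 0, 255), (255, 255, 255),
]  # placeholder "9 main colors"

def make_blur_tile(color, size=128, radius=50):
    """First buffer: a solid 128x128 RGB tile blurred with PIL."""
    tile = Image.new("RGB", (size, size), color)
    return tile.filter(ImageFilter.GaussianBlur(radius))

def make_sharpen_tile(color):
    """Second buffer: same tile as RGBA, resized down to 32x32."""
    return make_blur_tile(color).convert("RGBA").resize((32, 32))

# Pre-render every (base, overlay) color combination into a dict,
# ready for a Wave Function Collapse pass to lay out.
tiles = {
    (base, overlay): (make_blur_tile(base), make_sharpen_tile(overlay))
    for base, overlay in product(PALETTE, repeat=2)
}
```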
Would love to see an updated video where you use a secondary GPU, running the depth and object-detection networks on separate GPUs to fix the reaction-time issues. Or find a lighter depth network, or maybe even make your own as its own video and then use it in an update. Either way, really enjoyed this video.
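Something like this, with placeholder networks standing in for the real depth and detection models; a sketch of the idea, not the video's actual code:

```python
# Hedged sketch of the two-GPU idea: pin each network to its own device
# and launch both forward passes concurrently so neither blocks the other.
import threading
import torch
import torch.nn as nn

# Placeholder networks standing in for the real depth / detection models.
depth_net = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1)).to("cuda:0").eval()
detect_net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1)).to("cuda:1").eval()

def run_both(frame: torch.Tensor):
    """Run depth and detection in parallel, one per GPU."""
    results = {}

    def infer(name, net, device):
        with torch.no_grad():
            results[name] = net(frame.to(device, non_blocking=True))

    threads = [
        threading.Thread(target=infer, args=("depth", depth_net, "cuda:0")),
        threading.Thread(target=infer, args=("detect", detect_net, "cuda:1")),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results["depth"], results["detect"]

# Usage: depth, detections = run_both(screen_tensor)  # (N, 3, H, W)
```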
Just some ideas for you or anyone who wants to work on this. ULTRAKILL lets you turn on enemy outlines that can trigger based on distance, or you can have them always on. You can even make outlines show through walls, but that's probably just going to confuse the AI. Another thing you can do with outlines is make one cover an entire enemy, so you can pick a color that's really easy for the AI to detect, like a bright pink or purple. The problem with that is the AI can't tell which enemy is which, only that it's an enemy. If you want the AI to recognize each enemy, you can make a custom color palette and enable it in the settings somewhere; custom palettes let you change every color on every enemy, so you can give each enemy type its own single color for the AI to pick out far more easily. The remaining problem is that the AI will still get mad at corpses (outlines only show up on living enemies). A rough sketch of the color-mask idea is below. I made this comment at 3 AM, so it probably doesn't read that well, but I hope it can still help someone who wants to improve the AI.
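For anyone trying the flat-color idea, detection could be as simple as a color mask per enemy type; the BGR values and enemy names here are hypothetical, you'd match them to whatever custom palette you actually set in-game:

```python
# Rough sketch of detecting solid-color enemy outlines by masking each
# palette color. All colors and enemy names below are made up.
import cv2
import numpy as np

# Hypothetical palette: one flat color per enemy type.
ENEMY_COLORS = {
    "filth": (203, 0, 255),  # bright pink (BGR)
    "stray": (255, 0, 128),  # purple (BGR)
}

def find_enemies(frame_bgr: np.ndarray, tolerance: int = 20):
    """Return bounding boxes per enemy type by masking its palette color."""
    boxes = {}
    for name, color in ENEMY_COLORS.items():
        lo = np.clip(np.array(color) - tolerance, 0, 255).astype(np.uint8)
        hi = np.clip(np.array(color) + tolerance, 0, 255).astype(np.uint8)
        mask = cv2.inRange(frame_bgr, lo, hi)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        boxes[name] = [cv2.boundingRect(c) for c in contours
                       if cv2.contourArea(c) > 50]  # drop tiny specks
    return boxes
```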
Is there no modding API for ULTRAKILL or Steam? I mean, surely there's a way to get input from V1's perspective without it being JUUUST screenshots, right?
This is almost just V2. Make it learn how the other weapons work, further develop the phases, and this could probably compare to an actual boss in the game.
My friend made an AI that could literally play any game. Its name was Bob. Bob has been retired and is just going to be used as a reference to make a new AI. Sadly, Bob had a fucking personality.
Feedback: 1. I added a neon glow to all of the text, even though YT compression kind of ruined it; does it look good? I like the sharp contrast between bright text and a black background, but I also really love this subtle bloom. It helps the content stand out without making the text hard to read. I'm also a big fan of the ASCII progress bar lol. Very cool. 2. I tried to write the script in more of a [to solve this problem, we can do...] format instead of the usual [I solved this problem by doing...]; was this a good decision? Yes, that's it! It's a pretty efficient way of making sure you don't go on too long about tangents that should become their own video, and it helps explain your process to the viewer.
Can't you hook into the game's memory and get real game data? Then you could train a model on runtime game data and player inputs. It would be interesting to use the scoring system as an actual reward/punishment signal, or maybe your own artificial one that pushes it to use weapons in a specific way, to make it not just good at playing but also good at playing cool. Rough sketch of what I mean below.
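A very rough sketch of the memory-hook idea using the pymem library; every address here is made up, you'd have to find the real ones with something like Cheat Engine, and they'd change between game updates:

```python
# Read live game values out of process memory and turn them into an RL
# reward. Addresses below are hypothetical placeholders.
from pymem import Pymem

STYLE_SCORE_ADDR = 0xDEADBEEF  # hypothetical address of the style meter
HEALTH_ADDR = 0xCAFEBABE       # hypothetical address of player health

pm = Pymem("ULTRAKILL.exe")

def read_state():
    """Snapshot the game values we care about for the reward signal."""
    return {
        "style": pm.read_int(STYLE_SCORE_ADDR),
        "health": pm.read_float(HEALTH_ADDR),
    }

def reward(prev, curr):
    # Reward style gains (encourages "playing cool"), punish damage taken.
    return (curr["style"] - prev["style"]) - 10.0 * max(
        0.0, prev["health"] - curr["health"]
    )
```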
I was almost waiting for a "Directed by 8AAFFF" at the end 😎 Production quality is off the charts on your videos (as always), man. And wow, what a project, and open-sourced. 😍 Even sponsored by Boston Dynamics, dude, that's so cool. Can't wait for your future projects 👏
I think one of the important steps is to make it see more than one frame at a time. Most AIs I've seen playing games are basically lobotomized, forgetting everything 60 times per second (or whatever the framerate is). There's no object permanence, no mental objective list; it's basically like someone new walking up to the computer, seeing one picture, and deciding what to do before the next person walks in, with no communication between them. An easy fix I've seen before is to give it a few inputs and outputs that loop back on themselves (rough sketch below); maybe the AI can be smart enough to figure out how to store useful data in them?
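The loop-back idea in code: give the policy a few extra outputs that get fed straight back in as inputs on the next frame, so it can learn to stash state there. Network shape and sizes are placeholders, not anyone's actual model:

```python
# Sketch of a policy with recurrent "memory" outputs that persist
# across frames. All sizes and the tiny MLP are made up.
import torch
import torch.nn as nn

N_OBS, N_ACTIONS, N_MEMORY = 64, 8, 16

class LoopbackPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_OBS + N_MEMORY, 128), nn.ReLU(),
            nn.Linear(128, N_ACTIONS + N_MEMORY),
        )

    def forward(self, obs, memory):
        out = self.net(torch.cat([obs, memory], dim=-1))
        # Split the output into actions and the next memory state.
        return out[..., :N_ACTIONS], torch.tanh(out[..., N_ACTIONS:])

policy = LoopbackPolicy()
memory = torch.zeros(1, N_MEMORY)   # persists across frames
for frame in range(3):              # stand-in game loop
    obs = torch.randn(1, N_OBS)     # stand-in screen features
    actions, memory = policy(obs, memory)
```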
For better maze navigation you could use SLAM (Simultaneous Localization and Mapping). There are Python libraries for it, although I've never used them. SLAM is made to build a map of your environment while positioning yourself within it. To use SLAM you will probably need a depth field (which you already have) and an estimate of your speed/traveled distance between time steps. A toy sketch of the mapping half is below.
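To make the SLAM idea concrete, here's a toy occupancy-grid update from a depth scan plus an estimated pose; real libraries (e.g. BreezySLAM) also correct the pose over time, and all the parameters here are made up:

```python
# Toy sketch of the mapping half of SLAM: project a 1D depth scan into
# a 2D occupancy grid given an estimated pose. Grid size, cell size,
# and field of view are arbitrary placeholders.
import numpy as np

GRID = np.zeros((200, 200), dtype=np.uint8)  # 0 = unknown, 1 = occupied
CELL_SIZE = 0.1                # meters per grid cell
ORIGIN = np.array([100, 100])  # grid coords of the world origin

def update_map(pose_xy, pose_theta, depth_scan, fov=np.pi / 2):
    """Mark cells hit by each depth ray from the current estimated pose."""
    angles = np.linspace(-fov / 2, fov / 2, len(depth_scan)) + pose_theta
    xs = pose_xy[0] + depth_scan * np.cos(angles)
    ys = pose_xy[1] + depth_scan * np.sin(angles)
    cols = (xs / CELL_SIZE + ORIGIN[0]).astype(int)
    rows = (ys / CELL_SIZE + ORIGIN[1]).astype(int)
    ok = (rows >= 0) & (rows < 200) & (cols >= 0) & (cols < 200)
    GRID[rows[ok], cols[ok]] = 1

# Example: agent at (0, 0) facing +x, with a flat wall 3 m ahead.
update_map(np.array([0.0, 0.0]), 0.0, np.full(90, 3.0))
```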