check out HubSpot's Free AI Task Delegation Playbook here! clickhubspot.com/s6ef As for the new paper GameGen-O I've shown at 12:00, no actual interactive demo is shown. Looks like the demo is heavily cherry-picked. So yes, GameNGen is by far the closest example to an actual AI game engine. edit: Looks like they deleted their GameGen-O project, with no reason specified. Odd for sure.
Very interesting! I imagine they were trying to make a point by not providing any game state, but you might achieve better results if you provided some basic game state (e.g. location history, health, time in level) as inputs and outputs.
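The suggestion above could be sketched roughly like this. This is a hypothetical illustration, not anything from the paper: the dimensions, the `build_model_input` helper, and the state fields are all assumptions, just to show what "conditioning on explicit game state alongside the frames" might look like at the input level.

```python
import numpy as np

# Hypothetical sketch: augment a frame-prediction model's input with
# explicit game state (position, health, time-in-level) alongside the
# encoded previous frames. All names and shapes here are assumptions.

rng = np.random.default_rng(0)

FRAME_DIM = 64   # assumed latent size of the encoded previous frames
STATE_DIM = 5    # e.g. x, y, view angle, health, seconds in level

def build_model_input(frame_latent: np.ndarray, game_state: np.ndarray) -> np.ndarray:
    """Concatenate frame latents with a normalized game-state vector,
    so the model is conditioned on state it can't infer from pixels alone."""
    state_norm = (game_state - game_state.mean()) / (game_state.std() + 1e-8)
    return np.concatenate([frame_latent, state_norm])

frame_latent = rng.normal(size=FRAME_DIM)
game_state = np.array([12.0, 48.0, 90.0, 75.0, 33.5])  # x, y, angle, health, seconds

model_input = build_model_input(frame_latent, game_state)
print(model_input.shape)  # (69,)
```

The same state vector could also be predicted as an output, giving the model an explicit memory that survives staring at a wall.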
@float32 that could be quite helpful. In general my concern is that if you went AFK for a minute staring at a wall it could lose all context. You could have a low res "rear view" camera which is used to avoid that problem, and not displayed onscreen during play.
This technology could probably be tweaked to work a lot more effectively by buttressing the AI with some deterministic elements. The domain-specific generation would probably be much more stable if it had a proverbial Ariadne's thread to refer back to.
Honestly a very obvious use case for this would be DLSS-like frame generation, but with the ability to move the camera around and it not being locked strictly to source framerate and being able to generate a ton of intermediate frames.
Basically, people have to understand that the impressive part isn't when it acts as a copy machine, but when the new stuff it makes is actually useful, which is not the case for that DOOM model. One good way of understanding the generative AI wave that I figured out recently is to roughly compare it to lossy zip compression: your prompt is mapped to a "hash key" (an embedding, to be more true to the theory) that points to what to decompress. But since the compression was lossy, the result will not be 1:1 with the original data, but a weird blur that might happen to be good new stuff, and that's the part we should aim to improve.
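The lossy-compression analogy above can be made concrete with a toy. Everything here is made up for illustration (the "codebook", the fake `embed` function, the data): a prompt maps to an embedding key, the key selects an entry in a small compressed table, and "decompression" returns something that resembles, but never exactly matches, the original data.

```python
import numpy as np

# Toy numerical illustration of the "lossy zip" analogy (all data and
# names invented): prompt -> embedding key -> nearest codebook entry.
# The returned entry approximates the originals but copies none of them.

rng = np.random.default_rng(42)
originals = rng.normal(size=(100, 8))                 # the "training data"
codebook = originals.reshape(10, 10, 8).mean(axis=1)  # lossy: 100 -> 10 entries

def embed(prompt: str) -> np.ndarray:
    """Stand-in for a real text encoder: deterministic prompt -> vector."""
    seed = sum(ord(c) for c in prompt)
    return np.random.default_rng(seed).normal(size=8)

def decompress(prompt: str) -> np.ndarray:
    """Return the codebook entry nearest to the prompt's embedding key."""
    dists = np.linalg.norm(codebook - embed(prompt), axis=1)
    return codebook[int(np.argmin(dists))]

out = decompress("a cat")
# `out` resembles the stored data but matches no original exactly:
reconstruction_error = float(np.min(np.linalg.norm(originals - out, axis=1)))
print(reconstruction_error > 0.0)  # True: the "decompression" is lossy
```

The nonzero reconstruction error is the "weird blur": the interesting question is when that blur happens to be useful new content rather than a degraded copy.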
I wonder why no one mentions GameGAN and GAN Theft Auto, which did basically the same thing (frame prediction conditioned on input and previous frames). That was like 4 years ago!
Yeah, the attention the paper has been getting feels like just clickbait. It's just a fairly normal RL system where they let you use the controls while visualizing the internal state representation and prediction instead of running the game. Which is kind of cool for interpretability and figuring out "why is my agent going insane". But definitely not an engine…
Every time someone says we're far away from solving a particular problem with today's AI models, they're proved wrong in a very short time. We'll revisit this next year.
It's not even very good at generating non-generic cats. All generative AIs suffer from an "average" bias as a result of their very way of operation. And what people call control is pretty laughable next to what you actually control when making art. But it's fun for generating memes and some throwaway art for presentations. Oh, and for having fun placeholder art in a game proof of concept. My programmer art is now pretty.
@diadetediotedio6918 I'm literally talking about people being wrong about AI after breakthroughs are announced in record time, yet you're asking me what if I'm wrong...? Wrong about what? Wrong about people being wrong? I'm not here to engage in circular reasoning. I think if you paid attention to AI development every day, then you'd be asking better questions.
I was about to ask about the Nvidia & Tencent research; there are other individuals who have tried to create this kind of neural game engine, so claiming "the first" is something that needs to be evaluated again.
I saw a rudimentary operating system where the programs, menus, and folders hadn't been programmed but ran on a neural network trained on several types of Linux kernels. When you moved the pointer and clicked the buttons, it responded just like coded software would. It blew my mind 😅
I think people are looking at this from the wrong angle. This won't be about making an entire game engine where you can generate games from a text prompt. This will be more about creating a game with graphics that would otherwise be unfeasible, with amazing lighting and whatnot, and then deploying it to machines that use AI hardware to play the game rather than a traditional GPU to render stuff. At least, that's how I see it. You make a game that runs on a supercomputer, then train an AI model on it. This is how you get movie-level graphics in games without having to brute-force physics and lighting calculations. Or you could have a traditional game engine running under the hood that handles logic and whatnot, but it's just data, and that data is used by the AI to create what you "see" with a pure AI rendering pipeline. Very exciting stuff.
Yeah, though the latter approach doesn't necessarily need this kind of model. Something like Runway's newer video-to-video model can already enhance more limited graphics, and I'm sure more can be done in that direction.
John Carmack asked on Twitter if anyone had actually got this running on a consumer GPU. Has anyone actually got this running at home? It should be theoretically possible, but so far, I've seen no evidence.
Having an engine that generates an entire game automatically would be impractical, to be honest. There needs to be human oversight in the process. While using AI to simulate physics, provide instructions for NPC behavior, or create area descriptions could be useful and make development easier, relying on AI to fully design a game is asking for trouble. People who think it's a great idea often overlook basic principles of human creativity and integrity.
Layman question: I know neural networks don’t *feel* deterministic, but aren’t they? Don’t they give the same output given the same input/seed? Genuinely curious if you use the term as a reference to the black box nature, or if I have more basics to learn lol
Using a neural net to simulate Doom IS deterministic, because it's reproducible given the same inputs/conditions. I think what you mean is that it's non-programmatic, i.e. not written in a programming language.
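The point in this exchange is easy to demonstrate on a toy network: once the weights are fixed (here by a seed), a forward pass is just arithmetic, so the same input always yields bit-identical output. This tiny two-layer net is purely illustrative and assumes nothing about the Doom model's actual architecture.

```python
import numpy as np

# Minimal sketch: a neural net is a deterministic function of its
# weights and inputs. Fix the seed (weights) and the input, and two
# forward passes agree exactly. (Toy 2-layer net, made up for this demo.)

rng = np.random.default_rng(1234)
W1, b1 = rng.normal(size=(16, 8)), rng.normal(size=16)
W2, b2 = rng.normal(size=(4, 16)), rng.normal(size=4)

def forward(x: np.ndarray) -> np.ndarray:
    h = np.maximum(0.0, W1 @ x + b1)   # ReLU hidden layer
    return W2 @ h + b2

x = np.linspace(-1, 1, 8)              # a fixed "game state" input
out_a, out_b = forward(x), forward(x)

print(np.array_equal(out_a, out_b))    # True: same input -> same output
```

The *feel* of non-determinism in generative models usually comes from deliberately injected randomness (sampled noise, temperature), not from the network itself.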
Wonder what would happen if you trained it on a photorealistic game. Maybe this could run on less powerful hardware, although it probably also needs very fast hardware to run.
Good LORD, I loathe algorithmic jank. :/ Especially when there are--in this specific case--tons of incredibly well-designed D1 total conversions, maps, episodes, upgrades, fixes, etc. made by HUMAN BEINGS.
Hi ByCloud, I’ve been following your content, and I truly believe you have the potential to reach incredible heights. I specialize in helping creators like you amplify their reach, and I’d love to discuss how I can contribute to your success.