Rather than using GPT-4 to micro-optimize these already hyper-optimized functions that get called hundreds of times per frame, have you tried getting it to optimize at a larger scope? E.g., providing suggestions for some bigger frame main loops, or how data in RAM is structured (just random ideas)? It will be a lot harder to form good prompts, but the potential improvements are also much bigger.
This is a good suggestion. 👍🏻 Because you'd be asking ChatGPT to do a diagnostic of the overall performance of the game and provide you with tips about the most important issues first. Then, after profiling, you could ask ChatGPT again, module by module, issue by issue: what's wrong, and how do we improve it?
yeah idk why he expects ChatGPT to pick up on these specific hardware quirks to optimize one function at a time. Lots of the stuff he does to optimize for N64 hardware makes things run slower on regular hardware because of the differences in Rambus speed and all that. Even in the last video about this he says something like, "well, this would be better, but not for the N64's hardware because of XYZ." Computers are good, but ChatGPT would have to have some absolute magic under the hood to figure that out on its own without just reading off of forum pages. It would be cool to see if ChatGPT can look at the entire data structure from a different angle tho, maybe there's something way outside of the box that he could feed it
`restrict` is a great feature, and it being so uncommon is the biggest reason why Rust code can sometimes be faster than C. Rust has extremely strict aliasing checks and uses them to insert `restrict` pretty much everywhere in the code, letting the compiler know that the memory locations won't change unexpectedly and saving memory loads. It can cause undefined behavior if the parameters alias anyway, though, so you have to be careful without an automated alias checker.
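For anyone who hasn't used it, here's a minimal C sketch of what `restrict` buys you (hypothetical example, not from the video):

```c
#include <stddef.h>

/* Without restrict, the compiler must assume dst and src could overlap,
   forcing it to reload src[0] after every store. With restrict, it may
   hoist the load out of the loop entirely. */
void fill_doubled(float *restrict dst, const float *restrict src, size_t n) {
    for (size_t i = 0; i < n; i++)
        dst[i] = src[0] * 2.0f;  /* src[0] can live in a register */
}
```

Calling this with overlapping pointers would be undefined behavior, which is exactly the footgun the Rust comparison is about.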
ChatGPT doesn't remember things beyond a certain limit. I had it helping me to come up with ideas for a story. It was really good with remembering bits about the story until it wasn't. I once also asked it to help with writing a report using other reports as a sample. I told it, "Do not write the report without me asking for it." "Okay cool!" After several prompts, it started writing the report without me asking. It still does take a lot of the bulk work out of things.
To stop it forgetting bits, you can edit a message to the bot with a new query and it will "erase" all the responses after that query and begin writing a new response. You could put all your requirements in at the start, then just keep editing your second message to the bot with new functions to optimize and it would not forget your requirements.
I wonder if an AI like this could help decompile and/or optimize other SM64 romhacks or other games. Would love to see fully optimized versions of Last Impact and SM64 Land running on hardware.
I'm a game designer and this is pretty close to how I've been using it too. I'll give it detailed messages about the intent and then work with the code it creates to refine and change it as I go. The message round limits and memory retention are a pain though. Wish there was a way to permanently give it goals and rules for the conversation. And the code cutting off is annoying. I always have to tell it to retry from the last line or function and continue onwards.
I haven't downloaded anything from this man since the Doki Doki Literature Club memes, but I love watching his videos in the hope that one day he'll perfect Super Mario 64 and add another multiplayer mod alongside it. And he's kind of an entertaining fella
As someone who doesn't code I didn't expect to watch nearly as much as I did. ChatGPT was trained to be as inoffensive as possible in its verbiage, so it takes many pages out of the customer service book. Using this wishy-washy soft language is incredibly frustrating for technical tasks, but the average end user NEEDS the stupid pandering and coddling or else they won't continue to engage with the chatbot. Hopefully in future iterations, when models can branch off into their own self contained systems, we won't have to deal with coding modules with the same speech patterns as WikiHow guides
Kaze, you are a psychic. I legit thought today that maybe I should message you and tell you to make a new video with ChatGPT, but using the latest version, since the last one didn't go so well. And then I see you uploaded this..
RE: the mfix mulmul thing is a real bug on a few early retail N64 hardware revisions. If you do 2 muls in a row, the 2nd mul can have an incorrect result if the former mul uses a 0, inf, or NaN. The nop is added as a fix.
i'm aware that it's a real bug - but the mfix flag adds way more NOPs than required, and it doesn't schedule instructions efficiently anymore. that's why reordering the instructions can help GCC here. usually GCC is smarter, but mfix kinda makes it dumb with multiplication
Pretty soon we will have an AI that will read your entire source code and always have ALL the relevant info. Then none of the problems Kaze was having will exist.
For a tool seemingly designed to replace interns I'd say it did very well. Interesting times ahead if email jobs start going the way of the Appalachian coal belt...
I don't use ChatGPT for programming, but when I was playing around with it I asked it about Rust. It does NOT understand Rust at all; it just confidently lied multiple times until I made it admit it was wrong. For one of the problems I asked it about, it gave me an empty function with a comment basically saying "solve the problem here...", multiple times, until it gave up lmao. Not sure if GPT-4 is better, but ChatGPT doesn't understand soundness of code at all, just roughly how it looks
This is a great video, not even considering the interesting subject matter, just as a video about AI. There should be so much more content like this; there's a lot of videos about AI news or about its implications, but not nearly enough actually showing the process of working with it on specific things, with commentary.
The fact this doesn’t have more views is weird, this AI can program for the fucking N64 (given the person using it knows how to as well). It’s clearly not great at optimizing, but it’s p good for making it more readable. This’ll definitely be useful for adding new shit to a rom hack. Edit: ok 22:06 wtffffff it’s a miracle that even works
you know you're a nerd when you watch a nerd nerd out over an AI while writing code... and you notice their typing speed and wonder whether, going flat out, they would beat you into the ground doing your best
wow, I was actually talking to GPT-4 about the N64 and how games are coded the other day. I was asking if a larger-capacity Expansion Pak (I think it theoretically has a potential max of 16MB) would allow for increased framerates and/or higher-resolution textures, and how simple adjusting code for games would be so they take advantage of that extra RAM (answer: it would not be simple, and each game's code would require its own personal attention... but GPT-4 could explain how the coding works in regards to RAM usage and rewrite accordingly). just sayin', weird coincidence!
ChatGPT does eventually "forget" the chat history even in an active conversation. So while I don't believe it can help much, re-prompting it with the first prompt every 8 lines or so may be helpful.
I haven't tried/paid for GPT-4, so I can't guarantee this still applies, but that's exactly what I would have suggested. You could even copy-paste part of the prompt for every optimization ("optimize the following function for minimum possible memory size on an N64")
I think ChatGPT with GPT-4 has something like an 8000-token context window, where one token is a few characters. Theoretically GPT-4 allows for a 32k context window, but I don't think that is available yet.
I know this comment is old, but as I understand it, GPT-4 has a token limit of 8192 compared to 3.5's 4096. On average one token is about 0.75 words, so GPT-4 can only parse about 6,000 words. The model doesn't actually remember anything; the ChatGPT interface re-sends the entire conversation history with your message. When the token limit is reached on the website, it asks ChatGPT to summarize and shorten the conversation history to make room.
Kaze bullying the AI into admitting it doesn't know how to optimize sometimes, and the AI bullying Kaze into having readable code, is fun to watch. Also, as a programmer with zero experience of this particular field, I kinda like your style of writing. It's like a puzzle sometimes.
@@clouds-rb9xt I guess it can be understood as such, since criticism is a way to improve and get better, which wouldn't take away from his achievements and abilities. You can be a very good programmer and implement hard systems easily, yet have messy code that is hard to read.
Does anyone else find the role reversal here hilarious? Like the human is complaining that the computer isn't being exactly specific with its language use and just kind of being hand-wavey. 😂
You should try giving it the original code and optimizing it using GPT-4 with a bit of input from your side, and see how close to your result you get. If it is even halfway as good as your optimizations, it would be awesome.
ChatGPT is so advanced and smart it picked up several misconceptions about how compilers work (e.g. "unroll everything", "ternary is faster", "I can remove repeated calls to this state-altering function because I'm a Haskell programmer")
@@NBDbingo5 It's because it makes your code more visually compact, so people might confuse it for being faster. As you go up to higher and higher level languages there's all sorts of little things that look beneficial but aren't. Always measure your performance changes. Of course, ternary isn't faster or slower, it's going to compile down to the exact same branch instructions that an if/else block would.
I don't know for certain, but I think it may make a difference with compiler optimizations off (like in a debug build): a ternary could actually be faster because in some cases it generates a single conditional-move instruction rather than two branches with different move instructions. But if you turn on any kind of optimization, which Kaze clearly does, the compiler will just realize they are the same thing and generate the same assembly.
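A quick C sketch of that point (illustrative function names, not from the video): with optimizations on, both of these typically compile to identical instructions, often a single conditional move.

```c
/* Same selection written two ways; at -O1 and above, GCC and Clang
   usually emit the exact same code for both functions. */
int pick_ternary(int cond, int a, int b) {
    return cond ? a : b;
}

int pick_branch(int cond, int a, int b) {
    if (cond)
        return a;
    return b;
}
```

Comparing the two with `gcc -O2 -S` is an easy way to see for yourself that the ternary buys nothing at runtime.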
@@SuperSmashDolls Ternary being expression-oriented instead of statement-oriented is a bigger deal than visual compactness IMO. I'm also a filthy Haskell & Rust programmer though.
@@chikato7106 Why are you being so ignorant? It is about all the info, and that is not 2+2=4. I mean, I get your point. But AI can come to a smarter conclusion if it says: "NOT MUCH to optimize".
dude you're asking chatgpt absurdly specific, near impossible questions lol. the fact that it can even answer in a reasonable way is already so impressive
This is a great video. I was unsure how improved GPT-4 was. I don't even know much about programming, but the difference is immediately noticeable. Can't wait to load this rom hack onto my Steam Deck.
My programmer friends tell me that when they use it, it gives some awful results, like it trained on tutorials for beginners taught by beginners. Then, when they try to train it to be otherwise, it still yields the same results. Perhaps that's why I see all kinds of articles saying ChatGPT isn't AI, because it doesn't know anything past 2020-2021 and can't learn past that.
ChatGPT is so weird to me with all the ethics junk around it. It seems great for things like this, where you obviously understand the code and are trying to optimize/debug it further, but I've overheard several of my college classmates just using it to write their coding assignments. Great video though, I really enjoyed it!
I'd say ChatGPT is essentially the next stage in high-level intermediaries when it comes to programming ethics. Universities are just going to have to adapt.
There's so much junk around AI discussion in general. The tools are obviously incredibly useful and you can incorporate them to optimize a lot of things, but the discussion around them has gotten so toxic that we have delusional "AI truthers" (basically the same people that fell for NFT scams) raving about random useless "brand new insane ChatGPT tools", and on the other hand we have "AI denialists": people that seem to want to enforce copyright to ban AI completely, but at the same time act like AI can't produce anything of remotely any value, when that's obviously untrue. It's really a headache.
Yeah, that is one thing I don't like about ChatGPT: it doesn't say that it doesn't know, because it is trained to talk like the people that do know. So it would rather make something up than say nothing at all; the product is talking to the machine, after all. Which is one of the reasons it just doesn't work as a replacement for a search engine or stuff like that. The main priority is not to give you accurate information; the main priority is to give you an answer that convincingly emulates human speech.
@@AmaroqStarwind Yeah, but when there is no information available, or very little, it will just make stuff up. That is not something you can simply fix by giving the AI more accurate information, because sometimes that accurate information doesn't exist. The AI would need to make a judgement call regarding what it knows... But it can't; it doesn't actually understand what it is talking about, it is simply emulating how people talk about it.
@@diegog1853 I mean, that's pretty much the internet. Take Stack Overflow, for example: it isn't bullshit-proof, but there's a community that often calls people out on their bullshit when the posts are sufficiently recognized, and that's only because Stack Overflow has very strict rules regarding ratings and answers. When it comes to articles or Discord conversations, people throw bullshit left and right without anyone batting an eye. I would say ChatGPT is almost human in that regard; it'd rather give a nonsense answer than admit it is wrong, which is very common in people, but since it is a machine, we just expect it to be perfect. If you keep this in mind, it isn't as baffling that it hallucinates some answers
I'm afraid even GPT-4 can't remember all of your history. I think you need to add your pre-conditions to all of your optimization commands. (it might know the history of the last 10 commands/responses, but I wouldn't trust it)
The fact that ChatGPT can do any of this at all is absurd to me. I can understand it figuring out how to write basic English sentences, but I never would have imagined it would know how to write code. It's obviously not perfect, but even the fact that it could optimize your code at all is fascinating. Once this tool becomes more advanced it'll become even more helpful, I imagine.
Don't know if it would actually improve code speed/size, but about the loop going to 6 instead of 3: you can actually do that. i / 3 gives 0, 0, 0, 1, 1, 1 and i % 3 gives 0, 1, 2, 0, 1, 2. I think it just did some of the array assignments wrong.
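A sketch of that indexing trick (hypothetical array, since we can't see exactly what GPT-4 was assigning): one flat loop of 6 iterations can walk a 2x3 array, though a full 3x3 walk would need 9 iterations, not 6.

```c
/* i/3 produces the row sequence 0,0,0,1,1,1 and i%3 the column
   sequence 0,1,2,0,1,2, so a single loop covers all 6 cells. */
void fill_2x3(int dst[2][3]) {
    for (int i = 0; i < 6; i++)
        dst[i / 3][i % 3] = i;
}
```

Whether this is faster than two nested loops depends on whether the compiler turns the division and modulus into cheap operations; on MIPS a divide is expensive, so it would need measuring.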
I love how Kaze is talking about GPT like it's a person. GPT makes a recommendation and Kaze calls it a nerd due to the obscurity of the recommendation lmao
Work with it long enough and that definitely starts to happen. GPT started hitting me with jokes and emojis when we were stuck on a function for a long time once. Pretty wild 😂
I think when it says "there isn't much room for improvement" it literally means that, within the constraints it's using, it can't do anything. It's silly, but it's kinda like saying "I can't fit any more water in this cup". It might seem annoying due to the repetition and post limit, but explaining why something it said didn't help *should* push it a step further toward what you want from it. AFAIK this version of ChatGPT actually remembers things it's told, especially when they're repeated a couple of times to overwrite old, wrong parameters. While I'm not that into AI/ChatGPT, these videos are really entertaining.
This video actually shows why using ChatGPT, even version 4, can be detrimental. Sure, it gave fewer errors, and yes, it might provide some optimizations, but you had to correct/ignore WAY more suggestions than you decided to use. The only way you knew to do that was because you're not only an expert at coding, you're an expert at this specific niche. If a junior or new programmer used this, not only would it likely stunt their growth, it would often slow down production, or the code would be way worse.
Growth is completely dependent on the user. I think a level of understanding can be reached by viewing the possibilities, but anyone that chooses to use this tool and take it as perfect fact isn't a problem with the tool itself.
@Droidsky it is not, it is dependent on how dependent the tool makes the user. This is the truth for all technology, it's just more detrimental in some cases. I'd argue it's pretty detrimental here.
@Coolidge well, I can't control other people. Its availability and ease of access are scary considering what it can do, but I don't think this technology shouldn't exist just because some people will be able to spend more time with their feet on their desk, or publish some half-assed project written 90% by a bot.
Idk. I'm a noob programmer and I found Chatgpt very helpful for helping me understand some of the basics - like a friendlier stack overflow almost. But for really expert niche content I can see how it's an issue.
At the end of a ChatGPT session, you should ask it if it can optimize the code, but show it the entire file. If there is no limit to how large your query can be, that should be interesting.
restrict gives the compiler information about pointer aliasing: it means that no other pointer will touch the memory pointed to by a restricted pointer, so you should already be able to tell when restrict will actually do something. Also, the strict aliasing rules in C let the compiler assume that two pointers of incompatible types (char* excepted) don't point to the same memory area, so restrict won't do anything either if you're passing multiple pointers of different types to a function (and all memory accesses are through those pointers). Also, if you're prompting it with multiple functions to optimize, then instead of appending, you should edit your prompts to ask it to optimize another function; otherwise you're polluting the 8K tokens of context you're allowed, and it will forget what you said earlier in the conversation (that you're targeting an N64, etc...)
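To illustrate the two cases (a sketch with made-up function names): `restrict` only adds information when the pointer types could legally alias; for incompatible types the compiler already assumes no aliasing.

```c
/* Same-type pointers may alias, so here restrict is what lets the
   compiler keep b[0] in a register across the stores into a. */
void add_same_type(int *restrict a, const int *restrict b, int n) {
    for (int i = 0; i < n; i++)
        a[i] += b[0];
}

/* float* and int* may not alias under C's strict aliasing rules, so
   the compiler can cache i0[0] here even without restrict. */
void add_diff_type(float *f, const int *i0, int n) {
    for (int k = 0; k < n; k++)
        f[k] += (float)i0[0];
}
```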
I use GPT-4 for professional financial analytics coding. The best trick I can give you is to have a set of contexts ready to copy/paste into your prompt. Always - always - remind it of the most important context for your prompt, even if that means a simple question ends up being hundreds of words. Also, you can tell it "only reply with code" to get back less useless explanation.
“There isn’t much room for” is English for “zero chance” not “there is some room, if you ask me about it” This is due to the indirectness baked into their language. “Politeness” they call it. “Inefficiency” I call it.
Hey Kaze. You're only looking at the size of the code... but wouldn't it make sense to also look at which instructions it uses? e.g. a modulus operation is clearly slower. So if you can avoid using it but have more code... that would be a good trade-off, right?
what really matters here is the RAM. each instruction takes 8 cycles on average to load from RAM, and loading it will hold up the RDP while rendering. running the whole game's code right now takes around 9ms in course 1, so we could easily render at 100fps from that perspective. but because the memory is shared, every time we load an instruction on the CPU, the RDP is hung up until that's done. this will cause rendering to slow down.
It has to be remembered that, at the end of the day, GPT is a generative LLM and therefore a token predictor: a glorified phone-keyboard word-prediction app. It doesn't "learn", but prompt engineering provides additional context for the model to predict against. It's useful for sure, but context and understanding its limitations are key.
huh, I never thought about it that way, though I don't know if I fully agree that having a tool to analyze and explain code means you can just start writing horribly complex, disorganized, deeply-nested code. You should be able to come back to your own code a year down the road and not say "...what the fuck?". Though to be honest, your code doesn't look half bad. The worst thing I saw was probably that do-while-if-if-if structure in geo_process_node_and_siblings, but even with that, although you nested 5 indentation levels, you still wrote it in such a way that it's not hard to read. if-else structures get hard to read when they are deeply nested and there are a lot of else-if clauses. Maybe your industry peers are less competent than I am (especially if you're calling them human-shaped monkeys on typewriters).
There's a max length to ChatGPT's responses, so that long function it didn't complete was probably just too long. You can ask it to continue where it left off, but it doesn't always work, it can kind of forget part of the context and give you the wrong result. Keeping more context in short term memory is extremely computationally expensive, so that's a limitation of current AI.