Two common questions I'd like to answer: 1) Why not just pass a vector pointer to the lateral distance function? Because load instructions on MIPS take a constant offset you can provide. By assuming how far away from the pointer the offset is, we can cover one special case in 1 cycle less and have all the other cases take the same number of cycles. Making a special case for the lateral distance function and then constantly repurposing it is an optimization! 2) Why don't you just use inline assembly? - I need my code to be in this exact spot for cache coherency, and inline assembly MIPS GCC style is too much of a pain. I wish there was a way to just write assembly?? But somehow there isn't.
> But somehow there isn't. Not sure if sarcasm, but you already built a custom GCC. Why not build GAS as well? It builds along with GCC if you configure the project with --with-gnu-as.
In 5 years, Kaze will have moved on from coding entirely and will be building this game transistor-by-transistor to improve on the inefficient architecture of N64 cartridges.
"how much did you improve the performance?" "by roughly 2 fps." "what did it cost you?" "i can no longer read the code, let alone make any changes." "was it worth it?" "you bet"
Gonna make the most incomprehensible code ever, so that not even the most advanced of AIs can accurately read it, to make the game 0.1% faster. worth it (dw it adds up to 0.7% with other stuff)
"what did it cost you?" "the resulting binary can no longer be disassembled because i use invalid opcodes, you have to poke a NOP at certain memory locations, disassemble it and then add the missing instructions back, by hand."
You realize it's completely worth it because it's a systems function that doesn't need to be read or changed. It's like that one fast inverse square root function. Literally who cares if it's impossible to follow?
Man, whatever glitches you miss when you release this are gonna be wild. You're playing with extremely volatile logic alchemy here, even more unsafe than Pokémon Gen 1.
i always find the most unreadable code is when they start utilizing undocumented bugs in hardware, because the code you're looking at, even if you 100% understood it, is not doing what you think it does. or when they start using undocumented instructions that do a weird combination of things. this is what makes emulators so hard to write.
Luckily the N64 doesn't have too much of this on the CPU front; the only bug is shown in this video. There are also no undocumented instructions. Where for example the NES would partially execute and effectively combine multiple instructions when encountering an unknown opcode, the VR4300 (the CPU used by the N64) throws a Reserved Instruction exception instead. Deterministic cases of undefined behaviour (e.g. divide by zero) could in theory be abused, but I'm not sure if that ever ends up being useful.
I work in a codebase that is at least 60 years old. Looking at older versions written for IBM mainframes, everything looks like this. My favorite was, instead of having a boolean array for if a region had a particular property, they instead stored the machine code for "no-op" and "jump to function" where relevant and executed the array directly.
@@uwirl4338 Java, server management. So my optimization is more in the form of SQL query optimization, data manipulation, multithreading, etc. However, I did dabble in asm and low level C in College. Also, I love watching videos that go into such detail for games, like Kaze Emanuar or Retro Game Mechanics Explained, or plenty of other speedrunning channels that talk about things like exploiting arbitrary code execution vulnerabilities!
@@uwirl4338 Well, most software engineers (at least those using Java and C#) work on some sort of "business software". And for this type of software, maintainability, extensibility and "getting it done on schedule" are way more important than performance. In the end it's usually a lot cheaper to throw more cores or more memory at a problem than to pay a guy 6+ weeks to optimize the code. And if it is ever necessary to optimize the code, it's usually sufficient to switch to some other algorithm or do "high level" optimizations, not actually chase a few clock cycles in each function. Oh and obviously: Java and C# aren't even compiled into machine code ahead of time, so all of these optimization schemes wouldn't work in the first place. Nevertheless it's damn impressive to see these techniques in action and I love hearing about them.
Honestly I think most of the functions here (except the last one??) would be fine if they were explained with a page-long docstring explaining the theory behind the behavior of each function. Novel-length docstrings are also a good signal for junior devs to stay away from that code.
For anyone thinking of implementing this generally, note Kaze is optimising for his specific purpose on his specific system and his specific compiler, and is testing everything in Compiler Explorer. Don't start blindly doing this stuff in your own code just "because Kaze said faster". If you're not writing an optimised N64 game then you need to do the profiling, research, and proof work all over again. Plus you actually need the 1 or 2 saved cycles to matter, which is unlikely on any x86 computer made in the last 30 years.
Plus you have to be super careful not to over-optimize for a specific implementation of x86_64, which then becomes horribly slow or even downright broken on others. These sorts of micro-optimizations are almost never worth implementing unless your target hardware is 100% fixed.
in some of the heavily optimized games from the mid-to-late 90s, i saw that they used to keep 2 versions in their source code: a cleaner version for porting to other systems and a dirty one that was platform specific. I remember some games had both a C and an ASM version in some cases.
i'm surprised they actually cared that much to optimize code. that's a neat way of handling it. luckily for sm64, most of the functions still have an equivalent of the base decomp if anyone wants to port this stuff. (i also usually leave a pure C version of the code above it for everyone to be able to figure out what the code is supposed to do)
@@KazeN64 Why do you "force" gcc to produce the machine code you want instead of just implementing the function in assembly? Is it the calling overhead?
@@uwirl4338 its just really annoying to do inline assembly in GCC. it throws so many errors. it's easier to get it right by just throwing small assembly snippets into the C imo
This reminds me of the story of Mel the "Real Programmer" from the jargon files. Utterly indecipherable code which nonetheless is so good it's practically magic.
Oh my god, this was amazing, the ending line "you're welcome" is pure comedy gold. I'm never gonna use any of this but I am sending this to as many people as I can.
@@KazeN64 Are they N64-specific or your-game-specific? Is your GCC fork public? I can imagine the libdragon guys being interested in this (unless they already implemented the same tricks in their toolchain themselves).
@@the_kovic easyaspi made it and it is publicly accessible! although she's only posted it in my discord. but i have the custom build linked in this repo's readme
@@KazeN64 Wait, since this is a custom fork of gcc: is there a way to patch in a sort of assembly passthrough in the compiler, so that you can write in both C and assembly without doing the weird inline assembly trick you mentioned? Probably overthinking things as usual, but who knows.
I've had this idea for a while but I'm not sure how to implement it. It would basically be like Godbolt Compiler Explorer but it also showed you clock cycles. Like if you highlighted a section of code, you know how it shows the total character count? Imagine that but it also assembled it and displayed the byte count and clock cycles.
As someone who's intimately familiar with the RISC-V instruction set, it's interesting to see how MIPS, the N64's instruction set, is so similar that most optimizations would still apply to a RISC-V system. A stand-out example for me was the global pointer optimization avoiding a two-instruction LUI+ADDIU sequence at 4:11.
Most of the time I treat math functions as black boxes anyway, so the fact that an evil wizard built them with black magic doesn't bother me. Unless something goes wrong and I have to open the box...
_Kaze's next video:_ "Now we're going to delve into the apocrypha of the forbidden Old Tongue spoken only by native R'lyehn because it'll save us a whole 3 cycles and regular code just won't cut it."
Same. I mainly deal with powershell and .NET and there are times that I don't like the way the interpreter handles something, so I end up calling a .NET class because it's faster, at the expense of code readability. But hey, if I need it to do this thing hundreds of times within a large dataset it is worth it. Always add comments in case you forget why.
Kaze: Uses "goto" to swap logic for instruction caching Me: YOU WHAT?! Kaze: Overwrites functions with memcpy for a properly optimized call path to avoid bugs per console Me: *I AM CALLING THE POLICE!*
i have honestly never seen the goto thing before. both are illegal but i'll permit it. I've overwritten code with raw machine code bytes at runtime before, e.g. an E8 call on x86: E8 64 00 00 00 calls the function 0x64 (100) bytes after the end of that instruction.
7:04 You can avoid this issue entirely by moving the decrement of the loop variable into the body of the loop so it executes *after* the loop condition and *before* the first loop iteration. This is also very useful for decrementing loops that use unsigned integers.
Back in the day running your for loops backwards was a huge time saver in JavaScript running under Internet Explorer 6. Because it was Internet Explorer. Just have to be careful the execution order of the items being looped through doesn't matter...
@@tolkienfan1972 I'd even go so far as to store my data backwards sometimes if it needed to be in a certain order. Makes me wish assemblers supported something like:

.backwards
db &05, &10, &15, &20
.endb
Personally, I see no problem sacrificing readability for performance, as long as (a) you document it, and (b) the performance gain is relevant for your use case.
as a gamedev of intermediate skill, i feel like this video is expanding my brain at such a dangerous speed that it will be permanently damaged. perfect video, i'll watch as many of these as you make
1:45 The lie you tell the compiler here is a neat trick, but I wonder if the cast can be avoided with an assume attribute instead, to just tell the compiler that the value is in a valid range?
@@tolkienfan1972 gcc's attribute assume is even smarter, since it doesn't semantically execute the code. That means you can tell it to assume things about the result of variable modifications.
Re: 4 this is so cursed and amazing Re: 7 separating the "hot" and "cold" branches of a function is a good one. There are a few variants of this that may or may not help (usual caveats: depends on arch and runtime patterns). Instead of using "goto" it might be better to call a separate non-inline function. You can put all the "cold" branches in a separate section, so that more of your "hot" paths are in cache. In cases when the "hot" path is very small and common, you could also inline the hot path. Re: 13 at some point it is clearer to just go full asm :)
Well, tbh the principles are more straightforward than that. When you write code, your very first objective is to make it produce the desired result or outcome. Second, you make it robust - you take into account edge cases and variations. Third, you optimize it. And you have to do it in that order specifically, since there is no sense in optimizing code that doesn't work. And when you optimize, you basically sacrifice maintainability - aka you don't expect to change that code much ever again; it's a dead end.
I'm pretty sure casting a function pointer to another one with a different return type would be undefined behavior if you didn't build for a single platform. It's funny in a sense, you don't have to ever worry about undefined behavior since you can always test to see what the n64 does.
that seems like something that would be undefined in terms of the outcome, but it would have a pretty clear definition as to what it should compile to. idk, the whole idea of "undefined behavior" kinda vanishes when you have 1 compiler and 1 architecture. everything's defined by how the compiler works in the end (assuming there is no random noise it pulls from)
I've been coding for over a decade, and i kinda feel the same lol. There's a lot i get, but when he gets into the bit shifting i just have to trust him.
Its really useful for hardware interrupts. On gameboy and genesis I point the vectors to uninitialized RAM and memcpy the desired interrupt for a particular level. There's no sense in wasting time checking a condition over and over when you can just check it once
6:52 - same with 68000 - it has a "DBRA" instruction - "Decrement and Branch", which decrements and branches if not -1. However this means you MUST write:

for ( i = count-1; i >= 0; i-- )

This will NOT produce DBRA (at least not with my version of GCC):

for ( i = count; i > 0; i-- )

...even if "i" is not used in the loop body! Instead it does a standard SUBQ+JNE which is slower. Same trick, but an annoyingly subtle MIPS-vs-68k difference.
That is so frustrating. When I'm doing 68k asm I usually would do something like this:

    MOVE.W #9-1, D0
foo:
    ; loop body
    DBRA D0, foo

for a loop that is intended to execute 9 times
8:15 I work as a graphics programmer for an engine at my job and this style of coding is incredibly common on the low level parts of the rendering pipeline. I always thought the person who wrote these was insane, but now I finally understand. *They really are insane*
At some point it's clearer to write the exact procedure you want in assembly than to try to trick a compiler into generating the exact assembly you want with cryptic hacks. You have reached that point. 😛
Or having the clean-code versions available somewhere to indicate what they're supposed to be equivalent to. I know Kaze has said there are usually equivalent function names in the base decomp project, but I'd love to see versions of those that are optimized, even if not fully, with readability still intact, if available.
Games for TI83 calculators can be pretty small if done well. I made a 5-level puzzle-platformer in around 1.5kb once. But the Bubble Bobble game I made was much bigger, somewhere near 22K.
I love your videos so much, optimization exactly like this is my favorite activity on the planet. Seeing a new video from you is like christmas coming early.
It's scary to think, but it seems like that self modifying code trick could actually make a big impact in some real use cases. I mean you can potentially save millions of if-checks with it.
I have done something similar in a Javascript game I'm working on. I had no idea if it was even faster or anything, I just thought it was cool that I could do it. I used it for remapping keyboard inputs to different functions depending on if you're in a menu, a map, a battle, etc.
And so now we know why the N64 was the "less performant" console in spite of the lack of disc read times. To squeeze all the potential it had out of it, you had to be a fluent practitioner of the dark arts, on top of breaking several coding Geneva convention guidelines.
The compare with zero trick also works on the NES, SNES, and all other 6502-based systems. Whenever you do an arithmetic operation, it sets the equal and carry flags with an implicit comparison to 0.
This is the most interesting video series I've ever seen. I want more exactly like this for other games or pieces of software we're not meant to have the source code for.
I've heard many people talk about decrementing loops being faster over the years, and many people also claim it's false, but this is the first time I've heard anyone so clearly explain that it's about the comparison against zero. It all makes sense now.
Hmm, pretty interesting optimization techniques. most of the time idk what's actually happening, but your explanation is very understandable. great video Kaze!
Kaze, I am a huge fan of optimization and "form fits function". Even though I don't understand most of what you are doing here - I love watching this content from you. Thank you and keep it up!
An excellent video! I laughed so hard at some of these, partially bc I've had to do the same with microcontroller compilers. When you've got a dinky little 12 MIPS mcu, counting cycles can be important, and improperly handled casts and such by the compiler can eat into that.
Kaze wants every line of his code to be tagged with "WTF" commentary by a 3rd party looking at the code. The same way Fast Inverse Square Root was commented on in the Quake 3 code (not Doom).
This really feels like we're delving into the dark arts at this point, learning things mortal man was not meant to know & developing dark powers out of reach of the gods. All to make Mario run a little bit better!
... What I'm getting from this is that when you will be sent back in time to make the fully optimized Super Mario 64, they will either kick you out for writing this code or praise you for your genius.
They didn't have the same compiler. So some of his hacks wouldn't be usable. Back in the day, if you wanted power, you wrote assembly functions and used C as "super assembly".
The comment about your first ever programming undertaking reminded me of an elementary school friend who spent their time, at ten years of age, hacking mario world by simply editing the rom directly with a hex editor. Great video!
Had a coworker that wrote a small script in python cause it is "so easy and fast to write". When he used it he noticed it was too slow to be used at all (simple test cases with input sizes in the range of maybe 50 elements would work; we needed thousands). Then he spent weeks trying to make his python faster. He ended up rewriting it in C++ and was done in 2 weeks (and the program was a couple thousand times faster).
I did some advent of code challenges in PHP this year & BOY IS IT SLOW. One of my solutions was a brute force of about 4 billion operations, which i broke into 10 processes & then each process took about 3-5 minutes. Then someone in YT comments said they brute forced in Rust & got it done in a minute or two. To make it better, my brute force didn't even get me the right answer! Lol. Also by "operation" i mean ... a loop, basically. Each loop had quite a few things to do.
There was another advent of code challenge i did in PHP that was computationally intense (+ LOTS of memory access) where i was able to optimize the heck outta my php though & actually make it work. That was my first encounter with memory bottleneck being a real problem.
@@reed6514 Well, PHP is (up till very recently) an interpreted scripting language. It will take some time to run anything compute-intensive. It is a wonder how powerful a website-modifying script has become.
since readability is going out of the window, i think kaze should keep a copy of the engine's code from before these dramatic changes, because when he releases the source code of RTYI it will be so hard to read and so tightly coupled to its function that no other hacker could make a new hack from that source code.
@@Minty_Meeo it is just having a copy of the previous code. With all the previous performance improvements it should be enough for the majority of hackers. There is no need for further improvements for 99% of the rest of mario 64 devs.
Generally when writing a lot of complex stuff like this where it's not obvious what it does, it's not so much that you can't do it, but they really want you to write code comments along the way to explain to other people why it's done that way and warn them what kind of scenarios could break it. So the more inline assembly you use, the more code comments you need to go along with it... that is to say, the less human-readable the code itself is, the more you have to compensate by having human-readable translations alongside of it. For that code, you'd end up writing a novel explaining it to any programmer that comes after you... and most people would rather just make the code itself more readable than spend half a day explaining their heavily-optimized code to a hypothetical newcomer in the code comments. But when it's absolutely necessary, you will see tricks like this used along with extensive documentation (and sometimes cursing and F-bombs about how pissed they are they had to write the code this way and how much they hate the compiler depending on how professional they are).
Wouldn't writing some functions entirely in assembly be more readable than having to inline some asm instructions here and there to force the compiler into producing the asm that you want? At least for short functions.
GCC is also very enthusiastic about exploiting undefined behavior, and will usually use it to do things you don't want. You have to deliberately disable certain optimizations in order to get away with faster code, which will just make code elsewhere slower.
@@undefinednan7096 you don't get the benefit of inlining in that case. I would definitely recommend more inline asm, but I'm working in 68k which is probably a bit easier to write than mips. You do get used to gcc's inline asm syntax eventually; it's all over my engine now.
@@Ehal256 good point, and I don't think you could use LTO to easily fix that. I'm not so sure that 68k assembler is actually easier to write than MIPS asm (a lot of places used to use MIPS as the intro to asm)
First, great video, but I'd like to point out a few things for people who might not know them. A key thing that cannot be emphasized enough is that you need to measure whether these sorts of optimizations actually speed up your code. For example, the self-modifying code trick will often make your code slower, since to make it work reliably you'll need to invalidate the icache line (and flush the dcache line if you used cached access), and the possible extra reads from RAM will often cost you more than you gain.

Also, I think you know this, but for other people's benefit: you want to modify code while you're executing far away from it (MIPS CPUs won't execute modified instructions if the instruction is already in the pipeline -- this isn't so bad for the VR4300, which only has a 5-stage pipeline and is in-order scalar) and in bulk (so you can minimize the number of cache flushes needed). The more modern the CPU, the worse self-modifying code is in general.

Also, if you need to do a ton of BS to get the compiler to output a specific sequence of instructions, perhaps you should just write the function or performance-critical piece of code in assembly. In the example of atan2s, it already has large portions of assembly, so making the code less fragile is probably worth it.

For the struct Surface, 0x16 isn't 4-byte aligned, so the compiler should automatically insert the padding, so the two structs should have exactly the same layout, which agrees with the offsets in the code you show onscreen (although it could be more clear to explicitly show the padding).
yeah i'd love to just write it in assembly, but i find it to be a huge pain to write inline assembly under GCC. it never seems to work right beyond small snippets. yeah i added the padding on the left side manually just to showcase this better. (although there are probably some compilers that would realign the struct on the right to make it smaller? or at least, there should be some settings that do that automatically for you)
@@KazeN64 I'm pretty sure that "setting" is just the packed attribute ( __attribute__((packed)) ) which will place the struct members such that they occupy the least amount of space (byte-wise) even if it breaks type alignment rules (like ints needing to be 32-bit aligned)
You just exploded the heads of many clean-coders 🤯😜 In 8-bit assembly there were no caches, but squeezing every cycle from the hardware was the norm 😂 Nice tricks by the way 💪
Some really neat tricks in this video, but dude, honestly at a certain point it might be worth just hacking gcc to add a pass that does some of these transformations for you since they're so mechanical 😂. Or even just write a preprocessor that takes sane C and spits out optimized C (you could do this with python's pycparser for instance).