Low Level Game Dev
Hipity Hopity your Subscribe is my property!
Switch IS NOT FASTER than if, (in C++)
11:39
14 days ago
I added Crafting to my Minecraft Clone!
8:02
21 days ago
Why I write C++ like it is C?
9:28
1 month ago
What C++ Project I would do as a beginner!
4:05
2 months ago
Adding Physics to my C++ Minecraft Clone!
7:05
2 months ago
How I use CMake with Visual Studio
9:58
3 months ago
There are Random Numbers in Computers!
9:46
4 months ago
How to start Gamedev in C++
5:58
4 months ago
I Remade Minecraft But It is Optimized!
9:39
4 months ago
Beginner Multi-player C++ Game Code Review
1:05:45
4 months ago
C++ tutorials be like
3:16
4 months ago
making Multi-Player Minecraft in C++ is HARD!
10:23
5 months ago
I made Mario Kart in C++!
5:14
6 months ago
How Much Money Did my Steam Game Make?
5:50
6 months ago
Start making C++ Multi-Player Games! Tutorial
15:11
6 months ago
Comments
@thomas-sk3cr 11 minutes ago
This is correct. Unfortunately, beginners are not able to distinguish between optimization and non-pessimisation. So I would recommend that they don't attempt either of them. If you are still learning C/C++, just write code that works and that you can understand.
@narzaru 1 hour ago
The simpler and clearer you write code for people, the easier it is (usually, but not always) for the compiler to optimize that code. Before any CPU or memory optimization, run a benchmark, evaluate how much that code actually contributes to the overall cost, and only then proceed with optimizations.
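For illustration, a minimal sketch of that measure-first approach using std::chrono; the workload function is a made-up placeholder, not code from the video:

    #include <chrono>
    #include <cstdio>

    // Stand-in for the code you suspect is slow.
    static long long workload() {
        long long sum = 0;
        for (int i = 0; i < 10'000'000; ++i) sum += i;
        return sum;
    }

    int main() {
        auto start = std::chrono::steady_clock::now();
        long long result = workload();
        auto end = std::chrono::steady_clock::now();
        double ms = std::chrono::duration<double, std::milli>(end - start).count();
        // Print the result so the compiler cannot optimize the workload away.
        std::printf("result=%lld, took %.3f ms\n", result, ms);
    }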
@SuperLlama88888 2 hours ago
Tiny, tiny optimisations are a happy coder problem
@moestietabarnak 3 hours ago
So: without optimization, switch is faster; with optimization they are equal --> use switch, and it works fine even when not optimized. Also, when I see a switch and I'm looking for a specific case, I go right to that case label to read the instructions. If I see a bunch of IFs, I have to read ALL the IFs to make sure they don't do something else than the switch would.
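For readers comparing the two forms, a minimal sketch (illustrative enum and values, not the video's code); with optimizations enabled, mainstream compilers typically compile a dense switch to a jump table and often emit the same code for an equivalent if-chain:

    enum class Tool { Pickaxe, Axe, Shovel, Sword };

    // if-chain: you must read every branch to know nothing else happens.
    int durabilityIf(Tool t) {
        if (t == Tool::Pickaxe) return 250;
        else if (t == Tool::Axe) return 200;
        else if (t == Tool::Shovel) return 150;
        else return 100;
    }

    // switch: each case is labeled; dense cases can become a jump table.
    int durabilitySwitch(Tool t) {
        switch (t) {
            case Tool::Pickaxe: return 250;
            case Tool::Axe:     return 200;
            case Tool::Shovel:  return 150;
            default:            return 100;
        }
    }

    int main() {
        return durabilityIf(Tool::Axe) == durabilitySwitch(Tool::Axe) ? 0 : 1;
    }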
@JosephBlade. 3 hours ago
I think if is better because you can generally guess the most-often-satisfied condition, place it first, and put the other stuff in else 😂 I didn't test this, but I suppose it would work.
@PopescuAlexandruCristian 3 hours ago
FPS as a perf measurement.... Please please don't.
@JH-pe3ro 3 hours ago
If you are a beginner scanning these comments, and really want to learn optimization as a general principle and "when" it intuitively makes sense, what you should do is seek out a very constrained platform, so that it becomes an ongoing concern you engage in so often that you will be relieved not to have to chase after it. This naturally leads towards looking at retro systems, but that can be intimidating and alien, since you don't get to use the same toolsets, and development will feel slow and filled with non-transferrable platform knowledge. However, I believe that's also part of the exercise: optimization is a thing done for a specific machine and scenario, and when you leave that context you are now speculating on what is fast instead of experiencing it.

Optimizing on modern systems is complex in part because we've filled our systems with speculative elements that try to automatically do the right thing, from the CPU power regulation upwards. That makes it confusing and unintuitive. On a small machine running a small program, "use fewer cycles and fewer bytes" basically works, but as a codebase becomes larger, more and more of the total execution time is spent in a tiny proportion of "hot" code, so the useful places to optimize become more concentrated: the majority of the work is in just avoiding making things slow, versus actively making things fast. This won't feel satisfying if you are anxious about speed, and that's why you should seek out a contrasting experience where it's a bigger priority. It doesn't have to be a complete game project; just a graphics demo will do.
@LeReubzRic 5 hours ago
erm aCKsHuALly 30 x 2.81 = 84.3 not 90 🤓☝️
@maxis_scott_engie_maximov_jr 6 hours ago
You're wrong. Optimization isn't the root of all evil; _anything_ is the root of all evil. _Cries in rewriting the engine for the 3rd time in a row because it's a mess every time I look at it._

Also, a funny thing about optimization: I had a function that draws a 20x20 box in CMD, and it took just over 1 ms to render. Simple: two loops to iterate over the vertical and horizontal boundaries and draw them. Then I made a 360° line-drawing algorithm and replaced the function with it, expecting it to be slower. The time decreased to 0.5 ms for the same box.
@lowlevelgamedev9330 5 hours ago
Doing abstractions over your code is also a form of optimization, but you are optimizing your code's reuse instead of its speed. And it can lead to having to rewrite your stuff. Also, for the CMD stuff: the cmd is painfully slow, unfortunately. 1 ms is an eternity, but cmd is very slow, so idk what happened there. Maybe you added an endline instead of ' ' - std::endl flushes the stream, and that can indeed make it slower. idk, but it can't be because of the loop iteration.
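A minimal sketch of that flushing difference (illustrative code, assuming console output through std::cout); std::endl writes '\n' and then forces a flush, which inside a tight drawing loop can dominate the cost:

    #include <iostream>

    void drawRowSlow(int width) {
        for (int x = 0; x < width; ++x) std::cout << '#';
        std::cout << std::endl;  // '\n' plus an explicit flush on every row
    }

    void drawRowFast(int width) {
        for (int x = 0; x < width; ++x) std::cout << '#';
        std::cout << '\n';       // buffered; the stream flushes when it needs to
    }

    int main() {
        for (int y = 0; y < 20; ++y) drawRowSlow(20);
        for (int y = 0; y < 20; ++y) drawRowFast(20);
    }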
@nepnepnep3381 7 hours ago
But what about optimization by just using STL algorithms everywhere you can? Personally, when I write code and remember that there's a perfect STL function that solves my problem, I'd rather use it, even though it may seem hideous and not simple at all - I believe the people behind those implementations know their stuff. Also, while writing a medium/large-scale project, I found myself not designing it because I thought it didn't matter as long as things worked, which in the end led me to redesigning it and trying to tie everything together because I was just sick of it. I believe that having this "premature optimisation" mindset is beneficial in making good design choices at the beginning, which will make the project just write itself later. I know that I'm missing the point of this video with this one, but I've been on both sides of the mindset, and it's not so clear when it's okay to be pessimistic and "prematurely" optimize and when it's not.
@freziyt223 8 hours ago
For example, Doom was revolutionary among 3D FPS games; when it came out, PCs didn't have even 1/10 of today's PC power, and they didn't even do much optimization, but the game is still in the hall of fame.
@mintx1720 8 hours ago
This is the most convoluted Rust W I've ever heard.
@RenderingUser 8 hours ago
Not really an option in rust
@wolpumba4099 9 hours ago
All OpenGL Effects: A Comprehensive Overview of Graphical Techniques

* 1:10 Wave Simulations: Simulating waves by shifting the positions of triangles in a water plane. An easier method using DUDV textures is discussed later.
* 1:42 World Curvature: Applying a simple formula after the view matrix and before the projection matrix to create a world curvature effect.
* 1:53 Skeletal Animations: Briefly explains skeletal animation using bones, vertices, keyframes, and interpolation. Suggests further learning resources.
* 2:23 Decals: Discusses decals as details applied on top of geometry and mentions optimizations used in game engines like Doom Eternal.
* 2:43 Volumetric Rendering (Clouds): Explains volumetric rendering for clouds using a 3D matrix and ray casting.
* 3:05 Geometry Culling (Frustum Culling): Introduces frustum culling as an optimization technique to discard non-visible triangles, using bounding boxes and algorithms like octrees.
* 3:53 Level of Detail (LOD): Using multiple versions of a model with different triangle counts depending on distance for optimization, mentioning Unreal Engine's Nanite system.
* 4:16 Tessellation Shaders: Increasing a model's triangle count using tessellation shaders, beneficial for procedurally generated geometry and displacement mapping.
* 4:39 Geometry Shaders: Briefly explains geometry shaders and their capabilities, but advises against using them due to performance concerns.
* 5:18 Geometry Buffer: Using a massive buffer to store all geometry and compute shaders to calculate what needs to be drawn, as seen in Doom Eternal.
* 6:46 Normal Mapping: Adding normal information to a texture to create detailed surfaces even with flat geometry.
* 7:13 Light Maps: Using textures to specify specular components of materials.
* 7:25 Lens Flare: Implementing lens flare using 2D textures; mentions advanced techniques for more realistic simulations.
* 7:51 Sky Box (Atmospheric Scattering): Generating dynamic skyboxes using atmospheric refraction formulas.
* 8:02 Fog: Blending the rendered scene with the skybox based on fragment distance to create fog.
* 8:11 Chromatic Aberration: Shifting color channels slightly to simulate chromatic aberration.
* 8:30 Physically Based Rendering (PBR): Briefly introduces PBR and the rendering equation.
* 8:58 Image-Based Lighting (IBL): Using image-based lighting with the skybox for ambient light, noting its limitations in indoor scenes.
* 9:22 Multiple Scattering Microfacet Model for IBL: An easy addition to a PBR pipeline for better ambient light calculation, accounting for secondary light bounces.
* 9:47 Global Illumination: Discusses the importance of global illumination and its challenges in rasterization engines.
* 10:12 Spherical Harmonics: Explains spherical harmonics as a method to approximate global light information.
* 10:36 Light Probes: Using light probes with spherical harmonics or IBL to capture light information in different parts of the scene.
* 10:52 Screen Space Global Illumination (SSGI): Introduces screen space global illumination as an approximation technique.
* 11:07 Ray Tracing: Briefly mentions ray tracing for achieving high-quality global illumination.
* 11:28 Subsurface Scattering: Explains subsurface scattering, using examples of wax and skin rendering.
* 11:51 Volumetric Rendering (God Rays): Rendering god rays by simulating light hitting dust particles, mentioning easier screen space techniques.
* 12:06 Parallax Mapping: Creating a 3D effect by sampling textures from slightly different coordinates, used in God of War's dynamic snow.
* 12:32 Reflections: Using IBL, omni-directional maps, and screen space reflections for different levels of reflection quality and complexity. [From Comments] Mentions bending normals for better specular occlusion.
* 13:15 Refraction: Simulating light bending as it passes through different mediums, with examples of single objects, water, and DUDV textures.
* 13:50 Diffraction: Briefly mentions diffraction and its application in rendering shiny surfaces like CDs.
* 14:06 Screen Space Ambient Occlusion (SSAO): Explains SSAO as an approximation of global illumination, along with variations like HBAO and SSDO. [From Comments] Mentions GTAO as an alternative.
* 15:12 Bloom: Simulating bloom by isolating bright light spots, blurring, and adding them on top of the original image.
* 15:50 High Dynamic Range (HDR): Encoding a wider range of light intensities using a function that compresses the values.
* 16:50 HDR with Auto Exposure: Adjusting exposure based on average luminosity for a more realistic and cinematic look.
* 17:07 ACES Tonemapping HDR: Recommends using the ACES tonemapping function for better HDR results. [From Comments] Suggests AgX Tonemapping as a better alternative.
* 17:29 Depth of Field (Bokeh): Simulating the out-of-focus effect, mentioning Bokeh for creating specific light effects.
* 17:49 Color Grading: Using lookup tables to adjust the final colors of a scene.
* 18:33 Shadows (Basic, PCF, Optimizations): Covers basic shadow mapping, percentage-closer filtering (PCF) for soft shadows, and various optimizations. [From Comments] Mentions exponent shadow mapping (ESM) and EVSM for better shadow quality.
* 20:11 Variance Shadow Mapping (VSM): Using statistical methods for soft shadows, mentioning light bleeding as a potential artifact.
* 20:29 Rectilinear Texture Wrapping for Adaptive Shadow Mapping: Morphing the shadow texture to improve precision in relevant areas.
* 20:53 Cascaded Shadow Mapping / Parallel Split Shadow Maps: Splitting the view into multiple regions with separate shadow maps for better shadow quality in large scenes.
* 21:34 Transparency: Discusses different techniques for transparency, including alpha discarding, sorting, and order-independent transparency methods like depth peeling and weighted blending.
* 22:26 Order Independent Transparency: Mentions various complex techniques for order-independent transparency.
* 23:33 Rendering Many Textures (Mega Texture & Bindless Textures): Discusses challenges in rendering many textures and solutions like mega textures and bindless textures.
* 24:31 Anti-Aliasing (SSAA, MSAA, FXAA, TAA): Covers various anti-aliasing techniques, including super-sampling, multi-sampling, fast approximate anti-aliasing, and temporal anti-aliasing. [From Comments] Mentions that MSAA can be used with deferred rendering, contrary to popular belief.
* 26:00 DLSS: Introduces Deep Learning Super Sampling (DLSS) as an AI-powered upscaling and optimization technique.
* 26:35 Adaptive Resolution: Dynamically adjusting resolution for performance optimization.
* 27:05 Lens Dirt: Applying a texture on top of the screen to simulate lens dirt.
* 27:27 Motion Blur: Using a velocity buffer to create motion blur.
* 27:41 Post-Process Warp: Using shaders to create post-processing effects like screen warping.
* 28:08 Deferred Rendering: Explains the concept of deferred rendering and its advantages and disadvantages. [From Comments] Mentions that motion vectors are relatively easy to implement.
* 29:29 Tiled/Clustered Deferred Shading: Introduces variations of deferred rendering for handling many light sources.
* 29:42 Z Pre-Pass: Optimizing forward rendering by using a depth pre-pass to avoid unnecessary fragment calculations.
* 30:01 Forward+ (Clustered Forward Shading): Combining forward rendering with clustered light grouping for optimization.

I used gemini-1.5-pro-exp-0801 on rocketrecap.com to summarize the transcript. Cost (if I didn't use the free tier): $0.12. Input tokens: 27480. Output tokens: 1815. The summary incorporates clarifying and supplementary information from the comments. It provides a valuable resource for anyone looking to learn about the various graphical techniques and effects used in OpenGL and other graphics APIs. Remember to check out the linked resources mentioned in the video and comments for more in-depth explanations and tutorials.
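To make one of the items above concrete, a minimal frustum-culling sketch (all types and names here are illustrative assumptions, not code from the video); a box is culled only when it lies fully outside at least one of the six frustum planes:

    #include <array>

    struct Plane { float nx, ny, nz, d; };  // normal points into the frustum
    struct AABB  { float minX, minY, minZ, maxX, maxY, maxZ; };

    bool isVisible(const std::array<Plane, 6>& frustum, const AABB& box) {
        for (const Plane& p : frustum) {
            // Pick the box corner farthest along the plane normal.
            float x = p.nx >= 0 ? box.maxX : box.minX;
            float y = p.ny >= 0 ? box.maxY : box.minY;
            float z = p.nz >= 0 ? box.maxZ : box.minZ;
            if (p.nx * x + p.ny * y + p.nz * z + p.d < 0)
                return false;  // even the farthest corner is outside this plane
        }
        return true;  // not provably outside: draw it
    }

    int main() {
        std::array<Plane, 6> frustum{};  // a real engine extracts these from the view-projection matrix
        AABB box{-1, -1, -1, 1, 1, 1};
        return isVisible(frustum, box) ? 0 : 1;
    }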
@egedq 10 hours ago
Software gets worse every year. 90% of software written never gets optimized, and you decide optimization is the problem you want to warn beginners about? Fuck you.
@disabledmallis 10 hours ago
Shrimply dont
@DotNetDemon83 11 hours ago
Also: quit trying to do the compiler’s job
@lowlevelgamedev9330 9 hours ago
There are still people whose job is to do exactly that, tho. The compiler isn't perfect.
@Oaisus 12 hours ago
Insulating newbies from the idea that optimization is important will let them form bad habits that will be hard to break.
@davidcauchi 12 hours ago
2:40 Something is up with the ms value here. 240 FPS should be around 4.17 ms, correct? Looks like it's in seconds, not ms.
@davidcauchi 12 hours ago
I say that because ms is a much better metric imo. But the bigger reason is that I fixed the same issue in my own code yesterday haha.
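A minimal sketch of measuring frame time directly in milliseconds (illustrative code, not from the video); at 240 FPS a frame takes 1000/240 ≈ 4.17 ms, so a readout near 0.004 suggests seconds being printed as ms:

    #include <chrono>
    #include <cstdio>

    int main() {
        using clock = std::chrono::steady_clock;
        auto last = clock::now();
        for (int frame = 0; frame < 5; ++frame) {
            // ... update + render would go here ...
            auto now = clock::now();
            double ms = std::chrono::duration<double, std::milli>(now - last).count();
            last = now;
            std::printf("frame time: %.3f ms (%.1f FPS)\n", ms, ms > 0 ? 1000.0 / ms : 0.0);
        }
    }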
@JumpercraftYT 13 hours ago
This video was awesome!!! Also, I'm doing texture arrays. When I test them on one computer it works, but on the other the textures don't show up.
@mementomori7160 13 hours ago
THIS! Thank you. I always begin to optimize my code before it's even done and working; it happens against my will, and I have to stop myself every time.
@noxagonal 12 hours ago
Me too. It's fun at first, but then you've spent your time doing that and you still don't have anything that works, and that's not fun.
@Jarikraider 13 hours ago
I pretty much only use switch statements when I'm feeling all fancy.
@BudgiePanic 13 hours ago
This is why planning is key. If the plan calls for only 3 items, there’s no point making some generic optimised inventory system that can support a million items and run in O(1) time
@monoastro 14 hours ago
based Allman style programmer
@kono152 14 hours ago
When playing Final Fantasy 7, I found myself thinking "hm, I wonder if this game has any super crazy low-level optimizations", but the fact of the matter is that there's most likely none of that, because compilers are just so good at optimizing for you - after all, it's the main reason we use compiled languages, afaik.
@FodaseGoogreorio-h7v 15 hours ago
Optimization is not for computers, it is for programmers.
@Hazanko83 15 hours ago
The first example you show isn't an optimization that "ended up being slower"... It's a bug that surprisingly didn't crash the program lol. It's slower because it's checking past the end of the array into what is now other data, but still treating it as the previous type....
@Jeremyak 15 hours ago
I'm optimizing RIGHT NOW and there's nothing you can do about it! 😏
@Shelpface 16 hours ago
When I started making a game in FNA, I thought I wouldn't have to deal with optimization for a long while, since the game is simple. So I was very surprised to see 15 fps. It's a 2D top-down game with procedural generation, and the whole world is generated at once. I implemented the generation and everything was fine, but as soon as I made the world a bit bigger, the fps dropped. I realized the game was drawing even the tiles that aren't visible on screen, so I decided to draw only the ones that should be visible. I did it, and it worked. However... when I was away from home and wanted to work on the game on my laptop, which is much weaker than my main PC, I noticed that even though the fps goes up when I draw only the needed tiles, the bigger the world, the less smoothly the game runs. For a while I couldn't figure out what was wrong. Then it dawned on me that the culprit was the loop iterating over absolutely every tile to check whether it's close to the camera. I started thinking about what to do, and decided to split the world into chunks. (Funnily enough, I didn't change the generation; I just started splitting the already generated world into chunks.) At first I planned to reduce the number of loop iterations: a 1000x1000 world with 10x10 chunks gets iterated far fewer times if you iterate over chunks. But then I realized you can get rid of the extra iterations entirely if you derive from the camera position which chunk it's in. So I started drawing only the chunk containing the camera and the chunks adjacent to it. That raised the fps in a 10,000x10,000 world from 2 to 2000. Haha, I know the video is about something a bit different, since my problems were caused by sheer inexperience, but I still wanted to share my optimization story 😃 Upd: after this story I understood why Minecraft uses a tick system, lol
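A minimal sketch of the final approach in that story (illustrative names and sizes, assuming square chunks and a tile-based camera position):

    #include <algorithm>

    constexpr int CHUNK_SIZE   = 10;   // tiles per chunk side
    constexpr int WORLD_CHUNKS = 100;  // chunks per world side (a 1000x1000-tile world)

    static void drawChunk(int cx, int cy) { (void)cx; (void)cy; /* render this chunk's tiles */ }

    // Map the camera's tile position to the chunk containing it,
    // then draw only that chunk and its neighbours - no per-tile world scan.
    void drawVisibleChunks(int camTileX, int camTileY, int radius) {
        int ccx = camTileX / CHUNK_SIZE;
        int ccy = camTileY / CHUNK_SIZE;
        int x0 = std::max(0, ccx - radius), x1 = std::min(WORLD_CHUNKS - 1, ccx + radius);
        int y0 = std::max(0, ccy - radius), y1 = std::min(WORLD_CHUNKS - 1, ccy + radius);
        for (int cy = y0; cy <= y1; ++cy)
            for (int cx = x0; cx <= x1; ++cx)
                drawChunk(cx, cy);
    }

    int main() { drawVisibleChunks(507, 493, 1); }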
@ABaumstumpf 16 hours ago
On the point of smart pointers you are partially wrong - and in parts completely wrong. If you only need a singular instance right there in a specific function, then a smart pointer most certainly is the wrong choice. Buuuut if you actually have an entity that you need to pass around - and, like for the player, you want a unique handle for it - then a unique_ptr is absolutely the right way to go. The only thing that is "slow" about it is the initial creation; access is not slower. And shared_ptr is not for sharing data but for sharing OWNERSHIP. If you are using it to share data, then you are fundamentally misusing it. Your uses of "new" are also not needed: the chunks should be constructible in place so you can use "recievedChunks.emplace_back()", the file data is a perfect candidate for std::array, std::vector or std::valarray, and the same goes for the "char *reszult".
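A minimal sketch of those distinctions (illustrative types; "receivedChunks" here only echoes the comment's naming, it is not the video's code):

    #include <memory>
    #include <utility>
    #include <vector>

    struct Player { int x = 0, y = 0; };
    struct Chunk  { int cx, cy; Chunk(int x, int y) : cx(x), cy(y) {} };

    int main() {
        // A unique owning handle for an entity you pass around:
        // only the creation allocates; access through it is not slower.
        auto player = std::make_unique<Player>();
        std::unique_ptr<Player> owner = std::move(player);  // ownership moves explicitly

        // No 'new': construct the chunks in place inside the container.
        std::vector<Chunk> receivedChunks;
        receivedChunks.emplace_back(0, 0);
        receivedChunks.emplace_back(1, 0);

        // File data: a sized std::vector instead of a raw new[]'d char buffer.
        std::vector<char> fileData(4096);

        return owner && !receivedChunks.empty() && !fileData.empty() ? 0 : 1;
    }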
@ABaumstumpf 16 hours ago
With undefined behaviour, especially with the hardcoded out-of-bounds write, the behaviour is way, way worse than you are describing here. One of the core principles of compiler optimisation is that the compiler may assume no undefined behaviour ever occurs (duh - because it is not defined). This does not mean the compiler will do stupid things when there is UB, but rather that the compiler derives the pre-conditions under which your program does not exhibit UB. With the buffer overflow in your example, it tells the compiler that this line of code can NEVER be reached, so the compiler is free to walk back up the call stack until it hits a control-flow instruction, and it can assume that instruction will never take a value that leads to your function being called. This can lead to "funny" things:

if (rand() == -1) { std::cout << "rand(), which returns a positive number, now returned negative?"; } else { MyFunctionThat6LayersDeepHasUB(); }

Good luck figuring out the cause of that. And the best part? If you have a "char buff[32]" and access it via a function parameter - "void brokenFunc(int index, char value) { char buff[32]; buff[index] = value; }" - then you can try adding print statements that tell you the index is out of bounds, and the compiler will happily delete that print call, because it "knows" the print would only happen if the program has undefined behaviour - which is not allowed - so it "knows" the print can never happen.
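A compilable version of that last example (a sketch only; whether a particular compiler version actually deletes the check depends on flags, but it is allowed to):

    #include <cstdio>

    char buff[32];

    void brokenFunc(int index, char value) {
        if (index >= 32)
            std::printf("index out of bounds!\n");  // the optimizer may delete this branch:
        buff[index] = value;  // the unconditional write lets it assume index < 32 always holds
    }

    int main() {
        brokenFunc(3, 'a');
        return buff[3] == 'a' ? 0 : 1;
    }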
@K9Megahertz 17 hours ago
Is there no bounds checking being done on the strings? What the? I had to go look at the code from this book, and it looks so broken to me - if I'm looking at the right chapter, anyway. I see the Compare1 function returns false to signify that the strings are equal, as indicated by the if (s1 == s2) line, which compares the pointers to check whether both point to the same object. So false = strings are equal. If res is not 0, it means the inner compare function (which I do not see the code for) concluded the characters being compared were different. So why not just do "return true;" instead of "return res > 0;"? That removes the unneeded comparison. Are the strings passed in guaranteed to be equal length? Even then, now that I think about it, there's no way to terminate the loop... You're going to get access violations/seg faults with this code. Oh, and you don't need two counters for the index - just use one. You could probably do something like while (*s1++ == *s2++) anyway.
@K9Megahertz 16 hours ago
Here's my version:

    bool compare(const char* s1, const char* s2) {
        // if the strings are pointing to the same memory
        if (s1 == s2)
            return true;
        // as long as we haven't hit the end of either string
        while (*s1 != '\0' && *s2 != '\0') {
            if (*s1++ != *s2++)  // if they're different, return false
                return false;
        }
        // If we reached the end of both strings, they are the same;
        // if we only reached the end of one, they're different.
        return *s1 == '\0' && *s2 == '\0';
    }
@ABaumstumpf 17 hours ago
Commented functions..... Often I wish there was a comment, but most of the time it's something like "public Date getDate();" that gets the comment. Nobody needs that.
@markminch1906 17 hours ago
So I want to talk about the example function at the beginning of the video. Beyond optimization, I don't think it's ever a good idea to remove that length check from the comparison. Even if you know the comparison will fail once it goes beyond the length of the original string, we have no guarantees on how the string will be used inside an actual system; if someone passes something that isn't a substring of the original string, this could cause all kinds of errors. Also, you don't need i1 and i2 - they always hold the same value during the comparison, so having two is a waste; not for performance reasons, but because it makes it unclear what you're doing.
@markminch1906 16 hours ago
Now, with the unsigned int part of the code, I am a bit confused about how it ended up being only 200 ms. If this were caused by overflow, the expected behavior would be that it loops forever and never exits the for loop, because it would end up checking the same spots over and over - the roll-over behavior takes it from its max value back to 0 and counts back up. But I also wonder how accurate this video is at all: punching the same code into Compiler Explorer, I almost always get fewer instructions in the second version of compare than in the first, which is normally a good sign that one piece of code is faster.
@markminch1906 16 hours ago
There is also a lot wrong with the code other than the optimizations. Rather than unsigned int, size_t should be used - which I believe is an unsigned 64-bit integer on most platforms - because it fits what the architecture recognizes as one of the largest pieces of data you can have. Alongside that, the Compare1 function returns false if the two strings are the same, which is the opposite of what most people expect. Then there is the if (ret != 0) return ret > 0; line, which is mostly pointless - we already know ret is nonzero, so just return true, no need for the extra logic. Once again, this probably doesn't change speed, because the compiler will most likely remove it, but you make what is happening less clear to the reader with unneeded logic.
@ABaumstumpf 17 hours ago
I am doing a lot of this early optimisation. Right now I am refactoring some code that has a 3-deep loop. On my machine, with my current test data, it is only ~200 ms... sadly I know that the real systems have in excess of 5000 elements - rather than my 3. And I'd rather have code I can read than this giant 500-line kraken with continue/return statements sprinkled throughout. But darn - somebody already complained to me that I would be wasting memory because I included 3 (!!!) copies of an ID value in the function... yes, three 64-bit variables, in a function that does n^3 computation on >5000 complex objects (dependency graphs).
@jannegrey593 17 hours ago
My initial thought was that the compiler compiles them similarly in the end, or even that they are equivalent - not that I know that much about coding or compilers; I used to code a bit over 20 years ago, and I watch your videos from time to time. EDIT: So I watched the video, and it seems I was in the ballpark. Personally, I mostly wrote quite simple code, and to me switch was more readable and easier to use when there would be more than 3 cases. Not that it mattered for the code I was writing, since it still took milliseconds to execute ;)
@edmundas919 17 hours ago
I prematurely optimize not to achieve better performance, but because that's my default coding style. Examples:
- constexpr when possible;
- allocate objects on the stack when possible;
- prefer std::array over std::vector, and std::vector over other containers, unless the container is large and elements need to be removed from the beginning/middle;
- reuse heap-allocated objects when possible, e.g. a network read/write buffer;
- define one-line member functions in the class definition, so they can be inlined;
- pass large objects, or objects that internally allocate memory on the heap, around as const &/*;
- RVO/NRVO;
- if I need to loop over a large amount of data, maybe use a struct of arrays (SoA) instead of an array of structs (AoS) - see the sketch below;
- avoid interfaces when possible.
Of course I would not sacrifice code readability for questionable performance gains.
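A minimal sketch of that SoA point (illustrative particle fields), assuming the hot loop only touches positions:

    #include <cstddef>
    #include <vector>

    // AoS: each particle's fields sit together; a position-only pass
    // still drags velocity and colour through the cache.
    struct ParticleAoS { float x, y, vx, vy; unsigned colour; };

    // SoA: one array per field; a position-only pass streams
    // contiguous floats and wastes no cache space.
    struct ParticlesSoA {
        std::vector<float> x, y, vx, vy;
        std::vector<unsigned> colour;
    };

    void integrate(ParticlesSoA& p, float dt) {
        for (std::size_t i = 0; i < p.x.size(); ++i) {
            p.x[i] += p.vx[i] * dt;
            p.y[i] += p.vy[i] * dt;
        }
    }

    int main() {
        ParticlesSoA p;
        p.x = {0, 1}; p.y = {0, 0}; p.vx = {1, 1}; p.vy = {0, 1};
        p.colour = {0xffffffffu, 0xff0000ffu};
        integrate(p, 0.016f);
    }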
@StriderPulse599 17 hours ago
You should also talk about FPS. People often ignore improvements like going from 10 to 11 FPS (~9 ms difference per frame), but panic when an uncapped game goes from 600 to 500 (~0.3 ms difference). And other common problems: running all logic every frame alongside rendering, not using tools like Nvidia Nsight to see via the frame debugger what is actually going haywire under the hood, not resetting chrono properly (start value taken right before the while loop, end value including the cost of outputting all the debug info), etc.
@mytech6779 18 hours ago
I'm glad you included the first type and the third type for contrast. Some early structural choices can limit later options: important operations in a game may be very latency-sensitive but low-throughput, versus some program that does scientific number crunching, which is not affected by latency but requires high throughput per joule. Even selecting the language can be considered an initial optimization, yet could not be considered premature. Language selection could also fall into the pessimization category: if you start with a slow, highly abstracted language, writing the prototype code may be much faster and have fewer obvious bugs, but eventually you may have complications when optimizing.
@leshommesdupilly 18 hours ago
One time I refactored all my code base by replacing every i++ with ++i
@leshommesdupilly 18 hours ago
One time I refactored all my code base by using const int& for every function argument instead of just int
@leshommesdupilly 18 hours ago
Yes, I also used std::list everywhere at the time lol
@leshommesdupilly 18 hours ago
One time I wrote: struct { unsigned int a: 16; unsigned int b: 16; }
@GenericInternetter 18 hours ago
"Premature Optimization" does not exist! There is only: Proactive Optimization - designing things effectively before you start building Reactive Optimization - refactoring tested parts to make them perform as expected Unnecessary Optimization - optimizing things that don't need to be optimized, or things that you don't need at all Aim for proactive, be prepared for reactive, avoid unnecessary.
@Franschisco 18 hours ago
I'm not a developer by any means, but having to deal with grass caches in Skyrim mods, constantly updating them each time a plugin touches them just to add "free" fps, makes your point sooo true and valid.
@RawFish2DChannel 18 hours ago
Yep, very true. People like to argue about which piece of code is faster but almost never measure its performance, and also like to say stuff like "oh, if this function gets called 1 million times it's gonna be slow". I think that's called bikeshedding. The reality is that people sometimes forget they only need to optimize code after they measure that it's actually slow. I'm an unhinged fanatic when it comes to code optimization, but I still don't waste my time optimizing code that doesn't need it. I use this mentality when I'm coding:
1. ooga booga working code first
2. if the code is too slow, I measure which part of it is slow (this is very important) and optimize it (and fix bugs, ugly parts, etc)
3. repeat from the first step
Also, sometimes it takes a few iterations to make code both fast and working - like my face-culling code, for example: it was easy to make it fast, but it was producing wrong results. Very good video btw 👍
@SpiraSpiraSpira 18 hours ago
One would think the compiler would obviously optimise something as simple as a switch or a nested if statement into the fastest version by itself; this isn't the 1980s.
@d4r1u5-pk7zr 19 hours ago
You're very good - even after 2 years of C++ and 1 of Python, I still have things to learn from you (I'm the kid with the window)
@hmmmidkkk 19 hours ago
Does having a whole OOP structure with singletons and an App.run()-like thing hurt performance? Because it looks really unnecessary to me to make it all OOP, and it looks as if we're making it harder for the compiler to understand and optimize.
@lowlevelgamedev9330 19 hours ago
It is making it harder for the compiler, but since it is a small thing, the performance hit won't be visible. However, it is a case of code pessimization, and I don't do things like that, because as you said they are unnecessary and bring absolutely no benefit to my code - so don't do it 💪
@gigas3651 19 hours ago
Use Rust and optimize everything - that's what I do.
@abdullahatesli2338 19 hours ago
I mean, is it not better to optimize it from the start? Minecraft was a simple game without that many blocks, entities and other things, so there was no reason to optimize it. But now Minecraft is much more complex, and they will never optimize it well, because it is just much harder than before.
@lowlevelgamedev9330 19 hours ago
Depends on what you optimize. If you optimize important things, yes - but beginners make bad choices.
@eagle32349 19 hours ago
It's not about the maturity of the optimization, it's about the quality of the "optimization". (Aka algorithm design, and its quality)