Intrinsic Functions - Vector Processing Extensions 

javidx9
318K subscribers
125K views

Science

Published: 4 Oct 2024

Comments: 377
@javidx9 4 years ago
I will also add that branching can stall a CPU, particularly as processors attempt to "guess" which bit of code will be executed next. If it guesses wrong, it has to effectively "go back", so removing branching is a good strategy for optimisation.
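For readers wondering what "removing branching" can look like in practice, here is a minimal sketch (the function names are hypothetical, not from the video): the second version builds an all-ones/all-zeros mask from the comparison instead of jumping, so there is typically no conditional branch for the predictor to guess.

    // Branchy version: an unpredictable sign of x can cause mispredictions.
    int clamp_branchy(int x) {
        if (x < 0) return 0;
        return x;
    }

    // Branch-free version: (x >= 0) is 1 or 0, so negating it gives an
    // all-ones or all-zeros mask, and ANDing selects x or 0 with no jump.
    int clamp_branchless(int x) {
        int keep = -(int)(x >= 0);
        return x & keep;
    }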
@Astravall 4 years ago
@javidx9 ... hmm, did you ever calculate _c in your code example? It is likely in the git repository ;) but in your video I think that part is missing (e.g. at 54:36). I just see comments on what you want to achieve ... or did I overlook that part? Nevertheless a cool video; a long time ago I programmed in assembler, but nowadays I'm relying on the C# compiler ;).
@dieSpinnt 4 years ago
C++20 brings us [[likely]] and [[unlikely]], which may help to fix a branching conflict. See Jason Turner on this topic at ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-ew3wt0g99kg.html Thank you for the educational video javidx9. Stay safe. P.S.: Isn't it nice that meat-bags (humans) are still useful for optimization work and making videos? :)
@notnullnotvoid 4 years ago
@@dieSpinnt It's worth noting that the [[likely]] and [[unlikely]] tags (or the equivalent compiler-specific markup you would have used prior to C++20, such as __builtin_expect) can't really directly help the CPU to predict branches better, they mainly help make correctly-predicted branches perform better, by hinting the compiler to, for example, reorder branches to reduce the overall number of jumps in the expected code path, or improve the cache locality of the expected code path by laying it out contiguously in memory, or to decide whether to use a branch vs. a cmove.
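A minimal sketch of the C++20 attributes being discussed (the function and values below are illustrative, not from the video); as noted above, they influence code layout and other compile-time decisions rather than the hardware branch predictor itself.

    // C++20: annotate the statement on the expected path.
    int digit_value(int c) {
        if (c >= '0' && c <= '9') [[likely]] {
            return c - '0';      // hot path, kept fall-through and contiguous
        } else [[unlikely]] {
            return -1;           // cold path, may be laid out out-of-line
        }
    }

    // Pre-C++20 GCC/Clang equivalent mentioned above:
    //   if (__builtin_expect(c >= '0' && c <= '9', 1)) { ... }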
@Bvic3 4 years ago
Why is it so hard to find resources about the incredible branch prediction of processors? I only saw it mentioned in a talk by former Intel/Tesla chief processor architect Jim Keller. It's not just predicting what will be used next, but parallelising automatically by finding independent pieces - for example, initialising variables can be done before the function is called! It seems that there is a processor inside the processor doing those predictions live, depending on the current run time and other threads from other programs. The firmware can optimise machine code live, not just following the .exe machine code. And Intel wants to use neural networks to predict branching. That's how they manage to make code run faster without increasing clock speed. Also, there are professional-grade Intel compilers with licence prices higher than consumer processors that do much more advanced optimisation than the generic GCC compiler. It seems such a fascinating topic, but surprisingly secret.
@jon9103 4 years ago
@@Bvic3 if you're interested in how branch prediction works, you might want to read the Wikipedia article en.m.wikipedia.org/wiki/Branch_predictor. If you look at the reference section you'll find that much of the theory is freely available; what's secret usually isn't how things work, rather it's all the work that goes into implementing something that can actually put it into practice and be competitive. As to the Intel compiler vs GCC, a lot of that is marketing; sometimes Intel does better, sometimes GCC does, it really depends on specifics (i.e. what code is being compiled, how performance is being measured, what system it is running on, what version of the compiler, what compiler options were selected, etc.). Naturally it's easy for Intel marketing to cherry-pick scenarios that put their compiler in the best light, so it's important to understand that your results will vary.
@nikola7377 4 years ago
The most handsome C++ guy that ever walked this planet
@DlCartof 4 years ago
if u like javidx check out ChiliTomatoNoodle too, for some more sweet C++ 😃
@mjthebest7294 4 years ago
Javidx9 and ChiliTomatoNoodle are surely the best C++ teachers I ever had. :)
@maddjhdhdhdhd6917 4 years ago
The Cherno is also a great guy
@leocarvalho8051 4 years ago
There's also the Chinese guy, I don't remember the name, and Jason
@92309858 4 years ago
leo carvalho Thomas Kim or Bo Qian?
@NeilRoy 4 years ago
*head explodes* - I see a lot of basic programming videos online with all the usual fare, and they are very nice. But it's refreshing to see more advanced topics like this covered, and covered so well.
@adamjansadowski535 4 years ago
@@esepecesito bisqwit
@nickscurvy8635 2 years ago
Fr
@hu-ry 4 years ago
OMG HE HEARD OUR BEGGING FOR MORE SIMD COVERAGE! Blessed shall you be, you immortal being :D
@whirvis 4 years ago
Quite the intrinsic video! I haven't even watched the video long enough to know what it means, but I wanted to use that adjective! :)
@luisendymion9080 4 years ago
Good one lol
@achtsekundenfurz7876 3 years ago
That's a perfectly cromulent adjective!
@richardbloemenkamp8532 4 years ago
Both your C++ and your teaching skills are absolutely excellent! They should give you a Bjarne Stroustrup Award.
@RichBoud1 4 years ago
I was watching this when I couldn't get to sleep. It is so fascinating that I kept watching and watching. It didn't help me get to sleep at all ;-). Thanks for a great lesson.
@tusharsankhala9521 4 years ago
Please continue this series explaining the parts used in C++ SIMD; your way of explaining is awesome. Thanks for putting such high-quality content out in public.
@Schwuuuuup 4 years ago
That was great - and now CUDA ;-)
@Mozartenhimer 4 years ago
Then PTX assembly.
@guiorgy 3 years ago
Recently I had some C# code that would take about 50 minutes to execute and calculate. Running it in parallel got it to about 5 minutes. Using OpenCL (kinda like CUDA) got it a little under 10 seconds xd Edit: And yes, I did run the code for 50 minutes xd
@Schwuuuuup 3 years ago
@@guiorgy I wish I had the time to bring myself up to speed with CUDA or OpenCL, but besides a little bit of programming Arduinos I'm not a C programmer, and I struggle with basic concepts like 'const * char const' etc. I have a project regarding a gamified genetic algorithm which I did in Java years ago, and sometime I have to recode it in C, GPU computing and a powerful graphics engine.
@aamirpashah7159 3 months ago
@@Schwuuuuup write it like this: const * char data; this will make more sense
@Schwuuuuup 3 months ago
@@aamirpashah7159 dude, my post is over 3 years old
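For anyone else puzzled by the exchange above: "const * char" as written is not valid C or C++. A small self-contained example of the spellings that do compile (the variable names are just for illustration):

    #include <cstdio>

    int main() {
        char buf[] = "hi";
        const char *p1 = buf;         // pointer to const char: cannot write through p1
        char const *p2 = buf;         // same meaning, const written after the type
        char *const p3 = buf;         // const pointer to char: p3 cannot be reseated
        const char *const p4 = buf;   // const pointer to const char
        std::printf("%s %s %s %s\n", p1, p2, p3, p4);
    }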
@londonbobby 4 years ago
A bit late to the party, but here goes... This video has inspired me to try SIMD programming. I have long been a fan of Mandelbrots and many years ago wrote a program to plot and explore them. Eventually I got myself a PC with an i7 processor and explored making my Mandelbrot program multi-threaded, which worked well. Now is the time to upgrade it again with SIMD. Now my CPU is still the same i7 which does not support anything past SSE4, but my compiler of choice is Delphi 6 (don't judge), which completely does not support intrinsic functions at all. However it does have an in-built assembler which supports up to SSE2. So my task has been to translate all this C++ code into Pascal/assembler. I have eventually got this to work - a few radical changes were required - e.g. I only have 8 x 128-bit xmm registers to play with, so only 2 pixels at a time, but the speed-up is amazing. My program is rendering full-screen images in just a few hundreds of milliseconds (sometimes much less) where it was taking multiple seconds before. The most complex image so far has only taken slightly over a second to process. Thank you so much for explaining this in such simple terms that I was able to do this and learn about SIMD.
@javidx9 4 years ago
Hey that's great Bobby! SSE4 is no slouch, and I'm pleased you got it working to your expectations. I must confess I'd not considered the availability of intrinsics in other languages before, so this is quite interesting.
@Dave_thenerd 4 years ago
@@javidx9 C# recently added intrinsics via the System.Numerics namespace and they work pretty well. See: devblogs.microsoft.com/dotnet/hardware-intrinsics-in-net-core/ and: docs.microsoft.com/en-us/dotnet/api/system.numerics?view=netcore-3.1
@ddummer 4 years ago
Just watched Linus Tech Tips where Anthony mentioned "AVX 512" support on a new MacBook, and since I recently watched this video I could say "Oh yeah... I understand that... in depth." :)
@obinator9065 3 years ago
Yeah thing is... AVX512 takes a way bigger CPU hit, not worth it.
@ademarsj 1 year ago
Interesting. I watched the video and thought: "Wow, what an amazing teacher, full of content", then I subscribed, checked the channel's videos and realized that when I was at the beginning of my degree I visited this same channel for starter-level content, and now, almost finishing the course, here I am, watching something more complex. Moral of the story: the channel and its creator are both incredible. Thank you!!! Sorry for my poor English....
@inon4037 4 years ago
Exactly when I needed it! The timing couldn't be more perfect
@Cyberspine 4 years ago
Thank you for this video. I took a CS course in parallel computing this semester, and it demystified a lot of what makes high-performance code tick. This video helped me to connect what I've learned with what is going on in an IDE like Visual Studio.
@jsflood 4 years ago
Great video, it went from totally cryptic gibberish code to understandable logical code thanks to your elite explaining. Thank you !
@javidx9 4 years ago
XD err thanks John!
@rperanen 4 years ago
Another great video and a little trip down memory lane. A few years ago, I had to do image processing on older hardware which did not have any GPU acceleration, and some algorithms had to be written with SIMD. After getting my mind wrapped around working in vector-oriented mode, the project was surprisingly pleasant to code.
@pythagorasaurusrex9853 3 years ago
Hell yeah! I tried those functions myself. Amazing tutorial. The speed gain is insane combined with using threads :) Thank you!
@valkarion9 4 years ago
I will have a Computer Architecture exam next week and a significant chunk of the material is about SIMD extensions but since it's a university course it's all theory, so it's nice to see it in action.
@Mrav79 4 years ago
So this replaces the old-school approach of having an __asm {} block to hand-optimize logic the compiler could not, like we find in some older open-sourced game engines, with organized intrinsic functions exposing modern CPU instructions via modern compilers. Nice.
@Antagon666 3 years ago
When I first looked at the intrinsic code, I thought how complicated it was... But you explained it perfectly, something clicked, and I realized how easy it really is. Thanks to AVX, I'm getting double the performance in my Mandelbrot set renderer. The best thing is, it even works on multiple cores with an OpenMP directive. The performance on the CPU is as good as, if not better than, on the GPU.
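For context, the OpenMP directive mentioned above typically amounts to a single pragma over the row loop. A rough sketch (function and variable names are illustrative, compile with -fopenmp; iters must hold width * height elements):

    #include <complex>
    #include <cstddef>
    #include <vector>

    void mandelbrot(std::vector<int>& iters, int width, int height, int max_it) {
        #pragma omp parallel for schedule(dynamic)   // each row is independent
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                std::complex<double> c(-2.0 + 3.0 * x / width, -1.5 + 3.0 * y / height);
                std::complex<double> z = 0.0;
                int n = 0;
                while (std::norm(z) < 4.0 && n < max_it) { z = z * z + c; ++n; }
                iters[std::size_t(y) * width + x] = n;
            }
        }
    }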
@lincolnsand5127 4 years ago
I used to heavily use SSE2. Excited to see you cover AVX256
@truboxl 4 years ago
ohhhh.... that's why it's called avx2 for short...
@ilieschamkar6767 1 year ago
@@truboxl now it makes sense to me as well, even though I wouldn't shorten something already short
@wowLinh 4 years ago
Amazing as usual!! I am simply amazed by the quality of your videos, topics and explanations.
@javidx9 4 years ago
Thanks wowLinh - It always pleases me when I see you comment - you've been around a loooong time now XD
@jajwarehouse1 4 years ago
It would be very interesting to see this programmed for CUDA processing.
@judgeomega 4 years ago
GPU optimization information is rare and valuable. I don't know if he'd be willing to expose such secrets of the dark arts.
@michelefaedi 4 years ago
SIMD is better than CUDA in some cases. It doesn't need to transfer the data to the GPU, and the loop is faster with SIMD (it's complicated to explain why)
@karma6746 4 years ago
@@michelefaedi Oh but you do need to transfer data to the GPU anyways. GPU is the one that actually does the drawing, isn't it?
@michelefaedi 4 years ago
@@karma6746 only if you consider graphics calculations. CUDA can run any algorithm you want, even ones that don't involve video directly
@achtsekundenfurz7876 3 years ago
Fractals and similar iterations sound like a close second to me. There's very little to be transferred into the GPU, and very little back out. Moving the heavy lifting into the GPU could be very profitable, even more so since modern GPUs tend to have 100s of cores, even the better consumer-grade models. Not exactly your everyday algorithm, but even if you want to save the data to disk, it looks very promising. If you don't, real-time animation in full HD is definitely on the horizon thanks to Cuda. For other stuff, it can be the other way around. Instead of freeing CPU cores, it could tie cores down with management duties (or even worse: tie ONE core and block the others out), which is probably a workload for which most modern OSes are not optimized (unlike processing in the CPU or pure output generation in the GPU).
@gosnooky 4 years ago
I'm tired and I need sleep. Oh! A new javidx video.
@arcadely 4 years ago
Ha! And here it is: the SIMD video I asked for earlier today, along with plenty of others who asked before that, because I didn't check the post date on the brute forcing video. Great stuff!
@javidx9 4 years ago
lol thanks arcade, I was gonna say something earlier, but I figured you'd find it! XD
@Kollegah9997 4 years ago
You sir are a beast! I'm a senior developer who has been coding for 10 years; your knowledge is serious :)
@dorjderemnamsraijav5182 4 years ago
Can't get enough of your videos javidx9! Love your videos man
@malstroemphi1096 3 years ago
I believe "pd" stands for "packed double" and not "parallel double"
@jayasribhattacharya2048 4 years ago
You are just awesome. I have learned many things from your videos. 😍😀 thank you so much 😊.
@javidx9 4 years ago
Thanks Jayasri!
@LevPleshkov 4 years ago
Probably the most valuable video on RU-vid so far!
@simonegiuliani4913 4 years ago
You are very gifted at explaining things.
@benjaminshinar9509 3 years ago
I will need to watch this again in the future.
@will1am 4 years ago
By far the best video about this topic on youtube overall. I only found videos that were much less detailed, or way too detailed on some specific parts. Cheers :)
@Dr10na1995 4 years ago
So that is why these AVX flags are used in GCC! Thank you for the explanation :)
@Gabriel38196 4 years ago
Thanks for what you are doing for the community javid.
@adamodimattia 4 years ago
Incredibly informative, the most hardcore but so enjoyable. Personally, I found masking not the hardest thing in it, instead it was the x positions and offsets, especially 52:04 - 52:12, what a... Fantastic stuff, thanks to your channel I really got more and more interested in more low level coding. The way you present it makes it much less scary, even the assembly code :)
@Z0MBUSTER 4 years ago
I showed one of your videos to my father to make him believe you were me - we look exactly alike. It took him a good minute to realise it wasn't me!!! We laughed so hard, keep up the good work =)
@javidx9 4 years ago
a doppelganger eh?
@laureven 4 years ago
Is there a space where we can give ideas for new videos (so we have a list) and then vote on which subject gets selected? ...Obviously this is Your channel and Your vote is final, but one thing is certain: You have a gift, and Your delivery and Your voice are in perfect balance - a very, very good teacher. We are very lucky You have time for those videos.
@javidx9 4 years ago
Hi Marcin - kind of, but mostly no - On the discord we have a requests board, though fundamentally it requires that I feel confident enough about the subject matter to demonstrate it. I simply won't make videos about subjects I don't have a good understanding/experience of - they wouldn't help anybody! Also, I often disappoint people with the timing of videos; since this is a hobby for me, it helps if the video I'm making is related to some project I'm working on at the time. In the case of intrinsics for example, I've been using them a lot in a different project which isn't a video, so it's fresh in my mind. But always happy to see a comment from your good self, a long time supporter, and I thank you for that!
@karma6746 4 years ago
Your ability to simplify complicated stuff borders on the divine - Thank You!
@darkobakula5190 1 year ago
As always, the best content one can find on RU-vid!
@spinthma 4 years ago
Thank you for the insights to programming with intrinsics!
@toma.a7146 3 years ago
It is nice to see more complicated stuff like this on RU-vid!
@darthxertor3617 4 years ago
So THIS is an actual practical use of bit masks. Very good to know, thank you!
@hippzhipos2385 3 years ago
You are an absolute legend. I was wondering how much experience one needs to have to get that good
@leonbutlermusic 4 years ago
Excellent explanation
@JackPunter2012 4 years ago
Great video as always! For those who want a more detailed look at the difference in timings for cache vs memory vs hard drive, I recommend the talk "Getting Nowhere Faster" by Chandler Carruth at CppCon 2017.
@notnullnotvoid 4 years ago
I'm not quite sure why you talked about cache locality when you did, as it's unrelated to the loop unrolling optimization. The cache behavior of the loop is the same either way - the reason it gets unrolled is just to reduce loop overhead (fewer compare and branch instructions per iteration). Other than that, this seems like a great video for introducing people to SIMD programming. Your explanations of lanes vs. register width, masking, and the utility of intrinsics in general, are all very clear, concise, and thorough. Good stuff!
@javidx9 4 years ago
Thanks Not Null - I kind of agree with you. I wanted to fit in locality somewhere, and there is some truth to unrolling being advantageous to cache usage, for the reason you describe in fact - aside from branching having its own overhead which you want to reduce, and of course branch prediction being a factor, the branch test itself could potentially pollute the cache. SIMD stuff works best when streamed, and there are in fact cache organisation intrinsic functions to hint where the data should be moved to before the next set of instructions. Streaming of course works best with contiguous data in memory, and typically such memory is moved around "together". Once that extension pipeline is fired up, you want to cram as much data through it as possible, so I don't agree that it's unrelated, but I do concede it is secondary to the chaos branching can cause.
@notnullnotvoid 4 years ago
@@javidx9 Doesn't the loop condition (at least in this case) just come down to a compare instruction and a conditional jump on the relevant flag bit? I don't see how that would pollute the cache, but I might be missing something.
@javidx9 4 years ago
@@notnullnotvoid On powerful processors such as desktop ones, it's not quite that simple. Yes, the condition is based off a single bit, but two things: firstly, the pipelined nature of the processor requires branch prediction, and flushing out the pipeline is undesirable for performance; secondly, the arguments for the condition itself may require memory to be read, thus potentially polluting the cache.
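To make the unrolling point in this thread concrete, here is a minimal sketch (the function is illustrative, not from the video) of unrolling a reduction by four, so the loop's compare-and-branch cost is paid once per four elements instead of once per element:

    #include <cstddef>

    double sum_unrolled(const double* a, std::size_t n) {
        double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
        std::size_t i = 0;
        for (; i + 4 <= n; i += 4) {      // one loop test per four accumulations
            s0 += a[i + 0];
            s1 += a[i + 1];
            s2 += a[i + 2];
            s3 += a[i + 3];
        }
        for (; i < n; ++i) s0 += a[i];    // scalar remainder
        return (s0 + s1) + (s2 + s3);
    }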
@mido09z 4 years ago
Great video and amazing channel. I just want to point out a small note at 41:57 which is n < iterations is not the same as iterations > n because of the case where n = iterations
@javidx9 4 years ago
This is a good point Mohamed - combined with the way the loop is structured now, I think this approach always does one further iteration compared with the reference function.
@achtsekundenfurz7876 3 years ago
(1) n < iterations; (2) iterations > n. If n = iterations, both expressions are false, since both comparators exclude equality. They are in fact the same.
@qwedschy8285 4 years ago
Spending my summer break learning more about coding, but what can I say, these videos are too good! Thank you.
@zubble7144 4 years ago
It might be instructional to add the benefit of using intrinsics by showing a side-by-side video of the fractal generations. IOW a "what is there to gain" for all your extra coding efforts. Well done, I have recommended this on IDZ
@javidx9 4 years ago
Hi Zubble and thanks - In principle this was a follow-up to the previous video that did show the difference with/without intrinsics; it's just that one did not show the intrinsic code in detail.
@Drunkenkatana 4 years ago
Thanks for your videos! I love the way you explain things!
@duality4y 4 years ago
need more of this
@tmbarral664 3 years ago
Bow to you, Sir, for the quality of your explanation. I love how your mind works.
@danielkrajnik3817 3 years ago
31:15 just a detail, but I think 'p' in '_mm256_mul_pd' stands for 'packed' not 'parallel'
@axelanderson2030 2 years ago
What does epi stand for? I assume 'something packed integer'
@orbik_fin 1 year ago
@@axelanderson2030 "extended packed integer", for 128+ bit registers, because _pi* was already taken for MMX.
@axelanderson2030 1 year ago
@@orbik_fin thanks
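Putting the naming discussion in this thread together: the suffix describes the packed element type. A few real AVX/AVX2 intrinsics as examples (the wrapper function names are illustrative, and <immintrin.h> is assumed to be available):

    #include <immintrin.h>

    // pd: packed double, 4 x 64-bit floats in a 256-bit register
    __m256d mul_pd_example(__m256d a, __m256d b)  { return _mm256_mul_pd(a, b); }

    // ps: packed single, 8 x 32-bit floats
    __m256  mul_ps_example(__m256 a, __m256 b)    { return _mm256_mul_ps(a, b); }

    // epi64: "extended packed" 64-bit integers, 4 lanes (requires AVX2)
    __m256i add_epi64_example(__m256i a, __m256i b) { return _mm256_add_epi64(a, b); }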
@dorjderemnamsraijav5182 4 years ago
Javidx9, my hero. Why? He reads every single comment I wrote on this channel, and I'm sure that applies to everyone else. If I become a successful person one day, the reason must be your videos. They are very well made, and he explains every single step he makes in his videos. I can't help with the financial part right now, but I will make sure to pay back what you did for me in the future after I get a job. You are a very cool man (I can't even describe it with words). And thinking about what you did for me makes me so emotional.
@javidx9 4 years ago
lol, thank you Dode XD
@christophfriedrich5092 4 years ago
Love your vids. Even if I don't understand them the first time I watch, because I'm just a simple web developer (PHP, NodeJS), the way you explain helps me to understand more about our computers and the way programs work (and I hope they make me a better programmer - even on simpler stuff ^^)
@alexkval 4 years ago
Thank you very much for such a detailed explanation 👍
@zrodger2296 1 year ago
I think I found a really cool problem that could use intrinsics, so I'm excited. A couple of other optimizations and I'm aiming to solve out to 1 million instead of grinding it out to 50 thousand or so. Great video!
@hl2mukkel 4 years ago
Thank you so much for this video, I learned so much! You truly are a blessing for the C++ RU-vid community :-)
@rachelmaxwell4936 4 years ago
An excellent video! Thank you for taking the time to respond to user feedback. Appreciate the details about masks and how to use them to perform logical operations. I've been learning x64 programming via "Beginning x64 Assembly Programming: From Novice to AVX Professional" by Jo Van Hoey, and this is an incredible supplement to the C/C++ side of things.
@Spikehead777 4 years ago
Intrinsics look scary. They're not as scary now that I've seen this video!
@mycotina6438 3 years ago
Loved it! Simple, easy to understand yet complete. Thank you!
@yuushabio4529 4 years ago
Finally, a video on RU-vid I can relate to 😆
@JonnyRobbie 4 years ago
Jesus Christ, you've outdone yourself. But thank you, I like videos where I learn something new and this certainly exceeded that by a long shot.
@achtsekundenfurz7876 3 years ago
_in be4_ "Jesus take the mousewheel"
@nishantraj8391 4 years ago
Are you a wizard? I was trying to learn about this just recently, and then your video comes out. Thank You
@jordanclarke7283 4 years ago
Mind blown! 🤯 Excellent video!
@rhutajoshi9288 2 years ago
This is so well explained!! Thank you!
@NolePTR 4 years ago
I'd love more technical videos like this in the future. It's hard to get tutorials for this type of stuff.
@leepro 2 years ago
_c is missing in the video but I found it in the github repo. Thanks for the video!!!
@nonchip 4 years ago
I like how VS shows a small "
@dozafixusa 4 years ago
At 49:40, it is also possible to use _mm256_extract_epi64 to get simple types out of a register again, which would get rid of the ifdef. Having done some intrinsic programming before, I think that your video is an amazing resource on how to program with its quirks in mind. Well, all of your videos are an amazing resource - keep up the good work! :)
@javidx9 4 years ago
Cheers buddy. The problem I find with intrinsics is there are so many functions, but I've not found a sensible "high level" list of function categories XD so thanks!
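A small sketch of the suggestion above (the function name is illustrative; assumes a toolchain that provides this intrinsic, and the lane index must be a compile-time constant between 0 and 3):

    #include <immintrin.h>
    #include <cstdint>

    std::int64_t lane2(__m256i v) {
        return _mm256_extract_epi64(v, 2);   // pull lane 2 out as a plain 64-bit integer
    }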
@motbus3 4 years ago
still the best c++ videos
@federicopanichi9874 2 years ago
nice, nice, nice !!!! More of those Hardcore videos. Pleeaaase :)
@mika2666 4 years ago
Definitely liked and subscribed for this one, already had assembly and all the bitwise and mask stuff in school but this really helped me with how to convert complicated things into intrinsics 😄
@cracksoftcond 6 months ago
wonderful thank you!
@Andrew90046zero 4 years ago
I think what there needs to be is a nice API that allows you to "agnostically" use the SIMD extensions without needing to know which ones your CPU supports. The API would provide a way to manually leverage the registers in a more human-readable way without having to pay attention to choosing the right set for your CPU. It just generates the right intrinsics for your system, and you won't need to think about whether the registers are 128, 256, or 512 bits. The system will pack in the data automatically, and it's up to you to manually use it to process data in bulk.
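Something close to this exists as std::experimental::simd from the Parallelism TS v2 (shipped with recent libstdc++; availability varies by compiler, so treat this as a sketch under that assumption). The width is chosen for the target CPU by native_simd, and the same code covers 128-, 256-, or 512-bit targets:

    #include <cstddef>
    #include <experimental/simd>
    namespace stdx = std::experimental;

    void scale(float* data, std::size_t n, float k) {
        using vf = stdx::native_simd<float>;            // width picked for the target CPU
        std::size_t i = 0;
        for (; i + vf::size() <= n; i += vf::size()) {
            vf v(&data[i], stdx::element_aligned);      // load a full register's worth
            v *= k;                                     // one expression, whatever the width
            v.copy_to(&data[i], stdx::element_aligned); // store back
        }
        for (; i < n; ++i) data[i] *= k;                // scalar remainder
    }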
@TOMMYMAJORS 2 years ago
incredible video, thank you
@ristopaasivirta9770 3 years ago
The way to outsmart the compiler is to become the compiler!
@GNARGNARHEAD 3 years ago
incredibly helpful, thanks :)
@189Blake 3 years ago
When you called the goto function, I was like: Wait a second, isn't this just assembly then? And indeed it was 😅
@miguel_franca 4 years ago
Loved it! Clear explanations, awesome video
@Wayne-wo1wc 4 years ago
Thank you Dave
@JanHorcicka 4 years ago
Great video! Thank you very much.
4 years ago
You’re such a smart dude.
@47Mortuus 3 years ago
44:34 ++++ You don't need to use the comparison mask to select/blend between '0' and '1'. Since 'all ones' is the two's complement representation of '-1', you can simply subtract the mask from your iteration counter (x + 1 == x - (-1) and x + 0 == x - 0). You could've explained the blend intrinsic with this code segment, going from where you were with your AND equivalent, but also showing off the trick I mentioned afterwards.
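A sketch of the trick described above, using AVX/AVX2 intrinsics in the style of the video (the function and variable names here are illustrative): a lane that passes the comparison is all ones, i.e. -1 as a signed integer, so subtracting the mask increments the counter only in those lanes, with no AND and no extra add.

    #include <immintrin.h>

    __m256i bump_counters(__m256i n, __m256d mag2, __m256d four) {
        __m256d m  = _mm256_cmp_pd(mag2, four, _CMP_LT_OQ); // all ones where |z|^2 < 4
        __m256i mi = _mm256_castpd_si256(m);                // reinterpret the mask as integers
        return _mm256_sub_epi64(n, mi);                     // n - (-1) == n + 1 in the "true" lanes
    }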
@markv559 4 years ago
Premature optimization might be the root of all evil, but watching this video is pre-premature optimization and is very enjoyable!
@danielkrajnik3817 3 years ago
This is brilliant
@99.googolplex.percent 2 years ago
Threading is the subject that I'd like to hear about from you.
@paulmoore7964 4 years ago
One of the biggest issues today is that CPU % meters do not show stall time. So you can have a horrifically inefficient data layout and be running at 5% of CPU speed, but the CPU meter will show 100%. I am amazed that there is still no way in perfmon, VS, ... to see the real CPU load. I did not realize how truly huge the impact was.
@josedejesuslopezdiaz 4 years ago
thank u for your amazing content.
@ZOMGWTFALLNAMESTAKEN 4 years ago
I know nothing about coding and have 0 experience; I do like these videos and hope they continue
@peterbonnema8913 4 years ago
Yes! This is great. More advanced topics please!!
@akhial 4 years ago
Awesome! Thanks for this!
@Koldulok 4 years ago
An ALU "It's the thong that does the stuff" - OLC 2020 (7:20) I love it
@achtsekundenfurz7876 3 years ago
"thong" ? hahah Anyway, yeah that's a good quote; should be on a t-shirt: "It's the thing that does the stuff" - Programming Bible, Javi 7:20
@eopXD 2 years ago
Thank you for the video.
@danbopes6699 4 years ago
One big thing I noticed was that _c was not actually set in the video, but was initialized on github. This confused me greatly: _c = _mm256_and_si256(_one, _mask2);
@MrHatoi 4 years ago
The function refers to the bitwise AND operation. _mask2 contains numbers which are either all 1s or all 0s. If you AND something with all 1s, it doesn't change its value, while if you AND something with all 0s, it returns 0. _one contains four constant 1s. This line stores in _c the result of ANDing each value in _mask2 with 1. Essentially, this line converts each value in the mask from being either 64 ones or 64 zeroes to just one or zero.
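For context, a sketch of that step in isolation (the wrapper name is illustrative; the variable names follow the video/repository, and AVX2 is assumed): each lane of the mask is all ones or all zeros, so ANDing with a register of 64-bit 1s collapses it to exactly 1 or 0.

    #include <immintrin.h>

    __m256i mask_to_increment(__m256i mask2) {
        __m256i one = _mm256_set1_epi64x(1);     // four 64-bit lanes, each holding 1
        return _mm256_and_si256(one, mask2);     // 1 where the lane was "true", 0 otherwise
    }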
@АркадійЦиганов 3 years ago
Thanks! It would be cool if you compared the execution time with intrinsics and without.
@atrumluminarium 4 years ago
Yes! Thank you for the video
@philtoa334 4 years ago
So nice.
@MrNathanShow 4 years ago
Ty!