
Writing Code That Runs FAST on a GPU 

Low Level Learning
558K subscribers
535K views

In this video, we talk about why GPUs are better suited to parallelized tasks. We go into how a GPU is better than a CPU at certain tasks. Finally, we set up the NVIDIA CUDA programming packages to use the CUDA API in Visual Studio.
GPUs are a great platform to execute code that can take advantage of hyper-parallelization. For example, in this video we show the difference between adding vectors on a CPU versus adding vectors on a GPU. By taking advantage of the CUDA parallelization framework, we can do mass addition in parallel.
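
For reference, here is a minimal sketch of the kind of vector-add program the video builds. The names (vectorAdd, N, the launch configuration) are illustrative and not necessarily identical to the code shown on screen:

    #include <cuda_runtime.h>
    #include <stdio.h>

    #define N 256

    // Each thread adds one element; threadIdx.x selects the element.
    __global__ void vectorAdd(const int *a, const int *b, int *c) {
        int i = threadIdx.x;
        c[i] = a[i] + b[i];
    }

    int main(void) {
        int a[N], b[N], c[N];
        for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2 * i; }

        int *a_d, *b_d, *c_d;
        cudaMalloc((void **)&a_d, sizeof(a));
        cudaMalloc((void **)&b_d, sizeof(b));
        cudaMalloc((void **)&c_d, sizeof(c));

        cudaMemcpy(a_d, a, sizeof(a), cudaMemcpyHostToDevice);
        cudaMemcpy(b_d, b, sizeof(b), cudaMemcpyHostToDevice);

        vectorAdd<<<1, N>>>(a_d, b_d, c_d);   // one block of N threads

        cudaMemcpy(c, c_d, sizeof(c), cudaMemcpyDeviceToHost);
        printf("c[3] = %d\n", c[3]);

        cudaFree(a_d); cudaFree(b_d); cudaFree(c_d);
        return 0;
    }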
🏫 COURSES 🏫 Check out my new courses at lowlevel.academy
🙌 SUPPORT THE CHANNEL 🙌 Become a Low Level Associate and support the channel at / lowlevellearning

Published: Jul 9, 2021

Comments: 308
@empireempire3545 (1 year ago)
You could make a series out of this - basics of CUDA are trivial, but there are many, many performance traps in GPGPU.
@VulpeculaJoy (1 year ago)
Especially once you get into cuBLAS and Thrust territory, things get complicated really quickly.
@Freakinkat (1 year ago)
@@VulpeculaJoy You're not joking! Me: "Throws hands in the air in frustration."
@andrebrait (1 year ago)
Back when I tried GPGPU, the most astonishing performance trap was just memory handling. Selecting what data to put into what kind of memory and utilizing them was very hard, but when you did it right the thing performed 10x better.
@deadvoicegame (4 days ago)
@@andrebrait Please, I need some help with that. If you can help a bit and guide me, it would be much appreciated 👍🙏
@shanebenning3846 (2 years ago)
This was super insightful, never would have thought it'd be that easy... I need to look more into CUDA programming now.
@cedricvillani8502 (2 years ago)
It’s definitely not but by now you have realized that 😮😅
@dominikkruk5235 (2 years ago)
Finally I can use my RTX 3060 Ti to do something useful...
@ojoaoprocopio (11 months ago)
Nice, now you can bubble sort an array.
@FuzeEdits (11 months ago)
Bogo sort
@vuhuy8952 (11 months ago)
Bogo sort looks like a gamble with fate.
@hikari1690 (6 months ago)
I use my gpu to play league
@LeicaM11 (4 months ago)
I do love grid computing and parallelism. I really want to learn how to program my new eGPU (RTX 3080).
@peterbulyaki (1 year ago)
Excellent tutorial. One minor thing I would have mentioned in your video is that copying between device and host (or host and device) is a relatively expensive operation, since you are moving data from/to the CPU to/from the GPU through the PCI Express bus, which, no matter how fast or modern your system is, is still a bottleneck compared to transfers between the CPU and memory or the GPU and its DRAM. So the performance advantage is only noticeable when the duration of data copying is relatively short compared to the task execution time.
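
One way to see that cost in practice is to time the transfers and the kernel separately with CUDA events. A rough sketch, reusing the pointer and kernel names assumed from the video's example:

    // Time host->device copies and the kernel separately.
    cudaEvent_t start, stop;
    float copyMs = 0.0f, kernelMs = 0.0f;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    cudaMemcpy(a_d, a, sizeof(a), cudaMemcpyHostToDevice);
    cudaMemcpy(b_d, b, sizeof(b), cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&copyMs, start, stop);

    cudaEventRecord(start);
    vectorAdd<<<1, N>>>(a_d, b_d, c_d);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&kernelMs, start, stop);

    printf("copy: %.3f ms, kernel: %.3f ms\n", copyMs, kernelMs);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);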
@MrSofazocker (1 year ago)
Hm... yes, but only if your data is of significant size as well. Also, the bus speed is fixed by the platform; it's only a concern if your GPU is significantly faster than it can fetch new data. Otherwise, yes, agreed. You always have to test everything. The best example is Unreal Engine 5, where after testing it turns out software rasterization is faster than doing it on the GPU for some reason 😂 Always test whether what you do would actually benefit from switching the compute device and dealing with copying data, etc.
@T33K3SS3LCH3N (11 months ago)
Yeah, that part hit me in the face when I was writing a 3D engine. Starting at a few hundred to a thousand objects, it is not so much the complexity of shading each object, but the number of separate draw calls to the GPU that slows things down to a crawl. In that case it is the latency of communication between the CPU and GPU, rather than the bandwidth, that causes problems, but the fundamental issue is the same: sending data between the two is slow. I had found this cool technique that would speed up deferred shading a lot more by doing additional checks for what area would actually be hit by light sources. The problem was that it meant 2 draw calls per light source instead of 1. Even though this saved the GPU itself a lot of work, it ended up dramatically decreasing performance, since it was the draw calls that bottlenecked me. For the mentioned scenario, the proper solution is batched calls, where a single call to the GPU can render many objects at once (particularly identical ones that use the same shader and ideally the same base mesh).
@peterbulyaki (9 months ago)
The more vram you have the larger training datasets you can use. For certain tasks cards with low vram are perfectly usable, for others not.
@LuLeBe (6 months ago)
@@gonda8365 Sometimes low VRAM also just doesn't work at all. Like Blender CUDA rendering: if the scene doesn't fit in VRAM, it won't render, not even in a million hours.
@perinoid (6 months ago)
I totally agree. I wanted to write a very similar remark, just noticed yours.
@Borszczuk (1 year ago)
This fight at @7:30 with "*" placement was hilarious. I laughed so hard when you gave up :)
@BenjaminWheeler0510 (1 year ago)
As someone who doesn't have NVIDIA, you should do an OpenCL or OpenGL series, which everyone can use! Unless there's something special about CUDA, I never see the cross-platform ones on YouTube…
@cykkm (1 year ago)
Look at Intel's oneAPI Base Toolkit, which includes a DPC++ SYCL compiler. It may hide all this low-level stuff, which is too hard to do efficiently. By default it works best, Intel being Intel, with OpenCL (3.0 for sure, not so much 2.2; doesn't with 2.1), but there is already experimental support for CUDA out of the box. SYCL is an open, GPU-agnostic (ahem, supposed to be) standard.
CUDA code looks like C++, but in fact you think about hardware all the time; it's harder than assembly, really. OpenCL is no simpler. Looks are deceptive. This is why I believe a good compiler will eventually beat low-level CUDA/OpenCL coding. Who would hand-optimize Intel CPU code these days and beat the optimizer? High-level distributed/parallel C++ (DPC++) is the way to look into the future.
BTW, OpenCL is for compute, OpenGL is for 3D drawing/rendering; it's not "or." Entirely different APIs. OpenCL takes on the same task as CUDA. OpenGL is cross-platform and old-school (ah, that 80's feel!); for Windows-only, DirectX is preferable.
If you take the oneAPI route, one piece of advice is to choose which components to install. The full thing takes 40GB installed or so, and takes an awful time to install and upgrade, even on a second-highest-end Gen12 CPU and a very fast PCIe SSD. And you hardly need the data analytics or video compression libraries.
@LRTOTAL (1 year ago)
Compute shaders! They run on all GPUs.
@ben_jammin242 (1 year ago)
Is Vulkan worth learning?
@PutsOnSneakers (1 year ago)
@@ben_jammin242 I was just about to mention that Vulkan is the future, while OpenGL lags far behind in terms of being adopted by the masses.
@whannabi (1 year ago)
@@PutsOnSneakers cadum, cadum
@lucasgasparino6141 (1 year ago)
Amazing intro to CUDA, man! For those interested in GPU programming, I'd also recommend learning OpenACC. Not as powerful as CUDA, but it gives you a nice "first working" GPU program to get an idea before suffering with low-level optimization hehe. Would be nice to see a follow-up to this using both MPI and CUDA to work with multiple GPUs :D
@rezq2883 (11 months ago)
Amazing video! I love the way you explain things thoroughly enough that a beginner can easily understand it without explaining *too* much and droning on. Thorough yet concise, great job :)
@LowLevelLearning (11 months ago)
You're very welcome!
@Antagon666 (2 years ago)
No dislikes, no wonder why :) I finally found a comprehensive tutorial, because most of them fail to explain the basic mindset behind CUDA programming.
@widrolo (2 years ago)
There are 5 now, probably people who didn't like him personally, or trolls...
@nomadshiba (2 years ago)
@@widrolo Or people who don't like multithreading for some weird reason, or people who know some different framework for this and got annoyed he showed this one. IDK, it can be anything.
@balern4 (1 year ago)
You can see dislikes?
@SGIMartin (1 year ago)
@@widrolo It's AMD engineers.
@NuLuumo (7 days ago)
@@balern4 There's an extension on the Chrome Web Store that adds them back.
@0xggbrnr (11 months ago)
This is a lot more straightforward than I thought it would be. Basically, replace all allocation operations and pointer operations with CUDA framework types and functions. 😅
@mrmplatt (9 months ago)
This was a super cool video. I'm currently learning assembly so seeing how to operate at a pretty low level was very interesting to me.
@bogdandumitrescu8987 (2 years ago)
Useful, but the discussion about the block size and grid size was avoided. I think there should be a video focused only on this topic as it's not easy to digest, especially for new CUDA programmers. A comparison with OpenCL would be even better :)
@JM-fo3yb (2 years ago)
Keep up the good content boss !
@LowLevelLearning (2 years ago)
@JM thank you very much sir! Will do! :D
@herrxerex8484 (2 years ago)
I discovered your channel recently and so far I am loving it.
@LowLevelLearning (2 years ago)
Glad you enjoy it!
@Shamysoza92 (2 years ago)
Your channel is amazing! Just found it, and I must say you have a great way of teaching. Kudos for that and congrats on the amazing content.
@LowLevelLearning (2 years ago)
Thanks so much!
@psevekar (2 years ago)
You explained it so well, thanks a lot
@thorasmund (1 year ago)
Great video! Short and to the point, just enough to get me started!
@ramezanifard (1 year ago)
Very nice tutorial. I really liked it. It's brief, to the point and very clear. Thanks. Could you please make a video for the same example but in Linux?
@a1nd23 (1 year ago)
Good video. It would be interesting to make the vectors huge and run some benchmarks comparing the CUDA function to the CPU version.
@alzeNL (6 months ago)
I think, armed with this video, it's something you could do yourself :) The best students are the ones that use what was taught.
@Ellefsen97 (6 months ago)
I would imagine the CPU implementation would win performance wise when it comes to simple addition, since copying memory to and from the GPU is a pretty expensive operation. Especially if we make the benchmarking fair and utilize threads on the CPU implementation.
@Frost_Byte_Tech (9 months ago)
I'd really love to see more videos like these
@rampage_sl (1 year ago)
Hey this is super useful! I elected High Performance Computing and Microprocessors and Embedded Systems modules for my degree, and this channel has become my go-to guide.
@MCgranat999 (1 year ago)
That's probably the degree I'm gonna go for as well. This channel is amazing xP
@johnhajdu4276 (1 year ago)
Thank you for the video, it was good to see an easy example of how it works. I was recently watching a video about the MMX instruction set of the first Pentium CPUs (around 1997), and it was mentioned that the main usage of that new feature was, for example, changing the brightness of a photo (probably a bitmap file), where a lot of mathematical manipulation is needed on a huge file and the same mathematical function repeats for every pixel. The idea behind MMX was that multiple registers were loaded with values, then the CPU executed one instruction, and some clock cycles later all the output registers were filled. It was called "single instruction, multiple data". I have the feeling now that the GPU's CUDA cores could do all the mathematical manipulation of a bitmap picture: we only have to load the picture into GPU memory, along with the mathematical manipulation pattern(s), and execute the transformation. It's probably not worth it to transform only one picture, since we lose time on all the preparation, but if we have many different pictures (for example a video), maybe it makes sense to use the power of the GPU.
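
That intuition maps directly onto a CUDA kernel. A hypothetical sketch of a brightness adjustment over an 8-bit grayscale image, one thread per pixel (the names and image layout are made up for illustration):

    // Add a brightness offset to every pixel of an 8-bit grayscale image.
    __global__ void brighten(unsigned char *pixels, int numPixels, int offset) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < numPixels) {
            int v = pixels[i] + offset;
            pixels[i] = (unsigned char)(v > 255 ? 255 : (v < 0 ? 0 : v));  // clamp to 0..255
        }
    }

    // Launch with enough blocks to cover the whole image (pixels_d is hypothetical
    // device memory already filled with the image):
    //   int threads = 256;
    //   int blocks  = (numPixels + threads - 1) / threads;
    //   brighten<<<blocks, threads>>>(pixels_d, numPixels, 40);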
@tormodhag6824 (1 year ago)
I don't know if I'm correct, but when you render video in Blender, for example, it can use the GPU, and you can do things like manipulating color. Don't know if it has any relevance, just my thoughts.
@LuLeBe (6 months ago)
Yeah, Photoshop does a few things on the GPU, and good video editing algorithms run on the GPU as well. It's exactly like you said. And SIMD instructions are also used quite a lot, but from what I've seen, they seem more of a middle ground: if the CPU is too slow, but the GPU isn't really worth it due to latency or complexity.
@simodefa12 (5 months ago)
I guess it might be similar to the SIMD instructions on Arm Cortex. Basically there's a coprocessor dedicated to executing instructions that operate on multiple registers at the same time.
@dominikschroder3784 (1 month ago)
Great explanation!
@iyadahmed3773 (1 year ago)
Thanks a ton, very clear explanation 🙏
@illosuth (3 months ago)
Pretty straightforward tutorial. What do you think would be the next step? Vector multiplication?
@murdomeiring2934 (1 year ago)
@LowLevelLearning Could be very cool to see a bit more complex & lengthy setup to show difference in time on GPU vs CPU for different use cases.
@bluustreak6578 (1 year ago)
Super nice starting video for someone like me who was too afraid to try it blind :D
@Dedi369 (2 months ago)
Super interesting! Thanks
@GeorgesChannel (1 year ago)
Very helpful. Thank you for sharing!
@WistrelChianti (1 year ago)
Thanks, that was a super clear example. Amused that you called it a register, guess you can't turn off thinking in assembly code :D
@zrodger2296 (2 years ago)
Easier than I thought! Would love to see you do this in OpenCL!
@LowLevelLearning (2 years ago)
Great suggestion!
@NoorquackerInd (2 years ago)
@@LowLevelLearning Yes, definitely give OpenCL content, there's not enough of it
@hstrinzel (1 year ago)
That is VERY impressive how relatively SIMPLE and CLEAR you showed that! Wow, thank you! Question: There is SOME sort of parallel or vector operation also possible on the modern CPUs, right? Could you show how THAT would be done in this example?
@miketony2069 (1 year ago)
That was an excellent beginner friendly overview. Almost a hello world type of intro to get your feet wet. Definitely looking forward to more videos from you.
@bean_mhm (1 year ago)
Super interesting, thanks a lot!
@Rottingflare (2 years ago)
Loved the video! Had to like and subscribe! Can't wait to see the rest of the project as well as what other projects you work on!
@d_shepperd (1 year ago)
Thanks. Nicely done.
@ankk98 (5 months ago)
This was insightful
@MrHaggyy (1 year ago)
This video was great, never thought it would be so simple. Do you mind digging deeper into this? Maybe some filters, coordinate transformations, or other basic math stuff?
@typeer (2 years ago)
Channel is just the sickest ty ty
@cykkm (1 year ago)
Cool intro, thanks! In the year 2021, though, I'd rather use the even simpler modern cudaMallocManaged() UVM call. One may get faster code by manually controlling memory transfers in multiple streams and synchronization; this is what I have seen in code written by an NVIDIA Sr. SWE, but I could never really fully grok it. For the rest of us, there's UVM: you just allocate memory accessible to both the CPU and the device, and it's synchronized and moved in the right direction at the driver level. It does allow writing stupidly inefficient code, but that's not too easy, really :) For a GPU starter, it simplifies memory tracking a lot.
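
For comparison, a sketch of what the same vector add might look like with managed (unified) memory; the kernel name and N are assumed from the video's example:

    int *a, *b, *c;
    cudaMallocManaged((void **)&a, N * sizeof(int));  // visible to both CPU and GPU
    cudaMallocManaged((void **)&b, N * sizeof(int));
    cudaMallocManaged((void **)&c, N * sizeof(int));

    for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2 * i; }  // fill on the CPU

    vectorAdd<<<1, N>>>(a, b, c);
    cudaDeviceSynchronize();            // wait before reading c on the CPU

    printf("c[3] = %d\n", c[3]);
    cudaFree(a); cudaFree(b); cudaFree(c);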
@olivalle (2 years ago)
Thank you for your crystal clear explanation.
@LowLevelLearning (2 years ago)
You are welcome!
@arbiter7234 (8 months ago)
thanks a lot, great tutorial
@lohphat (11 months ago)
Are there any guides explaining how the code segments are actually sent to the GPU and how the API and firmware handle operations? Just understanding the coding portion isn't enough until you understand the hardware architecture and low-level ops.
@nathanaelmccooeye3204 (1 year ago)
Thanks for the video! CC: When the narrator follows new information (or information not immediately obvious to a newcomer) with "right?", I feel really lost and a little stressed, thinking I can't even understand this basic information!!
@nefasto_ (1 year ago)
I like the fact that you write in C, which I do at school, so I understand what you are coding.
@gat0tsu (11 months ago)
Thanks a lot for the video.
@philtoa334 (1 year ago)
Very good thanks.
@gabrielgraf2521 (1 year ago)
Damn, was this interesting. So basically every time I have big for loops, or even nested for loops, my graphics card could calculate it way faster. Thanks man, this was interesting.
@jarsal_firahel (11 months ago)
Absolutely awesome
@KogDrum (2 years ago)
Thanks, can you recommend resources for learning this specific type of programming? or from where to get this kind of knowledge?
@EnderMega (2 years ago)
Man, the NVIDIA docs are OK, but this is so well made, very nice :D
@skylo706 (6 months ago)
Maybe you don't read this because the video is 2 years old now, but could you make a video about how graphics programming works on a computer? 2D and/or 3D. You are so good at explaining stuff, it would be really amazing imo.
@mutt8553 (11 months ago)
Great video, really interesting stuff. Looks like I need an NVIDIA GPU now.
@danielniedzwiecki638 (1 year ago)
Thanks, you are a legend, brother.
@gregoryfenn1462 (11 months ago)
Thanks for this! As this channel is about low-level programming, can we look into making our own GPU driver code (the GPU malloc and parallel function call interface)? Just calling CUDA APIs is really high-level programming with all the technical details abstracted away.
@PhiloMusix24 (4 months ago)
This is absolutely mental 😎
@UnrealOG137 (1 year ago)
Never expected to hear that ending song. It's a really good song. It's Run by Hectorino Martinez.
@ben_jammin242 (1 year ago)
How can you dynamically manage and display your available GPU memory based on load and display it as a bar graph? Such as when you're choosing LoD or texture and geometry complexity and want to estimate if it's going to throttle the gpu. Many thanks! Happy to be pointed to a resource if it's not something you've covered as yet :-)
@_RMSG_ (1 year ago)
I believe the Nsight debugging tools should give you everything you need for this
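
If you just want the raw numbers at runtime, the CUDA runtime can report free and total device memory; drawing the bar graph is up to whatever UI you use. A small sketch:

    size_t freeBytes = 0, totalBytes = 0;
    cudaMemGetInfo(&freeBytes, &totalBytes);  // free and total memory on the current device
    printf("GPU memory: %.1f MiB free of %.1f MiB\n",
           freeBytes / (1024.0 * 1024.0), totalBytes / (1024.0 * 1024.0));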
@yah3136 (2 years ago)
Nice presentation, but you should talk about OpenCL. Even if it's not well supported on NVIDIA cards, at least you can target multiple parallel devices (at the same time). And the core concepts of grids, blocks, and threads are quite the same (with different names, but the same cache segregation principle).
@jan-lukas (2 years ago)
Yeah, OpenCL is the way to go for using several GPUs or different types of GPUs (like NVIDIA and AMD).
@user-oh4wd9vk4s (10 months ago)
love this
@jonathanmoore5619 (2 years ago)
Super! Mark Duper!
@50Kvful (8 months ago)
Inspiring
@unrealpaulo7857 (1 year ago)
thanks !
@Yupppi (7 months ago)
What does the sizeof(array)/sizeof(int) do? I've seen it in a C++ demonstration of an array being referenced as a whole pointer instead of slot by slot.
@muhammedthahirm423 (9 months ago)
Bro actually showed both results, one that came in ~1 nanosecond and one in 0.3 nanoseconds, and thought we wouldn't notice. JK, your explanation is amazing.
@ronensuperexplainer (1 year ago)
After writing 400 LOC for initializing OpenCL and finally giving up, this seems so easy!
@ben_jammin242 (1 year ago)
New to your channel. Liked and subbed! Edit: what is "sizeof(a)/sizeof(int)" computing? I thought size of c would be N if a and b are both N
@carljacobs1287 (1 year ago)
sizeof(a) will return the size of the array in bytes. sizeof(int) will return the size of an int in bytes (which might be 2, 4 or 8 depending on whether you're on 16-bit, 32-bit or 64-bit hardware). The division then gives you the number of elements in the array. A useful helper is: #define ArrayLength(a) (sizeof(a) / sizeof((a)[0])). This will also work if you have an array of structures.
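
A quick usage sketch of that idea (hypothetical names; note it only works on real arrays, not on pointers):

    #define ArrayLength(a) (sizeof(a) / sizeof((a)[0]))

    struct Point { float x, y; };
    int values[256];
    struct Point pts[32];

    size_t n1 = ArrayLength(values);  /* 256 */
    size_t n2 = ArrayLength(pts);     /* 32, works for arrays of structs too */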
@godknowsgatsi9246 (11 months ago)
An amazing tutorial, but one question: after running, does it free the allocated memory itself, or do we free it afterwards?
@tansakdevilhunter9462 (1 year ago)
Is there any difference between compute shaders and this CUDA programming?
@kaiperdaens7670 (5 months ago)
The vector thing in the beginning could be done multicore too, I think, so with 3 vectors you can just do each one on a different core at the same time.
@AlessandroContrino (1 year ago)
Thanks
@ProjectPhysX (2 years ago)
Very helpful tutorial! I prefer OpenCL though :)
@cedricvillani8502 (2 years ago)
A Khronos Group junkie, huh? Well, at least OpenCL has an SDK and not just an API. Most people don't program bare metal; scripters, if they get near this, usually use Python.
@thoufeekbaber8597 (1 year ago)
I like the intro: "we are mining bitcoin"
@petrosros (8 months ago)
What is the variance between the two in terms of accuracy, without stressing the system, and time not being a factor? Are results different?
@kreinraan6558 (1 year ago)
Nice video! Sorry if I missed it, but is there a reason why you did not use std::vector as arrays?
@GregMoress (10 months ago)
Probably because the hardware doesn't use it.
@AvgDan (1 year ago)
Do you think you could do a video of using GPU to solve for subset sum?
@phrygianphreak4428 (4 months ago)
Me: "but how do I stop doing all this low level memory management like it's 1967?" GPU: "That's the neat part. You don't"
@dj-maxus (1 year ago)
After discovering your channel and this video, I became interested in what GPU programming in Rust looks like. Is there any chance such a video appears?
@ragtop63 (4 months ago)
I wonder if the CUDA framework is available for use in C#? I don't know C++ and I really don't want to spend years learning how to "properly" create C++ apps.
@jesseparrish1993 (1 year ago)
I was busy trying to build a GPU on a breadboard like a weirdo when I found this. Much better.
@altayakkus4611 (1 year ago)
Building a GPU on a breadboard is really cool, why should this video be better? It's just a different topic. Or were you trying to build an ASIC on a breadboard, and realized now that you can just use CUDA? ;D
@bimDe2024 (1 year ago)
Are you going to make a series on CUDA?
@Adrian.Rengle (1 year ago)
Hi! A classical C++ question, zero knowledge of GPU programming! After cudaMalloc, shouldn't there be a sort of cudaFree? What happens with the GPU memory? Thank you for the comments and for the video!
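
Device memory allocated with cudaMalloc is not scoped like a C++ object, so the usual pattern is to free it explicitly once the results have been copied back; the driver also releases everything when the process exits. A sketch, assuming the pointer names from the video's example:

    cudaMemcpy(c, c_d, sizeof(c), cudaMemcpyDeviceToHost);  // copy results back first

    cudaFree(a_d);  // release device memory allocated with cudaMalloc
    cudaFree(b_d);
    cudaFree(c_d);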
@SauvikRoy (1 year ago)
+1 for using light theme for demonstration! Nice tutorial.
@donutwarior200 (6 months ago)
goated opening
@Rejnols (2 years ago)
Well well well, i see you changed the thumbnail 😎
@holthuizenoemoet591 (1 year ago)
Could you also do the same video for ROCm?
@anteconfig5391 (1 year ago)
Is there any OpenCL documentation as comprehensive as this? I need to use my crappy integrated/on-board GPU for the time being.
@luminus3786 (2 years ago)
How do I see the view that he sees when checking the results of the first lines of code?
@kyuthefox (1 year ago)
I like CUDA, but considering how many CUDA tutorials there are, I would like an OpenCL tutorial; there are only really advanced examples out there, and you have to start with the basics, which I couldn't find on YouTube.
@jooseptavits9456 (11 months ago)
Under what conditions would you use more than one grid?
@emty5526 (1 year ago)
The documentation says a call to a __global__ function is asynchronous. What happens if the vectorAdd function already returned and we are trying to cudaMemcpy before the GPU has finished the operation?
@Keldor314 (1 year ago)
IIRC, it implicitly synchronizes at the cudaMemcpy. I believe you have to use the CUDA stream API or the task graph API if you want asynchronous memory and compute.
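
That matches the documented behaviour: the kernel launch returns immediately, but a plain cudaMemcpy on the default stream waits for earlier work in that stream and blocks the host until the copy finishes. A sketch making the ordering explicit, with names assumed from the video's example:

    vectorAdd<<<1, N>>>(a_d, b_d, c_d);        // returns immediately (asynchronous)

    // Optional: surface launch/runtime errors before continuing.
    cudaError_t err = cudaDeviceSynchronize();
    if (err != cudaSuccess)
        fprintf(stderr, "kernel failed: %s\n", cudaGetErrorString(err));

    // This copy is on the default stream, so it waits for the kernel anyway,
    // and it blocks the host until the data has arrived.
    cudaMemcpy(c, c_d, sizeof(c), cudaMemcpyDeviceToHost);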
@ljuberzy (1 year ago)
Nice! But how do I do that in a macOS/Linux environment? NVIDIA doesn't have that package for macOS.
@testuser6429 (6 months ago)
I was waiting for you to add the timing APIs and benchmark the code for CPU and GPU runtimes 😢😢 but great tutorial anyway.
@198-rx (11 months ago)
Powerful
@peter.b (4 months ago)
When you write "sizeof(int)", what is this referencing? I'm not at all familiar with this language, but I am assuming that references something that's been determined previously?
@1kvolt1978 (14 days ago)
The compiler changes it to the actual value when it compiles. The value in this case is the size in bytes of the int type, which is one of the basic built-in types in C.
@theorogalski3799 (1 month ago)
Awesome
@Freakinkat (1 year ago)
The clicky keyboard sounds were oddly satisfying to me. It's like a little white noise. It's so peaceful.
@maximinmaster7511 (2 years ago)
Hello, thank you for this video. Question: what is the limit of threads in block 1?
@zilog1 (1 year ago)
Just got a Quadro P6000 for a steal on eBay. 25GB of VRAM, let's goooooooo
@aaronstone628 (11 months ago)
Could you do the same with Stream Processors API with Radeon?