
CPU vs GPU | Simply Explained 

TechPrep
15K subscribers · 138K views
Published: 22 Oct 2024

Comments: 125
@AungBaw · 1 month ago
Simple yet short and to the point. Instant sub, thanks mate.
@TechPrepYT · 1 month ago
That was the goal, glad it was helpful!
@rajiiv00 · 1 month ago
Can you make a similar video for GPU vs Integrated GPU? Is there any difference in their architectures?
@RationalistRebel · 1 month ago
The main differences are the number of cores, processor speed, available power, and memory. Integrated GPUs are part of another system component, usually the CPU nowadays. That limits the number of cores the GPU part can have, as well as their speed and power. They also have to use part of the system memory for their own tasks. Nonetheless, they're more energy efficient and more than powerful enough for most common tasks. Discrete GPUs have their own dedicated processor and high-speed memory. Higher-end GPUs typically require more power, sometimes even more than the rest of the system.
@mohd5rose · 1 month ago
Don't forget compute units.
@HorizonOfHope · 1 month ago
@RationalistRebel This is well explained. Also, as you scale up any processor, there are diminishing efficiency gains. GPUs are often scaled up so much that they produce huge amounts of heat. Often you need more cooling capacity for the GPU than for the rest of the system put together.
@RationalistRebel · 1 month ago
@HorizonOfHope Yep, power consumption == heat production.
@mauriciofreitas3384 · 1 month ago
An important thing to pay attention to when scaling up to a dedicated GPU is your power supply. GPUs consume more power, and that power has to come from somewhere. Never neglect your power supply when upgrading.
@kimdunphy2009 · 1 month ago
Finally an explanation that I completely understand and that isn't trying to sell me anything!
@anotherfpsplayer · 1 month ago
Easiest explanation I've ever heard... there can't be a simpler explanation than this. Brilliant stuff.
@TechPrepYT · 26 days ago
Thank you! That's the goal!
@samoerai6807 · 1 month ago
Brilliant video! I started my IT forensics studies last week and will share this with the other students in my class!
@TechPrepYT · 1 month ago
Glad it was helpful!
@Zeqerak · 1 month ago
Beautifully done. The best explanation I've come across. Understood the core concepts you explained. Again, beautifully executed.
@TechPrepYT · 1 month ago
Thanks for the kind words!
@AyoHues · 1 month ago
A good follow-up video would be a similar short explainer on SoCs. And maybe one on the differences between other key components like media engines, NPUs, onboard vs. separate RAM, etc. Thanks. 🙏🏽
@TechPrepYT · 1 month ago
Great idea, I'll put it on the list!
@markonar140 · 21 days ago
Thanks for this Great Explanation!!! 👍😁
@TechPrepYT · 16 days ago
Glad you found it helpful!
@technicallyme · 1 month ago
GPUs also have a higher tolerance for memory latency than CPUs. Modern CPUs have some parallelism built in: features such as branch prediction and out-of-order execution are common in almost all CPUs.
@simonpires6184 · 28 days ago
Straight to the point and explained perfectly 👍🏽
@TechPrepYT · 26 days ago
That was the goal, thank you!
@gibiks7036 · 1 month ago
Thank you... Simple and short....
@TechPrepYT · 1 month ago
Thank you!
@atleast2minutes916 · 1 month ago
Thank you so much! Simple, brief, and easy to understand!! Awesome.
@TechPrepYT · 26 days ago
Glad you enjoyed!
@olliehopnoodle4628 · 20 days ago
Excellent and well put together. Thank you.
@TechPrepYT · 16 days ago
Glad you liked it!
@kartikpodugu · 1 month ago
With the dawn of AI PCs, which always have a CPU, GPU, and NPU, can you make a similar video on the differences between a GPU and an NPU?
@TechPrepYT · 1 month ago
It's on the list!
@lodgechant · 1 month ago
Very clear and helpful - thanks!
@TechPrepYT · 26 days ago
Thank you!!
@dinhomhm · 5 months ago
Very clear, thank you. I subscribed to your channel to see more videos like this.
@TechPrepYT · 4 months ago
Thank you!!
@JimStanfield-zo2pz · 3 months ago
Very powerful and concise explanation. Keep up the good work.
@TechPrepYT · 1 month ago
Thank you!!
@Soupie62 · 1 month ago
As an example... consider Pac-Man. A grid of pixels is a sprite. Pac-Man has 2 basic sprite images: mouth open and mouth closed. You need 4 copies of mouth open, for up/down/left/right, so 5 sprites total. Movement is created by deleting the old sprite, then copying ONE of these from memory to some screen location. A source, a target, and a simple data copy gives you animation. That's what the GPU does.
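To make that copy-based animation concrete, here is a minimal Python sketch, assuming a framebuffer modelled as a 2D list of palette indices; the names and sprite sizes are illustrative, not from any real engine:

```python
# Minimal sketch of copy-based sprite animation (illustrative only).

WIDTH, HEIGHT = 32, 24
framebuffer = [[0] * WIDTH for _ in range(HEIGHT)]  # 0 = background

# Two tiny "Pac-Man" sprites: mouth closed and mouth open (facing right).
MOUTH_CLOSED = [[1, 1, 1],
                [1, 1, 1],
                [1, 1, 1]]
MOUTH_OPEN   = [[1, 1, 1],
                [1, 1, 0],   # the gap is the open mouth
                [1, 1, 1]]

def blit(sprite, x, y):
    """Copy the sprite into the framebuffer at (x, y): source, target, data copy."""
    for row, line in enumerate(sprite):
        for col, pixel in enumerate(line):
            framebuffer[y + row][x + col] = pixel

def erase(sprite, x, y):
    """Delete the old sprite by overwriting its cells with the background."""
    for row in range(len(sprite)):
        for col in range(len(sprite[0])):
            framebuffer[y + row][x + col] = 0

# One step of movement per frame: erase at the old spot, copy at the new one.
x = 0
for frame in range(5):
    sprite = MOUTH_OPEN if frame % 2 else MOUTH_CLOSED
    erase(sprite, x, 10)
    x += 1
    blit(sprite, x, 10)
```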
@davidwuhrer6704 · 24 days ago
Not entirely correct. Some architectures can handle sprites, some don't. Some handle matrix multiplication and/or rotozooming, some don't.

What you described is what an IBM PC with MS-DOS does: no sprite handling, no rotozoom, no framebuffer. So you need four copies of the open-mouth sprite (though you can generate them at run time), and you need to delete and redraw the sprite with every frame. Other systems, and I mean practically every other system, make that easier: you only need one open-mouth sprite, you can rotate it as needed, and if the sprite is hardware-handled you don't need to delete it, you just change its position. This is simple if the image is redrawn with each vertical scan interrupt. But that was before GPUs.

Modern GPUs use framebuffers. You have the choice of blanking the buffer to a chosen colour and then redrawing the scene with each iteration, or just drawing over it. The latter may be more efficient: 3D animation often draws environments, and 2D often uses sprite-based graphics, two cases where everything gets painted over anyway. And yes, for sprites that means copying the sprite pattern to the buffer, in a way that makes rotozooming trivial, so you don't need copies for different sizes and orientations either. The mouse pointer is usually handled as a hardware sprite, which means it gets drawn each frame and is not part of the framebuffer.

What you also neglected to mention are colour palettes. There are, in essence, four ways of handling colour: true or indexed, with or without alpha blending. True colour just means using the entire colour space, typically RGB, which is usually 24 bits, that is, 8 bits per colour channel, or 32 bits if you use alpha blending too. This has implications for how a sprite is stored. If you use a colour palette, you can either use a different sprite for each colour or store the colour information with the sprite. Usually you would use the former, because it makes combining sprites and reusing them with different colours easier. With the latter, you can get animation effects simply by rotating the colour palette. If you use true colour, you can use a different sprite for each colour channel, but you typically wouldn't, especially if you use alpha blending.
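The palette-rotation trick mentioned above can be sketched in a few lines of Python; this is a toy with an assumed 8-entry RGB palette, purely for illustration:

```python
# Toy sketch of palette-rotation animation with indexed colour.
# The sprite stores palette indices; colours are resolved at draw time.

sprite = [[0, 1, 2],
          [3, 4, 5],
          [6, 7, 0]]

palette = [(0, 0, 0), (255, 0, 0), (255, 128, 0), (255, 255, 0),
           (0, 255, 0), (0, 0, 255), (128, 0, 255), (255, 255, 255)]

def render(sprite, palette):
    """Look each index up in the palette -- the sprite data never changes."""
    return [[palette[i] for i in row] for row in sprite]

# Rotating the palette by one entry per frame recolours the sprite,
# producing an animation effect without touching the sprite at all.
for frame in range(3):
    frame_pixels = render(sprite, palette)
    palette = palette[1:] + palette[:1]
```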
@waynestewart1919 · 1 month ago
Very good. I am subscribing. Thank you.
@TechPrepYT · 26 days ago
Thanks for the sub!
@waynestewart1919 · 26 days ago
You are very welcome. You more than earned it. That may be the best explanation of CPUs and GPUs on YouTube. Please keep it up.
@StopWhining491 · 1 month ago
Excellent explanation. Thanks!
@TechPrepYT · 1 month ago
Thank you!
@VashdyTV · 5 months ago
Beautifully explained. Thank you
@chandru_ · 22 days ago
nice explanation
@TechPrepYT · 16 days ago
Thanks!!
@gfmarshall · 29 days ago
Thank you so much 🤯❤️
@TechPrepYT · 26 days ago
You're welcome!
@natcuber · 1 month ago
How big is the latency difference between a CPU and a GPU, since it's stated that CPUs focus on latency over throughput?
@mohamadalkavmi4932 · 1 month ago
very simple and nice
@TechPrepYT · 26 days ago
Thank you 😊
@PhillyHank · 1 month ago
Excellent!
@TechPrepYT · 1 month ago
Thank you!
@maquestiauartandmore · 26 days ago
Great! Thank you, but if you could slow down a little in your explanation... 😅
@TechPrepYT · 16 days ago
Yep will try!!
@buckyzona · 4 months ago
Great!
@abhay626 · 3 months ago
helpful. thank you!
@TechPrepYT · 2 months ago
Thanks!
@samychihi6317 · 1 month ago
Which means Intel's fall is not that bad: if the GPU can't fully replace the CPU, then Intel will remain in the computing and personal PC market. Thanks for the explanation.
@maxmuster7003 · 1 month ago
The Intel Core 2 architecture can execute up to 4 integer instructions in parallel on each single core.
@ruan13o · 28 days ago
From my experience, unless I am running a game, my GPU is typically barely utilised while the CPU might often be highly utilised. So when we are not running games (or similarly graphics-intensive applications), why do computers not send some of the (non-graphics) processing to the GPU to help out the CPU? Or does it already do this and I just don't realise?
@christophandre · 25 days ago
That's already the case in some programs. The main reason you don't see it that often is that a program must be designed to run (sub)tasks on the GPU. This can make a program a lot more complex really fast, since most programmers don't decide on their own which part of the program is calculated on which part of the computer; that's done by underlying frameworks (in most cases for good reasons).
@Trickey2413 · 23 days ago
As the video made a point of highlighting, they excel at different things. Forcing a GPU to do a task it is not optimised for would be less efficient, and vice versa.
@siriusbizniss · 1 month ago
Holy Cow I’m ready to be a computer engineer. 👍🏾👌🏾🤓
@mrpappa4105 · 1 month ago
If possible, explain why a GPU cannot replace a CPU. Great vid, but my old vic64 brain (yeah, I'm old) doesn't get this. Anyway, cheers from a new subscriber.
@jozsiolah1435 · 1 month ago
When you stress the system by forcing a DOS game to play the intro at very high CPU cycles, you also force the system to turn on secret accelerating features that remain on. One is the sound acceleration of the sound device; it offloads the CPU from decompressing audio. The other is the floating-point unit, which is off by default; when it is on, some games become harder. Intel has an autopilot for car games, which is also off by default. With the GPU, the secret seems to be the diamond videos I am experimenting with; many games show diamonds as a reward, and they're hard to get. A diamond stresses the VGA card, as it's so complex for the video chip to draw. Also, the tuning will consume the battery faster.
@marcopo06 · 24 days ago
👍
@grasz · 27 days ago
CISC vs RISC plz
@TechPrepYT · 26 days ago
It's on the list!
@grasz · 26 days ago
@TechPrepYT yay~!!!
@illicitryan · 1 month ago
So let me ask a stupid question: why can't they combine the two and get the best of both worlds? Will doing so negate some functions, rendering it useless? Or... lol, just curious 🤔 😊
@riteshdobhal6381 · 1 month ago
A CPU has few cores, which is why parallel processing is hard on it, but each individual core is extremely powerful. A GPU has thousands of cores, making it good at parallel processing, but each individual core is comparatively not very powerful. You could make a processor with as many cores as a GPU, each as powerful as a CPU core, but that would cost a huge amount of money.
@trevoro.9731 · 1 month ago
The question is stupid. You can either get high performance on non-parallel tasks with low latency, or high performance on parallel tasks with high latency. If you extend a normal core with GPU features, it will instantly become large, power-hungry, and slow for normal tasks; that is why modern desktop CPUs have built-in separate GPU cores. Also, the problem is that you would need a lot of memory channels to make use of that: memory for GPUs is very slow but has a lot of internal channels.
@zoeynewark9774 · 1 month ago
Can you combine a school bus with a Formula One car? There, you have your answer.
@boltez6507 · 1 month ago
APUs do that.
@keithjustinevirgenes7387 · 1 month ago
What do you think will happen if the heat of both the CPU and a separate GPU is combined on just one cooling fan and heatsink? Plus greater voltage needs resulting in more heat?
@mostlydaniel · 1 month ago
2:34 lol, *core* differences
@lanceorventech6129 · 1 month ago
What about the threads?
@mattslams-windows7918 · 1 month ago
A thread is simply a logical sequence of instructions that gets mapped to a core by a scheduler (modern systems usually use both hardware and software scheduling these days), whether that's on a GPU or a CPU, for execution. The primary difference between GPU and CPU threads is that GPUs usually execute the same "copy" of each instruction across all threads being executed on the device (single instruction, multiple data, aka SIMD), whereas CPUs can very easily have different threads execute all kinds of different instructions simultaneously (multiple instruction, multiple data, aka MIMD). In addition, more than one thread can be mapped to each core, whether CPU or GPU, and when that happens simultaneous multithreading (SMT) hardware is used to execute all the threads at once.
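That SIMD/MIMD distinction can be sketched in Python, with numpy's vectorised operations standing in for SIMD lanes and a thread pool for MIMD; this is a loose analogy, not how either chip is actually programmed:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

data = np.arange(1_000_000, dtype=np.float32)

# SIMD-style (GPU-like): one instruction stream applied to every element.
simd_result = data * 2.0 + 1.0  # every "lane" runs the same multiply-add

# MIMD-style (CPU-like): independent threads each running different code.
def total(x):   return float(np.sum(x))
def biggest(x): return int(np.argmax(x))
def head(x):    return np.sort(x)[:5]

with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(total, data),
               pool.submit(biggest, data),
               pool.submit(head, data)]
    mimd_results = [f.result() for f in futures]
```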
@Norman-z3s · 1 month ago
What is it about AI that requires intense parallel computation?
@undercover4874 · 1 month ago
In neural networks it's all about matrix multiplications, and we also want to pass multiple inputs through the network (each pass performing a number of matrix multiplications). With a GPU we can run the passes for different inputs in parallel instead of doing one input at a time, which speeds up the computations.
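A small numpy sketch of that batching idea, with toy layer sizes chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((784, 128)).astype(np.float32)       # toy layer: 784 -> 128
inputs = rng.standard_normal((64, 784)).astype(np.float32)   # 64 separate inputs

# One input at a time: 64 separate matrix-vector products.
one_at_a_time = np.stack([x @ W for x in inputs])

# Batched: a single matrix-matrix product covering all 64 inputs at once.
# This is exactly the kind of work a GPU spreads across thousands of lanes.
batched = inputs @ W

assert np.allclose(one_at_a_time, batched, atol=1e-4)
```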
@avalagum7957 · 1 month ago
Still not clear to me: what component is the GPU missing, such that it cannot replace a CPU? Ah, just checked with Perplexity AI: the instruction set that a GPU accepts is too limited to make a GPU a replacement for a CPU.
@nakkabadz6443 · 1 month ago
The GPU is like a PhD holder while the CPU is a jack of all trades. Look at the names: GPU is graphics processing unit, CPU is central processing unit. A GPU can outperform a CPU's computing power on a singular task like graphics computation, while the CPU, though it can't compute as fast as the GPU, can handle different kinds of tasks simultaneously.
@trevoro.9731 · 1 month ago
Most things in this video are BS. A GPU can replace a CPU, but it will work many times slower on most tasks while taking more power; only on highly parallel tasks is it efficient and fast. Also, it is missing a lot of features for controlling hardware, like proper interrupt management, etc.
@avalagum7957 · 1 month ago
@trevoro.9731 Why is a GPU slower than a CPU for most tasks?
@trevoro.9731 · 1 month ago
@avalagum7957 It is optimized to consume a minimal amount of energy and perform multiple calculations per cycle, but each calculation takes much longer to finish, up to 100 times slower. All those parallel operations go to waste if you don't need to perform the exact same operation on multiple entries. Also, its memory is way slower than that of the CPU (albeit CPU memory is also not very fast; it merely got ~30% faster over the last 20 years), but it contains a lot of internal channels, so it is efficient at processing large amounts of data that do not need high performance per dataset.
@jlelelr · 1 month ago
Can a CPU have something like CUDA?
@mattslams-windows7918 · 1 month ago
Depends on your definition of "have": if Nvidia makes a GPU driver that supports the CPU in question, then technically one can combine an Nvidia GPU and that CPU in the same computer to run CUDA stuff. But executing GPU CUDA code on the CPU itself is something Nvidia probably doesn't want people to do, since Nvidia likely wants to keep making money on GPU sales, so running CUDA code on a CPU will likely not be a thing anytime soon.
@sauravgupta5289 · 1 month ago
Since each core is similar to a CPU, can we say that it has multiple CPU units?
@aorusgaming5913 · 1 month ago
Does this mean that a GPU is just a better version of a CPU, or say a faster version of a CPU which can do many calculations at a time? If so, why don't we use two GPUs instead of a CPU and a GPU?
@undercover4874 · 1 month ago
A GPU only performs better if the task being performed can be parallelized, but the majority of tasks can't be, or don't need parallel computation, so they would be slower on a GPU. The main power a GPU gives us is parallelization; if a task can't exploit it, the overhead will make it even slower than on a CPU.
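A toy Python contrast between work that parallelizes and work that can't, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal(1_000_000).astype(np.float32)

# Parallelizable: each output depends only on its own input, so in
# principle all million elements could be computed at the same time.
squares = data ** 2

# Inherently serial: every step needs the previous step's result,
# so extra cores cannot speed up this loop.
acc = 0.0
for v in data:
    acc = 0.5 * acc + float(v)
```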
@pear-zq1uj · 1 month ago
No, a GPU is like a factory with 100 workers; a CPU is like a medical practice with 4 doctors. Neither can do the other's job.
@meroslave · 1 month ago
A CPU can never be fully replaced by a GPU, so what's happening now between Intel and Nvidia!?
@trevoro.9731 · 1 month ago
You are wrong about many things. Modern GPUs aren't actually good at performing operations on each single pixel; they are far behind the CPU on that, but can work with large groups of pixels more efficiently. No, modern high-end GPUs have 32-64 cores (top ones like the 4090 have 128 cores); the marketing core counts are a lie. No, the thread counts are a lie too; the actual number is hundreds of times lower. Those fake threads are parallel execution units; they are not threads, they are the same code working over a very large array. Each single core can run 1 or 2 actual threads, so the number of threads for high-end GPUs is usually limited to 128 or so. Only because of repetitive operations are GPUs faster at some tasks; in general they are much slower than even a crappy processor.
@matteoposi9583 · 1 month ago
Am I the only one who sees dots in the GPU drawing?
@tysonblake515 · 22 days ago
No, you're not! It's an optical illusion.
@Theawesomeking4444 · 1 month ago
No, you didn't explain, nor do you understand, anything; all you did was read a Wikipedia page. GPUs don't have "more cores", that's a marketing lie; what they have are very wide SIMD lanes. CPUs have them too, but theirs are smaller in exchange for bigger caches, higher frequencies, and less heat.
@mattslams-windows7918 · 1 month ago
Honestly, despite oversimplifying a bit (likely due to time constraints), this video isn't entirely incorrect. There are in fact more cores on average in a GPU; it's just that the architecture of each GPU core is completely different from that of a CPU core. Companies like Nvidia and AMD aren't lying when they talk about their CUDA core/stream processor counts; it's just that each core in a GPU serves a somewhat different purpose than each core in a CPU. Also, what you're saying about cache isn't really correct: AMD RDNA 2+ cards have a pretty big pool of last-level Infinity Cache that contributes a significant amount to overall GPU performance.
@Theawesomeking4444 · 1 month ago
@mattslams-windows7918 The problem is, as a graphics programmer, if I were learning this again, this video would tell me nothing. It's literally the presentation a high school student would do for homework: "CPUs do serial, GPUs do parallel". The funny thing is that the main differences are literally shown in the first image he showed, which shows the number of ALUs (the SIMD lanes; they also have FPUs) in each, yet he didn't explain that, because he has no clue what those images mean.

Now, GPUs do have slightly more cores than their CPU counterparts, but it's usually 1.5x-2x more, not thousands of cores. If you want the correct terminology: a CPU core is equivalent to a streaming multiprocessor in Nvidia, a compute unit in AMD (funny enough, AMD also refers to them as cores in its integrated graphics specifications), and a core in Apple; a CPU thread is a warp in Nvidia and a wavefront in AMD; a CPU SIMD lane is a CUDA core in Nvidia and a stream processor in AMD.

Now for the cache thing you mentioned: you are probably using a gaming PC or console for the comparison. Those will usually have 4-8 core CPUs with 16-32 core GPUs, and in games single-core performance matters more (usually because most gamedevs don't know how to multithread, haha). For a more like-for-like comparison, take the Ryzen 9 5900X (12 cores) and the RX 6500 (16 cores), which have roughly similar power consumption: L3 cache is 64MB on the CPU and 16MB on the GPU, L2 cache is 6MB on the CPU and 1MB on the GPU, and L1 cache is 768KB on the CPU and 128KB on the GPU. Now, if you get a GPU with a higher core count, you will notice that L3 cache increases a lot but L1 cache stays the same. This is because L3 cache is a shared memory pool for all of the cores within the GPU or CPU, while L2 and L1 caches are local to the core.

Anyway, that was a long reply; hopefully it answered your questions xD
@Theawesomeking4444 · 1 month ago
@mattslams-windows7918 lol, my reply was removed
@JosGeerink · 26 days ago
@Theawesomeking4444 It wasn't?
@Theawesomeking4444 · 26 days ago
@JosGeerink Nah, I had another reply where I explained the technical details, but you can't state facts with proof here, unfortunately.
@zoemayne · 17 days ago
I'm just worried they set him up for failure. Those investors gutted the company and sold the most valuable asset, the land they owned. He has my support; I'll make sure to either stop by there with a group of friends or start getting some healthy takeout from them. Those bankruptcy investors should be stopped. Just look how they plundered Toys R Us.
@JuneJulia · 24 days ago
Still can't understand why the GPU can do what it does. Bad video.
@Trickey2413 · 23 days ago
You have a low IQ, so you struggle with basic information; it's not your fault. Listen again at 0.5x speed and try to comprehend the essence of what he is saying. Take notes as he lists what each of them does, and then highlight the differences between the two. Make sure you make an effort to understand the words he is using; ask yourself "what does he mean when he says this" and try to formulate it in your own words.
1 month ago
I think pretty much everyone knows the difference between a GPU and a CPU; the most useful information here would be *why* the GPU cannot be used as a CPU.
@goodlifesavior · 19 days ago
Thanks for the foolization; we in Russia don't have enough of our own Russian foolization and so need to be foolized by American trainers.
@THeXDesK · 1 month ago
.•*
@johnvcougar · 1 month ago
RAM actually stands for “Read And write Memory” … 😉
@pgowans · 1 month ago
It doesn’t - it’s random access memory
@sauceman2924 · 1 month ago
stupid 😂
@Trickey2413 · 23 days ago
Imagine trying to correct someone whilst having the IQ of a carrot.
@Mrmask68 · 4 months ago
nice ⛑⛑ helpful
@TechPrepYT · 3 months ago
Thanks!
@mrgran799 · 1 month ago
In the future maybe we will have only one thing... a CGPU.
@nel_tu_ · 1 month ago
central graphics processing unit?
@thebtm · 1 month ago
CPU/GPU combo units exist with ARM CPUs.
@a-youtube-user · 1 month ago
@thebtm Also with Intel & AMD's APUs.