
Intel and AMD going BEYOND Moore's Law 

Coreteks
135K subscribers
281K views

Support me on Patreon: / coreteks
Buy a mug or a t-shirt: teespring.com/stores/coreteks
Follow me on Twitter: / coreteks
Sources and Credits:
Intel keynote: www.intel.com/pressroom/archi...
Geek.com on 10Ghz: www.geek.com/chips/intel-pred...
Anandtech on 10GHz: www.anandtech.com/show/680/6
Soft Machines explanation of how VISC works: • Video
- Videos from the AMD official YouTube channel were used for illustrative purposes; all copyrights belong to the respective owners, used here under Fair Use.
- Videos from the Intel official YouTube channel were used for illustrative purposes; all copyrights belong to the respective owners, used here under Fair Use.
- Videos from the Samsung official YouTube channel were used for illustrative purposes; all copyrights belong to the respective owners, used here under Fair Use.
- A few seconds from several other sources on youtube are used with a transformative nature, for educational purposes. If you haven't been credited please CONTACT ME directly and I will credit your work. Thanks!!
#Intel #AMD #Threadlets

Science

Published: 7 Apr 2019

Comments: 1.2K
@alexanderlindsey7134 5 years ago
There are physical limitations to transistors with current tech. Theoretically limited at 5nm gate width with FinFET technology. You get gate leakage due to band-to-band tunneling. Other effects too, but I'm not about to explain them all. Point is, we need a different tech past 5nm.
@thomastmc 5 years ago
Enter the quantum computer, sidelining silicon based processing to essentially a thin client model.
@ehenningsen 5 years ago
Josephson junction issues
@BMac420 5 years ago
Soundtracked so many issues with quantum computing
@thomastmc 5 years ago
​@@BMac420 What's your point? If you haven't been watching, everything has issues. If you're saying that QC isn't practical, or is too far off, then you're just not paying attention.
@BMac420 5 years ago
Soundtracked it's not practical for the public at all, and I can't see how it would be, barring innovation by AI after the AI singularity. Quantum computers will not become mainstream.
@ProjectPhysX 5 years ago
You got one thing wrong about parallelism efficiency: Amdahl's law is only half the truth. There is also Gustafson's law, stating that as computing problems get larger, their parallelizable part increases. For algorithms like ray tracing or fluid simulations this means that, given the problem is big enough (which it is), it converges to being 100% parallelizable, scaling perfectly with the number of cores / stream processors.
@j4eg3r47 5 years ago
U said something and I didn't understand but u sir seem knowledgeable.
@j4eg3r47 5 years ago
@@NoobNoobNews oh thanks man !
@NomoregoodnamesD8 5 years ago
The problems that are considered "NP" (non-deterministic polynomial time) collapse down to polynomial time (orders of magnitude faster) given infinite cores or perfect decision-making.
@gamercatsz5441 5 years ago
@@NoobNoobNews U said something and I didn't understand but u sir seem knowledgeable.
@MrThatguy333 5 years ago
@@NomoregoodnamesD8 U said something and I didn't understand but u sir seem knowledgeable.
@DarioVolaric 5 years ago
You should start another channel where you read hours long licence agreements with that tone of voice.
@SteamCheese1 5 years ago
Orgasmic Anti-Insomnia ASMR.
@evcrown6958 4 years ago
He kind of reminded me of Agent Smith lol
@gs-nq6mw 4 years ago
He uses voice-tuning software
@chbrules 5 years ago
I compiled the linux 5.0 kernel on my Threadripper 1900X. Took too long. Need 128 cores already.
@danielmdax 5 years ago
2990WX or 2950X for you, man... if you use NUMA, the former; if not, the latter.
@coows 3 years ago
Tip: make your own kernel config and use ccache. Helps a lot. On my Ryzen 5 2600 with ccache and make -j7 (7 threads), my kernel compiles in no more than a minute, usually a lot less.
@drbali 3 years ago
Then you are doing it wrong.
@squirrelgolem 5 years ago
My god. This changes everything for Dwarf Fortress.
@TruthNerds 5 years ago
Actually doesn't. Your mercenaries will still die of thirst on the top of some tree after hunting for a buzzard, and your wannabe baroness will still go ballistic if you don't make her a quartzite statue of a goldfish before the end of the season.
@mantisnomo5984 5 years ago
Data dependency tells me that this claim is probably too good to be true.
@TheDrivenMind 5 years ago
That's precisely my thought. This seems short-sighted.
@cookieninja2154 4 years ago
Amdahl's law states that some parts of a program cannot be parallelized, and therefore any speedup from multithreading is capped by how much of the computation can be done in parallel.
1. The claim that ray tracing can only benefit from more cores up to a point is only true if we don't increase the number of "rays" we simulate. Say we want to simulate a trillion rays: the cost of distributing a million rays each to a million cores is trivial compared to the workload each core has to execute, so each core gives us value.
2. The software that "auto multithreads" can only generate a significant speedup if the program under-utilizes the capabilities of the hardware. That is to say, the program has tasks that can be done in parallel and the hardware has the spare resources to do so. Everything else will not derive any benefit whatsoever.
Somehow Coreteks manages to undersell the value of more cores and oversell the value of auto-parallelization simultaneously. xD
@viktorvaldemar 5 years ago
You say that a sequential series of software instructions can just be parallelized (even automatically). How would that be possible when the next instruction in the sequence depends on the result of the last one? And most software today already takes advantage of multiple cores.
@e1337prodigy 5 years ago
Absolutely love your videos. Not only is what you are saying fascinating (and sometimes confusing to get my head around), but the video without your narration is fascinating too; sometimes I have to pause on a bit and just read some of the data or information on screen. I wish I understood this more. Technology is amazing!
@DemiImp 5 years ago
Wait wait wait, hold up. One does not simply multithread code. Only certain segments of code can actually run in parallel, and knowing what can and what can't can be extraordinarily difficult. Other times you need extra steps: duplicating memory, extra variables, recombining things after the parallel segments, etc. It is not trivial. And no hardware can just parallelize things automatically without knowing the full code and what can interact with what. There's something called OpenMP that makes parallelizing loops easy, but there's always a startup cost to creating threads that can hurt performance if the loop is small or not complicated enough to warrant it. Idk who you talked to about automatically threading processes, but they weren't telling you the whole picture, or you misunderstood what they were saying. At best, multiple cores could share their arithmetic units the way hyper-threading does with multiple processes.
@PanduPoluan 4 years ago
THIS. Writing multithreaded code is ... hair-raising. There are race conditions, thread deadlocks, and the problems of shared data. There's a reason most programs are not multithreaded: the effort to debug problems increases significantly. It's a tradeoff between a performance increase -- which won't ever scale linearly with the number of threads -- and ease of troubleshooting/maintenance.

Also: multithreading is not the be-all-end-all solution to performance limitations. Some logic is unavoidably sequential (e.g. transferring money from one bank account to another). Split this into threads and the changed order will potentially cause great grief.

Games might also suffer. Imagine the engine performs two possibly parallel computations in sequence: move the target character, move the bullet, detect whether the bullet hit the target's hitbox. This can theoretically be deconstructed into two threads: (1) move target character, (2) move bullet (and detect hit). If (2) finishes before (1) -- which is very likely since (1) is more complex -- it might register a hit where the sequential process wouldn't, or the other way around, a miss where the sequential process would hit.

So even if there are *safe* ways -- using AI, ML, or what have you -- to "automatically" convert single-threaded processes to multithreaded ones, I don't believe the performance increase would be significant, simply because there are too many places where such a conversion would cause problems. Not to mention an exponentially increasing difficulty in troubleshooting, as the "automatic conversion" might introduce bugs not visible through step-by-step debuggers.
@iridium9512 5 years ago
As someone who has tried multiple times to improve the performance of my code using multiple threads, I can say it is strangely convoluted. You'd think you could just select a portion of code, tell it to run on a different thread, and that'd be the end of it, but no. The computer seems unaware of which core is most free, and sometimes even randomly puts load on a core that is already doing something. That way your program can even take longer to do the same thing. Furthermore, it may take too long to spin up another core, so you lose a large amount of time doing nothing while waiting for the core to start. And if you do it improperly, code on a different core won't even start before the code on the previous core is done. So you need special coding just to make sure parallel execution actually happens across cores. Add to all that that material on multithreaded coding is very scarce, and you can see why people are not really thrilled about coding with multiple threads. I'm no expert; this is just my limited beginner's experience.
@Weaseldog2001 5 years ago
@@spell105 That isn't really relevant if you're running on multiple cores. But it does matter if you're running multiple threads on a single core.
@Weaseldog2001 5 years ago
In practice we usually have different subsystems of a program running in different threads: the GUI or console on the main thread, a subsystem that handles TCP traffic on another, then a state engine running the business logic on yet another, and maybe database subsystems running multiple threads. In this kind of model each subsystem runs asynchronously; typically they communicate using lockable queues.

What I suspect you might be attempting is to have multiple threads tackle different dimensions of a dataset. This is effectively managed by defining the stages of your data processing and keeping a pool of threads. Your data goes into a queue; the threads grab data from the queue, process it, put the results in another queue when done, then go back for another set of data. This requires some up-front planning to work efficiently. I've used it to good effect in image processing apps.
@Guztav1337 5 years ago
Hence why these new systems that could do some parallelism optimizations automatically, removing the hassle, are awesome.
@jnwms 5 years ago
Have you tried using the command pattern?
@Weaseldog2001 5 years ago
@@jnwms I prefer the command pattern over typical state engines. But alas, most programmers don't know what it is or how useful it can be.

Envision a factory floor as the state engine. In a typical system, an order comes in and all of the machines interact with each other, with intimate knowledge of each other's internal workings, to fulfill the order. In code this makes spaghetti easy to write and discourages proper encapsulation. With the command model, the machines are all discrete, and the order comes in with the command. The command acts as a worker to fulfill the order, using the discrete machines and their APIs. This takes a lot of confusion out of the code and reduces complexity. It avoids the need for massive switch statements.
@MatrixJockey 5 years ago
I've been waiting for a new video from you or AdoredTV. It's about time!
@marcobonera838 5 years ago
why not a crossover?
@unclerubo 5 years ago
@@marcobonera838 AdoredTeks?
@Tjecktjeck 5 years ago
Because he's a stubborn fanboy. Mark my words - he's gonna change shoes once again when it turns out that Zen 2 won't have 16 cores.
@marcobonera838 5 years ago
@@unclerubo core tv 😂
@marcobonera838 5 years ago
@@Tjecktjeck Who cares how many cores Ryzen will have, as long as it's more than Intel's. But why do you think so?
@finnsk3 5 years ago
Coreteks is obviously not a developer. There are fundamental limitations that stop you from splitting up a single-threaded task: you need the result of the previous calculation to perform the next.
@kazedcat 5 years ago
That is why you need AI to predict the output and feed it to the next step before the previous step is finished. This is how branch prediction and cache prefetching speed up a superscalar processor; you just need to scale this up to the thread level, a superthreading of sorts. You need a more complicated prediction system than the ones used in superscalar processing, which means you need machine learning as the basis for your prediction system. The JavaScript engines of some web browsers do something similar: they predict the variable types in JavaScript code so they can precompile it, and just recompile if the prediction is wrong.
@finnsk3 5 years ago
@@kazedcat I highly doubt it would be beneficial in most workloads.
@kazedcat 5 years ago
@@htko89 Not all outputs can be predicted. But it is very common for programs to have repetitive actions or binary outputs. For a repetitive action you just predict that the output repeats; for a binary output you pick one. This is the same thing branch prediction does in a superscalar processor, only at the thread level, with OS-level context and application profiling as guides. The goal is not to speed up the worst case but the common case. Most applications are not optimized for multithreading, so there is a lot of low-hanging fruit to harvest.
@Guztav1337 5 years ago
Yeah, predictions of outputs are already happening, and have been a thing for years. That's why the infamous Spectre and Meltdown bugs were a thing.
@kazedcat 5 years ago
@agr metor Yes, but we are trying to solve the problem of too many cores and how to make them useful instead of leaving them idle. Cache will increase, but due to diminishing returns cache is sized to profiled demand. If cache demand increases they will sacrifice a few cores and use the freed-up transistor budget to add more cache. Zen 2 is doubling the per-core L3 size and quadrupling the on-package L3 size, so cache size is increasing faster than core count. Expect that trend to continue: adding cache is easier than adding cores.
@eastcoasthandle 5 years ago
This tech reminds me of Process Lasso's ProBalance.
@456MrPeople 5 years ago
This is at the hardware level though, not software.
@eastcoasthandle 5 years ago
@@456MrPeople Understood. But I wonder what the differences are? I've heard that some online games' anti-cheat prohibits Process Lasso, which is why I stopped using it.
@samixz 5 years ago
@@eastcoasthandle "You heard" and "some online games" are not a good reason to stop using it.
@eastcoasthandle 5 years ago
@@samixz Was it ever addressed otherwise? That's the point of my reply, not what I think I know or heard. Furthermore, what benefit does it offer with today's CPUs? Are there current video demonstrations showing how it benefits us in games today?
@samixz 5 years ago
Video demonstrations? I don't really know. However I do know it helps me when I run 22 game clients on my computer at the same time!
@igorthelight 5 years ago
9:13 - The services that most often cause slowdowns are:
* Windows Update
* Windows Background Intelligent Transfer Service (BITS)
* Windows Indexing
Great video btw!
@testikuskitestdrivr6012 5 years ago
32 years on the PC and it never seems to stop amazing me. Thanks for this interesting video; I love being educated, and it has been a while since these kinds of innovations happened. I'd seen your stuff before, but now I've subbed.
@Luorax 5 years ago
Glad to see a new upload pop up; I've been waiting a while for a fresh video to say how insightful and educational I find this channel. So I'd like to take this opportunity to say a big thank you for all the hard work that must go into every single one of these videos.
@InnSewerAnts 5 years ago
The gigahertz "predictions" back then were very naive and not all that informed about the technology. Frequency is only how many cycles a CPU goes through per second. The biggest performance gain isn't in doing more cycles per second, although that of course helps; the big difference is how much the CPU can do per cycle, and why some CPUs are faster at a lower clock than others at a higher clock.

But in a sense they got it fairly correct in performance terms, if we compare in terms of instruction throughput (MIPS, millions of instructions per second). A 2010 Intel Core i7-980X Extreme Edition can pump out 147.6 billion instructions per second (147,600 MIPS) at 3.33 GHz, so around 25 billion instructions per second per core. A 1999 Intel Pentium III @ 600 MHz can pump out 2 billion instructions per second, so it would need to run at around 7.4 GHz to match one core of that i7, meaning the full i7 is the equivalent of a ~44 GHz Pentium III. Overly simplified and based only on instruction throughput, but not an invalid argument I think.

Other hypothetical CPUs clocked to reach this performance-ish:
Intel i386DX @ 1.1 THz (~143,350 MIPS)
Intel i486DX @ 424 GHz (~147,550 MIPS)
Intel Pentium @ 78.5 GHz (~147,550 MIPS)
Intel Pentium 4 EE @ 48.5 GHz (~147,400 MIPS)
AMD Athlon 64 3800+ X2 @ 20 GHz (~145,650 MIPS)
AMD Phenom II X6 1100T @ 6.1 GHz (~145,000 MIPS)
AMD Ryzen 7 1800X @ 1.7 GHz (~143,800 MIPS)

Also, getting smaller is actually reaching its limit: everything is now so small and so close together that quantum effects start screwing things up. There's a fundamental limit to how small transistors and other features can be, and we're just about there. That's why companies like Intel are exploring other materials like graphene, concepts for utilizing more 3D space in CPU design, and other avenues past this barrier. And no, you'll never have picometre-scale CPUs :'). 1 picometre is far less than the size of an atom, you know, the stuff matter is made of :'). A single helium atom has a diameter of ~62 picometres; a silicon atom is already 0.11 nanometres. Transistor research in labs is already at transistors one graphene atom thick and ten or so wide. Silicon has reached its miniaturization limit and graphene isn't far behind, because you run into quantum problems and, well, the size of atoms themselves.
@Ayumu88 5 years ago
Capitalism is the one thing that is guaranteed to go beyond Moore's Law.
@red2theelectricboogaloo961 5 years ago
yes.
@MrAmalthejus 4 years ago
Also guarantees wealth inequalities and inhumane, unsustainable development
@vangildermichael1767 4 years ago
not even. God will be back for his flock very soon. And this world will accept the beast system. If you don't agree, you can talk with God about that one. He already told me that is exactly how it will be.
@user-rd5nc1nb9f 4 years ago
@@MrAmalthejus Capitalism is the system that gives the least bad side effects and inequalities. Socialism is a meme
@MrAmalthejus 4 years ago
@@vangildermichael1767 you are delusional, see a doc
@thathandsomedevil0828 5 years ago
I now imagine you recording the audio for your videos almost fully reclined in your most comfortable chair, a standing microphone not far from your face, a lit Cuban cigar in your left hand, a glass of whiskey in your right, spitting mad info into the mic. Thanks for the video!
@lubsch1918 5 years ago
Great video! I really hadn't been aware of this possibility before at all. Thank you!
@1Shingami 5 years ago
Liked and shared :) I hate to see that you got a copyright claim, makes me mad
@TashiRogo 5 years ago
That means he was over the proper target.
@rasmusdamgaardnielsen2190 5 years ago
Was that a whole video about instruction-level parallelism (out-of-order execution) without once mentioning that it already exists? Any modern CPU core will reorder independent instructions, distribute them across multiple execution units (ports), and run them in parallel. It is hard to see what in all this is new.
@Coreteks 5 years ago
This is not the same as OoO
@rasmusdamgaardnielsen2190 5 years ago
@@Coreteks Sure, but how? Why should this scheduler be able to infer dependencies better than current hardware? My point is just that the idea of a scheduler which sends independent workloads to different execution units is in no way a new one, so I have a hard time seeing what is. But I really respect you answering critical comments, thanks for that! Really like your content.
@ARBB1 5 years ago
Very handy for current discussions I'm having.
@libertycentral6564 5 years ago
Great video! Thanks for producing it.
@synbios2009 5 years ago
This channel is amazing. I just found it today and I've already watched 3 hours of content. Amazing work.
@BrianMacocq 5 years ago
Didn't AMD already have "Neural Net Prediction Technology" in Ryzen?
@Coreteks 5 years ago
You mean SenseMI? I believe that's for branch prediction, which is not quite the same thing. Prediction and Speculative Execution have been around for a long long time (since 386 at least).
@williamforbes6919 5 years ago
They are using a perceptron network, versus a "neural network," which usually describes a multilayer perceptron network. The single-layer perceptron gives them pretty good branch prediction via ML, but it can only find linearly separable patterns with which to predict the next instruction. Feed-forward neural networks can find nonlinearly separable patterns, with a corresponding differential order equal to the number of layers. There are some super nutty network designs that sidestep this and allow arbitrary dimensionality from a single-layer network, but that is probably outside the scope of what we would see in use here.
@BrianMacocq 5 years ago
@@Coreteks True; at the time it struck me that they emphasized the machine intelligence, learning and adapting. Anyway, great video!
@craiglortie8483 5 years ago
This is why I subbed to you: always opening minds to new ideas and not afraid to make us think in new ways!
@LouisSaver2012 4 years ago
I really love your videos so far. Keep up the good work!
@marcobonera838 5 years ago
14:33 how cuuute! someone made a customized render of the yet to be announced ryzen, resisting the urge to add the second chiplet!
@CossackHD 5 years ago
You mean the still image? The animated one looks like a CPU with ESRAM.
@marcobonera838 5 years ago
@@CossackHD where's the animated one?
@CossackHD 5 years ago
@@marcobonera838 The next clip after the static image at 14:33
@zelexi 5 years ago
What you are proposing is highly improbable. More likely this tech is going to distribute work between fast/slow cores more power-efficiently. The reason your proposal is unlikely is that in order to know whether code is data-parallel, you have to actually execute it. Thus what you propose is essentially impossible without additional software tooling.
@Coreteks 5 years ago
I'll share more once patents are filed and more info can be made public. I'll do a follow up video taking a closer look at how this works.
@chainingsolid 5 years ago
@@Coreteks I'd like to point out that parallelizing single-threaded code in hardware is something most CPUs (desktop/server at least) already do. It's a better approach since you don't need coders to change anything, so I suspect much of the potential speedup in this area has already been captured. What's left is to dangle raw computing power, in the form of more cores, in front of devs and wait for them to jump on it.
@rob99201 5 years ago
@@Coreteks Replayed this again - a point of confusion (maybe only for me): the demos you saw (and recorded for us here) of the performance between games and the CPU stats - this wasn't Soft Machines tech, was it? Because it wouldn't make sense that it could work on different architectures if so, since Intel bought them. Is this an unnamed new company working from a new 'angle' on the multi-core/software problem?
@Coreteks 5 years ago
@@rob99201 Hey. The demos are from a new company, yes.
@rob99201 5 years ago
@@Coreteks Awesome! Thanks for the follow up! Do keep us informed - as Rick says, "Gotta keep those Coors {erhp!) engaged!"
@westsnest2273 5 years ago
YouTuber Coreteks is self-evidently more introspective and articulate in his presentations than most "TechTubers" who, unfortunate as it may be, have attracted far greater viewership and fanfare than he has thus far. While other YouTubers continuously clog up our video queues with thoughtless filler "content", Coreteks has maintained journalistic integrity by sharing ideas carefully and concisely. Anecdotal ideas and personal wisdom are to be found as well, and keep things interesting. Subscribed. Thank you. Keep up the good work, sir!
@osvldo 5 years ago
Thank you so much for your quality work!
@timothyhaug2060 5 years ago
Only one problem with the beginning of your video: they can't keep going smaller and smaller. The reason is that at not much smaller than current sizes, quantum tunneling occurs.
@swift3637 5 years ago
Yup, exactly! I think 5nm is the smallest we can achieve even with the gate-all-around transistor. Definitely will be a great transistor when we get there.
@MightyElemental 5 years ago
Below 7nm, transistors can experience quantum tunnelling though, right? Making 3:45 inaccurate.
@BattousaiHBr 5 years ago
Technically it already happens even way above 7nm, just not as often, and it's not as big of a deal as at smaller sizes.
@0Wayland 5 years ago
As always, amazing job!
@SlaweETV 5 years ago
Thank you for these videos, very good content to learn from. Keep up the work, hope you hit millions soon.
@kostastzimas8650 5 years ago
omg, how did 20 minutes pass so fast...?
@lawthirtyfour2953 4 years ago
Because I'm watching at 3x speed
@davidr.massey419 5 years ago
IBM has a patent (2012) on a graphene chip supposedly 22 times faster than anything Intel has. The thing curtailing advancement is more economics than tech.
@OfficialFront 5 years ago
Owning a patent does not mean you are able to use the technique in series or mass production.
@BattousaiHBr 5 years ago
A patent is only a proof of concept. Just because something can be done doesn't mean it can be done economically. Even if you disregard money completely, precious real-world resources (rare materials, energy, time) are still being invested to make it happen, and sometimes the effort just isn't worth it.
@topdecker1334 5 years ago
I hope your work will never end!
@David-kd7ko 5 years ago
I just want to let you know that after a long night of work, coming home to see you've uploaded another video absolutely makes my day. Thank you so much for your hard work and effort, they're much appreciated.
@WatchingFromHeaven 5 years ago
2:45 - just huge respect for this type of mind
@igorthelight 5 years ago
True! A proper human :-)
@skaltura 5 years ago
It shows Coreteks is not a programmer. Parallel code is not that difficult (seriously); it's all about splitting tasks up. Amdahl's law concerns a single task, not even a single application :) There are always a lot of tasks running; the workstation I am writing this on is running 162 tasks on 882 threads. So for me, the performance boost is not limited to 16 cores even with today's coding.

For example, this very page, a single browser tab, could be made to scale to 100s of threads - and it's not even "that difficult": you simply split the tasks up. It's the very same thing you do when managing huge amounts of data - you split it up in an organized manner. This is why "NoSQL" was such a big deal and so hyped some years ago: it forced unskilled database admins to split up and index their data :)

Parallelizing is only hard when you have to parallelize a single task, for example rendering a single image, and the difficulty depends on the data format. But there is nothing stopping you from rendering a single image on 10,000 threads simultaneously other than available performance and acceptable latency, as I/O channels filling up will increase latency. The true bottleneck is I/O - and has always been. That's why AMD is so heavily emphasizing I/O.
@uzefulvideos3440 5 years ago
I found it far more annoying that he always calls computer programs "code". This isn't entirely wrong, as machine code is also code, but it is misleading.
@dariusduesentrieb 5 years ago
"Parallel code is not that difficult (seriously)". It is much harder than writing single-threaded programs. You need a deeper understanding of the architecture, you need to manage all the threads and all that shared memory properly, and you have to find tasks that can be split up, which may involve designing a new, complex algorithm that can run in parallel. Anyway, I agree with you that this video describes it as if it were almost impossible to write multithreaded programs.
@skaltura
@skaltura 5 лет назад
@@dariusduesentrieb Yes, some knowledge of how computing works at a low level is required, which any person coding anything performance-critical should already know. It's much less difficult than non-coders and low-skilled coders make it out to be (unfortunately, I consider ~90% of coders to be low skilled). The basics are very simple: split up the data.

For example, take an old piece of code I wrote in the 90s, at a time when the only practical way to get multiple cores was a multi-CPU system. I had timers on one thread, sound on one thread, and OpenGL commands on one thread. Adding more threads would've been simple, but there was no benefit at the time (plus it ran at around 200fps anyway). If there had been a need to split it up further, I would've added a thread per audio channel (for SFX), a separate thread for texture loading, one for geometry calcs (i.e. the order in which to draw polygons, and perspective/camera), separate threads for certain effects (distinct draw order), a separate thread to manage the actual calls to the GPU (instead of calculating certain aspects at the moment of the call), etc.

You split the tasks up. When you get down to a single task (a single piece of data), Amdahl's law may apply to your case, but often it is still just a matter of figuring out the separate tasks. Data separation is the keyword: then you need no code to manage multithreaded memory access, and you don't have to give it any thought. When many threads do need the same data, you can often figure out an atomic method. For example, under Linux you can use RAM like a filesystem at pretty much full RAM performance in every regard, and since you manipulate it like an FS, you have atomic functions to utilize; thus no conflicts, and each thread can access the same piece of data with nearly zero performance penalty. As an example, I wrote a caching mechanism 100% in PHP that outperforms memcached (supposedly a high-performance RAM caching solution) by orders of magnitude, by utilizing shm directly.

And the claim that ray tracing only scales to 95% is complete bullshit; you can spawn a thread for every single ray if you want to. Your bottleneck is I/O speed (i.e. cache, RAM and so forth); computing-wise, ray tracing is a pretty much perfectly scaling task.
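For what it's worth, the "split up the data" approach described above can be sketched in a few lines. This is a toy illustration in Python (the original code was 90s C), with `trace_ray` standing in for any independent per-item work such as a single ray:

```python
from concurrent.futures import ThreadPoolExecutor

def trace_ray(ray_id):
    # Stand-in for per-ray work: each ray is independent, so no
    # locking or shared state is needed.
    return ray_id * ray_id  # dummy "pixel value"

def render(n_rays, workers=4):
    # Each worker pulls rays off the shared input; the only
    # synchronization point is gathering the results at the end.
    # (For truly CPU-bound Python you'd use processes, not threads.)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(trace_ray, range(n_rays)))

print(render(8))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Because no ray depends on another, this scales with core count until memory bandwidth becomes the bottleneck, exactly as the comment argues.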
@dariusduesentrieb
@dariusduesentrieb 5 лет назад
@@skaltura "(unfortunately, I consider ~90% of coders to be low skilled)" lol :)
@Delta8Raven
@Delta8Raven 5 лет назад
​@@skaltura You're just wrong about Amdahl. It's just a mathematical way of saying a parallelised function is only as fast as the slowest sequential set of instructions in it, be it IO or what have you. Though you are right about pretty much everything else.
@joaoveiga7816
@joaoveiga7816 5 лет назад
Amazing video. Thanks for the info and the hard work.
@williamforbes6919
@williamforbes6919 5 лет назад
Awesome video man, your depth of research is impressive. Glad to see your thoughts on the implications of a Soft Machines VISC style solution to the current scaling challenges.
@JohnnyWednesday
@JohnnyWednesday 5 лет назад
You should consult more programmers. The bias towards procedural code? has less to do with a lack of skills than you appear to suggest. I can assure you that in the games industry at least? DX12 and Vulkan have greatly changed the way engines are designed. It is NOT a lack of skill. Any idiot can download a lock-free queue library and saturate their CPU within spitting distance of the most hardcore code out there. What you state regarding programmers? that's like claiming that CPU branch-prediction isn't perfect because the people that implemented it are incompetent. And there being a core limit to performance? you've heard of super-computers, right?
@Lightitupp1
@Lightitupp1 5 лет назад
Think he meant that the core limit is in regards to things like games, not the server market.
@peyopeev8909
@peyopeev8909 5 лет назад
And applications, not massive simulations.
@evertchin
@evertchin 5 лет назад
Supercomputers usually don't do time-critical computations, hence the IPC delay doesn't impact performance as much, but business/consumer/gaming computers usually do.
@TheArakan94
@TheArakan94 5 лет назад
@@Lightitupp1 He ignores Gustafson's law when talking about Amdahl. More cores let us do bigger tasks at the same speed, and that's something gamers really want (bigger worlds, more NPCs, better AI, etc.)
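The two laws being contrasted here sit nicely side by side in code; the 90% parallel fraction and 32-core count below are just example numbers:

```python
def amdahl_speedup(p, n):
    # Amdahl: fixed workload. p is the parallel fraction,
    # n the core count; the serial part caps the speedup.
    return 1.0 / ((1.0 - p) + p / n)

def gustafson_speedup(p, n):
    # Gustafson: the workload grows with n, so the serial
    # fraction shrinks in relative terms.
    return (1.0 - p) + p * n

# A 90%-parallel workload on 32 cores:
print(amdahl_speedup(0.9, 32))     # ~7.8x: diminishing returns
print(gustafson_speedup(0.9, 32))  # ~28.9x: bigger task, same time
```

Same hardware, same code, very different conclusions: Amdahl asks "how much faster does today's task get", Gustafson asks "how much bigger a task can I run in the same time", which is the gamers' case above.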
@Sorestlor
@Sorestlor 5 лет назад
Games are like a large simulation. 32 cores for the economy, 32 cores for the physics simulations, 32 cores to work with the gpu on graphics stuff, 32 cores for world simulation, and whatever is left for all your npc's.
@GoodGamer3000
@GoodGamer3000 5 лет назад
I'm here from the future to make fun of you. You got it completely wrong, it actually got 7x faster.
@besssam
@besssam 5 лет назад
ROFL
@lucaschacon7436
@lucaschacon7436 5 лет назад
Thanks man, your videos are opening my mind.
@MrPresident__
@MrPresident__ 5 лет назад
What's the intro/outro song? I love it! Great work again as always! Also, have you considered LTT's Floatplane as an alternative to youtube?
@marcobonera838
@marcobonera838 5 лет назад
Instaliked! Don't know if it's the deep, relaxing voice, the music, the graphs, the actual script, or the vintage images always present at the beginning. Maybe all of them. However, if you can, please enable subtitles :)
@mantisnomo5984
@mantisnomo5984 5 лет назад
It's for the wishful thinking.
@Macatho
@Macatho 5 лет назад
I found the voice to be somewhat fake in the enthusiasm/mystique it conveys.
@rinkedink
@rinkedink 5 лет назад
Sounds like Agent Smith
@SSHayden
@SSHayden 5 лет назад
So Intel basically hijacked that company so its software will never see any platform aside from Intel's chips... Sad.
@Coreteks
@Coreteks 5 лет назад
Well, it's one way to stay innovative. They also get a team of talented chip designers, so it's a wise business move. Hopefully they actually make good use of the IP.
@Zecuto
@Zecuto 5 лет назад
@@Coreteks Any chance that whatever they come up with will be available under the cross-licensing agreement they have with AMD, like AVX, for example? Even if Intel gets a couple years' lead with the new technology, hopefully we won't be left with no alternatives.
@MrMattumbo
@MrMattumbo 5 лет назад
@@Zecuto Well think about it, he was shown a demo from another company demonstrating their version of the tech, granted it was on phones so maybe it's only for ARM-based CPUs, but it proves the technology is being developed in parallel by other firms and should be available to AMD when they decide to pursue it (assuming they are not already).
@Coreteks
@Coreteks 5 лет назад
@@Zecuto I can't say much for now, but it's unlikely. Just like some of the speculative tech inside Intel's chips (for better & worse) are exclusive to them.
@kkgt6591
@kkgt6591 5 лет назад
Wow no shouting, no ridiculous faces as thumbnails, this channel is top notch.
@ahmedp800
@ahmedp800 5 лет назад
Exciting! I am so glad I found your channel! BTW, you never mentioned anything about 64-bit. I am guessing that if software is properly optimized, that would be another boost in performance.
@sinephase
@sinephase 5 лет назад
well, if you look at parallel processing like multiples of frequency, they weren't far off. it just ended up making more sense to do multicore instead for a lot of reasons.
@AhidoMikaro
@AhidoMikaro 5 лет назад
Except they aren't multiples of frequencies.
@KaiserTom
@KaiserTom 5 лет назад
​@@AhidoMikaro For many applications, 2 cores at 3 GHz ~= 1 core at 6 GHz (not to mention the former uses 2-4 times less power, if not more), since the vast majority of applications aren't that strictly sequential. In 2011 the best CPU to come out was probably the i7-3960X, which had 6 cores plus Hyper-Threading at 3.3 GHz. 3.3 * 6 ~= 19.8 GHz-cores of power, and HT generally boosts that by about 30%, to 25.7 GHz-cores (it would be nice to use something like FLOPS for this, but I would need predictions from 2000 in terms of FLOPS). Various architecture optimizations boost it further, but from those values alone we've blown past the predicted 10 GHz mark. Even considering Amdahl's law, at only 6/12 cores we don't hit diminishing returns too hard for many applications, especially since in real-world use some cores end up dedicated to things like the OS, freeing an entire set of cores for more demanding applications. That means an application that makes no direct use of the multicore architecture still benefits noticeably (it is no longer interrupted so the OS kernel or the web browser you also have running can do some random thing).
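The back-of-envelope tally above, spelled out ("GHz-cores" is the commenter's informal metric, not a real benchmark, and the 30% Hyper-Threading uplift is their assumption):

```python
# Rough "GHz-cores" arithmetic for the i7-3960X:
base = 3.3 * 6           # 6 cores at 3.3 GHz
with_ht = base * 1.3     # assume ~30% uplift from Hyper-Threading
print(round(base, 1), round(with_ht, 1))  # 19.8 25.7
```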
@sinephase
@sinephase 5 лет назад
@@AhidoMikaro I didn't say they are?
@sinephase
@sinephase 5 лет назад
@@KaiserTom For sure. I'd rather have 2x3Ghz cores running 2 programs simultaneously than 1x6Ghz core trying to divide itself between the 2.
@rixxir
@rixxir 5 лет назад
Parallel programming is fairly easy, but it has its limitations: not everything can be made parallel. Only operations that are logically separated can run in parallel; otherwise you lose a lot of time on thread synchronization. I doubt that AI-based core management will work properly, but we will see.
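A toy example of the "logically separated" case: each thread owns its own slot, so the only synchronization needed is the final join. Python here, purely illustrative (in CPython the GIL limits true parallelism, but the structure is the same in any language):

```python
import threading

N_THREADS, N_ITERS = 4, 100_000
partials = [0] * N_THREADS  # one private slot per thread

def count_private(slot):
    # Logically separated work: each thread only ever touches its
    # own slot, so no lock (and no synchronization cost) is needed.
    for _ in range(N_ITERS):
        partials[slot] += 1

threads = [threading.Thread(target=count_private, args=(i,))
           for i in range(N_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sum(partials))  # 400000
```

Had all four threads incremented one shared counter instead, they would have needed a lock around every increment, and the time lost to that synchronization is exactly the cost the comment describes.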
@dkevans
@dkevans 5 лет назад
I recall in the late 90s reading about Transmeta using parallel processing to take batches of CPU-level instructions, turn them into Very Long Instruction Word sets, parse each instruction and reorder the output so that you consistently get FIFO ordering within the processor itself.
@selsuru
@selsuru 5 лет назад
The VISC architecture sounds really cool
@piotrfila3684
@piotrfila3684 5 лет назад
Compilers could also be vastly improved. Code could be parallelized during compilation, possibly resulting in an even higher performance increase than with the 'AI' shown. You can't really expect all programmers to suddenly change habits, but you can change the tools they use. Besides, computers can optimize code much better than humans.
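A small illustration of the idea: the explicit loop below states *how* to compute, while the whole-array form states *what*, leaving the tooling free to vectorize or parallelize it. NumPy stands in here for an auto-vectorizing compiler; the `saxpy_*` names are made up for the example:

```python
import numpy as np

def saxpy_loop(a, x):
    # What most programmers write: an explicit sequential loop.
    out = [0.0] * len(x)
    for i in range(len(x)):
        out[i] = float(a * x[i] + 1.0)
    return out

def saxpy_vec(a, x):
    # The same computation expressed as a whole-array operation,
    # which NumPy dispatches to vectorized C under the hood.
    return a * x + 1.0

x = np.arange(4.0)
print(saxpy_loop(2.0, x))          # [1.0, 3.0, 5.0, 7.0]
print(saxpy_vec(2.0, x).tolist())  # [1.0, 3.0, 5.0, 7.0]
```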
@Coreteks
@Coreteks 5 лет назад
This is true but running things at compile-time hasn't been great for a lot of things. You can see how some native apps on macOS run so well because they execute at runtime instead (compared to windows). Final Cut Pro is a good example.
@piotrfila3684
@piotrfila3684 5 лет назад
@@Coreteks I have always assumed compile-time optimization was superior, but admittedly you do have a lot more information during runtime. (btw including me in the video was pretty cool :) Don't worry, everyone is wrong sometimes, just not everyone wants to admit it, and besides, the future is never certain in this field. Who knows, maybe the next big thing is something entirely different ¯\_(ツ)_/¯)
@Coreteks
@Coreteks 5 лет назад
@@piotrfila3684 Predictions are hard, especially about the future ;)
@bultvidxxxix9973
@bultvidxxxix9973 5 лет назад
Like en.wikipedia.org/wiki/Automatic_vectorization ?
@vimmaster1526
@vimmaster1526 5 лет назад
"we have doubled the performance of CPU" - we will never hear, much more profitable for AMD and INTEL to provide processors in smalls steps, improving some new features, energy savings, 10-20% higher speed etc.
@sack8439
@sack8439 5 лет назад
Exactly, if they didn't bother with making as much money as possible at an early stage, we would be so advanced in microprocessors that they would be laughing at us right now. But we just had to have greedy companies....
@Greywander87
@Greywander87 5 лет назад
This is a distinct possibility. It does carry the risk that one of them will decide to go ahead and release an amazing CPU that blows their competitor out of the water, but it isn't hard to imagine that they've come to some sort of agreement not to do something like that. There's still hope, though, as there are other CPU manufacturers out there, and I think it's just a matter of time before some upstart with a dream comes along and pulls the rug out from under them. They can only hold on to such tech for so long before someone else releases it anyway.
@JirayD
@JirayD 5 лет назад
Where did you get that phenomenal picture of the Ryzen 3k Engineering sample at 14:20?
@piotrfila3684
@piotrfila3684 5 лет назад
coreteks: uploads a video me: cool me: realizes I was featured in the video me: HOLY SHIT
@Anders01
@Anders01 5 лет назад
Interesting automation of parallelizing software. However, notice that Huawei recently launched the Mate X, a foldable phone that is years ahead of anything Apple has. Huawei has been working on it for 3 years, I think the CEO said. The same will happen with CPUs, I predict: Huawei and/or other big Asian tech companies will soon release ARM CPUs with amazing price/performance. The x86 architecture will no doubt remain for many years, but I expect ARM (or some new architecture) to gradually overtake it.
@BattousaiHBr
@BattousaiHBr 5 лет назад
the problem with replacing x86 is all the compatibility issues. i can see it being replaced eventually, but certainly not for ARM. it'll have to be some new and very mature ISA that is significantly better than x86 to make up for the effort of migrating.
@GeorgeTsiros
@GeorgeTsiros 5 лет назад
all these graphs, for good or bad, omit the most important metric of performance, which COMPLETELY overshadows all other hardware metrics: software quality. You guys want faster computers? Start demanding higher quality software. edit: ah, you kinda cover this, too, nice.
@dangerfar
@dangerfar 5 лет назад
Incredibly high quality video, thank you for all the knowledge.
@Fee.1
@Fee.1 5 лет назад
Did you leave hints/Easter eggs during the video that hint at the company you were speaking of ? And was it one company or multiple that were being courted?
@MrC0MPUT3R
@MrC0MPUT3R 5 лет назад
Knock, knock. Race Condition.
@franklincollintamboto8637
@franklincollintamboto8637 5 лет назад
Knock knock, segmentation fault
@korakys
@korakys 5 лет назад
Liked after watching. You, sir, are interesting.
@WillArtie
@WillArtie 5 лет назад
Great vid - first Ive watched of yours.
@gaijinkuri684
@gaijinkuri684 5 лет назад
Your voice is amazing. So easy to listen to. So relaxing
@TheBinaryHappiness
@TheBinaryHappiness 5 лет назад
Liked before watching, hell yeah! You are awesome, sir.
@peterwilkinson1975
@peterwilkinson1975 5 лет назад
when u drop what ur doing to watch this video :D
@Muratcharms
@Muratcharms 5 лет назад
BinaryHappiness, so what do you do when you like seeing that content has been uploaded from your favorite channel, but not the content itself?
@jonathanlebon9705
@jonathanlebon9705 5 лет назад
Was supposed to sit down with the Mrs and watch something.. saw your video pop up and was like (hang on babe... gimme 20). +1 for the prediction.. we should indeed be positive about the future.. excited, even, as both competition and current CPU limitations will drive innovation. See you in 2 years. ;)
@jeremijakrstic1968
@jeremijakrstic1968 5 лет назад
As always, top-notch content. Keep up the good work.
@MatthewXLY
@MatthewXLY 5 лет назад
Just subbed. Now I have another techtuber whose videos I look forward to as much as AdoredTV's.
@JJsTechEsports
@JJsTechEsports 5 лет назад
Great video. Tiny nitpick: the power increase from pushing frequency alone is linear, but more frequency requires more voltage, and voltage is squared in the equation, so the overall scaling is quadratic/polynomial, not exponential.
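The relation in question, with illustrative numbers (the 10% voltage bump needed for a 20% clock bump is an assumption for the example, not a measured value):

```python
# Dynamic CMOS power: P ≈ C * V^2 * f (C = switched capacitance).
def dynamic_power(c, v, f):
    return c * v ** 2 * f

base = dynamic_power(1.0, 1.0, 3.0e9)     # 3.0 GHz at 1.0 V
boosted = dynamic_power(1.0, 1.1, 3.6e9)  # +20% clock with ~+10% voltage
print(round(boosted / base, 3))  # 1.452: ~45% more power for 20% more clock
```

Frequency enters linearly, voltage quadratically; since voltage must rise with frequency, the combined growth is polynomial (roughly cubic in frequency), which is the nitpick being made.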
@BattousaiHBr
@BattousaiHBr 5 лет назад
isn't squared the same as exponential?
@JJsTechEsports
@JJsTechEsports 5 лет назад
@@BattousaiHBr No, check out WolframAlpha and plot 2^x, then plot x^2; you'll see that exponential growth is bonkers compared to squared growth.
@BattousaiHBr
@BattousaiHBr 5 лет назад
@@JJsTechEsports Depends on what x is in 2^x. If it were 2, then it is literally the same thing as squared. In the case of x^2, 2 is still the exponent. I think we're arguing semantics lol
@shresthabijay26
@shresthabijay26 4 года назад
@@BattousaiHBr No, it's not semantics. You have some misconceptions regarding functions.
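A quick check of the two growth rates being argued about:

```python
# x^2 (polynomial) vs 2^x (exponential): they coincide at x = 2 and
# x = 4, after which the exponential pulls away dramatically.
for x in [2, 4, 8, 16]:
    print(x, x ** 2, 2 ** x)
# 2 4 4
# 4 16 16
# 8 64 256
# 16 256 65536
```

In x^2 the exponent is a constant; in 2^x the variable is in the exponent. That structural difference, not the particular numbers, is what makes one "exponential" and the other not.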
@Luca-oc8iw
@Luca-oc8iw 5 лет назад
After 15 seconds I liked! Top creator
@melio6768
@melio6768 5 лет назад
The quality of your videos is outstanding
@buenaventuralosgrandes9266
@buenaventuralosgrandes9266 5 лет назад
Yeah, as an EE minoring in Computer Engineering, I've been taught multithreading and thread-allocation coding, specifically at the hardware level (i.e. assembly) and sometimes down to the kernel.
@arthurwulfwhite8282
@arthurwulfwhite8282 5 лет назад
Programmers aren't being taught to code sequentially. I don't know where you got that misguided notion from. I studied at university 10 years ago, and the first course we *had* to take was about functional programming, the most parallelizable paradigm. The last course I took was about multi-threaded programming; it was a very popular course. So making a blanket statement like "programmers are being taught to code sequentially" is just erroneous.
@neilbedwell7763
@neilbedwell7763 5 лет назад
That might be your experience, but aren't the majority of software development courses, even at university level, teaching OOP?
@arthurwulfwhite8282
@arthurwulfwhite8282 5 лет назад
​@@neilbedwell7763 There is normally a (single slightly mediocre) course on OOP and it's vital to understand that OOP is orthogonal to Multi-Threading. In fact, one of the popular books on Multi-Threaded Programming: "Art of Multiprocessor Programming", uses Objects in Java. There is also a single course on procedural programming. Overall, it is up to the student and there are courses in multi-threaded and distributed computing which are gaining popularity as well as Data Science and Deep Learning which are highly parallelizable. Except for AI and some Video & 3D, why would anyone want the extra computing power per-core? Gaming seems to be mostly bottle-necked by the GPU which is a processor with an even larger number of cores that are being pushed to the limit. en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units#GeForce_20_series The reason there isn't a lot of multi-threaded code is that it's harder to write and to maintain. Universities are on board with fancy things like functional and multi-threaded programming. The industry's supply? It is growing steadily with the demand.
@AnonymousUser77254
@AnonymousUser77254 5 лет назад
Arthur Wulf White I did multithreaded processing first year, second semester of computer science. And that was "low level" C, not utilising any libraries.
@BattousaiHBr
@BattousaiHBr 5 лет назад
@@arthurwulfwhite8282 actually games are bottlenecked by the CPU a lot more often than you think, especially older ones and pretty much every MMO.
@arthurwulfwhite8282
@arthurwulfwhite8282 5 лет назад
@@BattousaiHBr I am talking high-end. The discussion is about what's possible with the best equipment today. I seriously doubt that you'll *often* get bottlenecked by the CPU available on a game. Now, I understand it depends on the resolution... but I normally see the CPU is not at 100% and the GPU is up there. Maybe I am used to high-res and ultra details? I guess the question is what settings you run the game at. In most modern games, I don't feel the CPU is the cap. Could you show some examples of being capped with a high-end CPU on some modern games? About older games... I am guessing they aren't a great example cause they probably were designed with weak GPUs in mind and GPUs improved a lot in the past few years. I don't know about MMO... I don't play them.
@RaptorJesus.
@RaptorJesus. 5 лет назад
no "but can it run Crysis" comment?! Internet! you have failed me!
@sisco5153
@sisco5153 5 лет назад
For real. Half life 3 not confirmed
@whatthef911
@whatthef911 5 лет назад
The Lost In Space TV show was set in 1997 when space travel was common for everyone.
@markeyboi6545
@markeyboi6545 5 лет назад
As many comments have stated, some information in this video is incorrect. 1) Below 5nm is currently impossible. Electrons jump between the gates, and until that is solved, 5nm is a hard limit for transistor size. 2) As a developer myself who cares strongly about performance, I can tell you there are only small subsets of a program that can be multithreaded. Programs work by doing calculations and then feeding that into the next calculation, it's not very often that you can do the next one without having the result of the first one. Which also means that the "hardware/kernel" level automatic multithreader sounds like BS. This is something we would have to see to believe, not just a demo video that could easily be BS. Most workloads can never be more than 50% multithreaded, so it will be interesting to see.
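The dependency point above can be shown in a few lines: the chained loop cannot be parallelized because each step consumes the previous result, while the independent calls can (Python, purely illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def step(x):
    return x * 2 + 1

# Inherently sequential: each iteration needs the previous result,
# so no amount of cores helps this chain.
chained = 1
for _ in range(5):
    chained = step(chained)
print(chained)  # 63

# Independent work: no call depends on another, so it parallelizes.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(step, [1, 2, 3, 4]))
print(results)  # [3, 5, 7, 9]
```

Real programs are a mix of both patterns, which is why the parallelizable fraction, and hence Amdahl's ceiling, varies so much from one workload to another.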
@certaindeath7776
@certaindeath7776 5 лет назад
we will be able to play arma3 with 144fps somewhere in the future? *wet child eyes*
@GuyFromJupiter
@GuyFromJupiter 5 лет назад
Lol, yeah that would take some serious silicon sorcery. I really hope it happens, but I, with my admittedly very limited knowledge, am pretty skeptical of this actually becoming reality anytime soon.
@Muratcharms
@Muratcharms 5 лет назад
As far as I understand, these innovations are more like auto-optimization... but they still don't solve the dependency problem between executions, which is actually a logically fundamental thing when you try to do something. This AI... hmm... maybe the iGPU can accelerate it, but then you'll face some latency... the AI calculations alone are going to cause some extra latency, which can only pay for itself if the workload itself takes more time... this is like the story behind the multi-core/frequency trade-off... sometimes there will be negative scaling, I guess...
@laurelsporter4569
@laurelsporter4569 5 лет назад
That's why we're still using CPUs. "AI" as seen in GPU hardware is mostly machine learning working on images and other highly parallel data sets. AI in the sense of code that makes decisions for behaviors doesn't usually fit the massive vector GPU style of processing. Translating code in real time, like with Transmeta's CPUs, and VISC as in this video, takes too much time, and memory bandwidth, for a lot of code. In some paths, a few cycles here and there really matter. Splitting up processing also adds latency between the actual processors that it's split up amongst. Not that it has no promise, but it's not going to take over and revolutionize anything. Work that is not bursty, like graphics, audio, and non-trivial database processing, could benefit from those kinds of middleman optimizing systems quite a bit.
@kazedcat
@kazedcat 5 лет назад
You predict the dependency and just re-roll when the prediction is wrong.
@laurelsporter4569
@laurelsporter4569 5 лет назад
@@kazedcat That's what hardware already does. x86 CPUs have been doing that since the Pentium. To do it better requires either a lot more power or sacrifices in latency-sensitive code. Now, if you could predict ahead of time what that code is, and profile it while running it as-is, only changing it based on current state in later iterations *if* it makes sense to do so, there might be some real potential there. But that's far from trivial, and it's going to require a lot of statistical processing (the marketing buzzword "AI").
@kazedcat
@kazedcat 5 лет назад
@@laurelsporter4569 You could analyze the code while it is running and save a multithreaded translation for the next run. So the first run of the program would have no multicore acceleration; the second time you run it, it would. The more you run the program, the more confident the prediction gets and the faster the application runs. This makes use of a massive multicore chip by first running the analysis on spare cores, then using more cores to accelerate the application. You could even run the analysis during installation of the program, so that it is already accelerated the first time you run it.
@laurelsporter4569
@laurelsporter4569 5 лет назад
@@kazedcat That will have too high a cost between the cores, though. The original code would have to have been written with something like that in mind; otherwise it will have created dependencies that keep it from working. Such profiling and recompiling must still make the code act exactly as written; you can't just pull more threads out. Micro-threads, threadlets, etc. will still end up with heavy memory and power costs. I'm sure that what they're doing is quite a bit more subtle.
@N0N0111
@N0N0111 4 года назад
@coreteks When will you tell us more about the single-thread-to-multi-thread software that works at the kernel level?
@amrohendawi6007
@amrohendawi6007 5 лет назад
cant imagine better tech reviews on youtube !! respect for u sir !!
@kennethdarlington
@kennethdarlington 5 лет назад
Performance will be bottlenecked by I/O then! Oh, wait...
@davidtorto9517
@davidtorto9517 5 лет назад
No, I think SSDs and Thunderbolt can handle this.
@antoniobortoni
@antoniobortoni 5 лет назад
How can one not be optimistic about the future?? The future looks bright and powerful.
@mathieumansire372
@mathieumansire372 5 лет назад
In what context? The technological future is bright, the biological future not so much... at this point I believe AI is the future, not humans
@TheFootballstar5588
@TheFootballstar5588 5 лет назад
I just want my games to run better lol
@mathieumansire372
@mathieumansire372 5 лет назад
@@TheFootballstar5588 lol yeah that will happen, one day you will be part of the matrix (maybe you already are)
@diskob0
@diskob0 5 лет назад
I like the music you use, can you tell me what tracks you used?
@MichaelBartonMTS
@MichaelBartonMTS 5 лет назад
Good stuff, Keep it up!
@ozarkfive2519
@ozarkfive2519 5 лет назад
And this... is to go... even further... beyond... *Yells in Super Saiyan*
@szalor06
@szalor06 5 лет назад
Haven't even watched yet but already liked it
@andrewmcfarland57
@andrewmcfarland57 5 лет назад
Every time I think i'm too old and cynical to give a sh*t about tech developments anymore, Coreteks puts out another video like this and suddenly i find myself on an all-night Red Bull & research binge. GIVE THIS GUY MONEY.
@Ay-xq7mj
@Ay-xq7mj 5 лет назад
These predictions are gold with foresight.
@jonmichaelgalindo
@jonmichaelgalindo 5 лет назад
More cores = more unrelated tasks in parallel. That has no limit.
@VegetoStevieD
@VegetoStevieD 5 лет назад
How advanced will memes be in 2030?
@marselshtylla
@marselshtylla 5 лет назад
they will have 2 sentences the same and one of them will have bad grammar :D
@nekroneko
@nekroneko 5 лет назад
They will beam them memes to your dreams
@pf2611
@pf2611 5 лет назад
This is really informative, you should have more subs.
5 лет назад
What a professor!!! Keep up the good videos! Greetings from Portugal
@LordAlacorn
@LordAlacorn 5 лет назад
No need to wait 2 years to joke about you. ;) There won't be 3nm or 1nm: transistors are 6 atoms wide right now, and further shrinking is impossible due to quantum tunneling. That's why we see 2.5D and 3D chip stacking.
@atticusbeachy3707
@atticusbeachy3707 5 лет назад
An atom is around 1 Angstrom or 0.1 nm in diameter. The current 7 nm chips are 70 atoms wide. Not 6. Edit: Silicon atoms are closer to 0.2 nm in diameter, so that would be around 35 atoms wide.
@jeoffreyauscia6841
@jeoffreyauscia6841 5 лет назад
It would be possible to switch to entangled particles rather than transistors to shrink the size and achieve more power; however, the scientist responsible for it is not yet born. Tesla's wireless energy processing could also be considered.
@LordAlacorn
@LordAlacorn 5 лет назад
@@atticusbeachy3707 What Intel calls a transistor is the part without the gate; the gate actually increased in size to counter quantum effects.
@matthewvandeventer3632
@matthewvandeventer3632 5 лет назад
Honestly, 4 cores at 3.5 GHz running in parallel makes 14 GHz, so I guess it was pretty accurate, kind of.
@dire_prism
@dire_prism 5 лет назад
Extra cache space will definitely help a lot, in particular for applications coded with multiprocessing in mind, with as little data sharing between threads as possible. As it stands now, even if you code perfectly parallelizable applications and eliminate all data sharing, including false sharing, the threads still compete for cache space.
@flukymaze
@flukymaze 5 лет назад
This is the most resourceful, knowledgeable video and the best learning curve I've ever received in my whole life... not even joking... it just makes so much sense and fits the state of the technology industry; it's just insane how good this video was...