OOF. I don't feel sorry for Intel. They got complacent and bloated. And you just know... Intel executives were celebrating the imminent collapse of AMD as a meaningful competitor.
GoodGamer3000 AMD is gone, it's TSMC now. Weirdo channel, what did he mean, nerdy humor? Intel is the only party still producing! He just reads weirdo text... unable to understand it... or run anything on them, 200% weirdo dude!!! That sound... a grave voice it is...
@@blitzwing1 The PC space isn't going anywhere. Sure, the kiddies might all be stuck on their smartphones and tablets, but when was the last time that you went to work and they issued you a tablet instead of a PC?
The year is 2103... *Child leafing curiously through an old computer book... "Grandpa, what was an Intel?" *Grandpa smiles knowingly "It was a company in the early days of computing. I think they made network cards, and really bad graphics cards..."
RISC is basically a mips with simple machine instructions available, which means it can execute many more instructions than CISC, but each operation will also be translated into many more machine instructions. CISC is a complex instruction mips, which is in many cases more efficient. There are many further details that this shitty channel doesn't cover or know about. An electrical engineer here. Flies away
@@idojac6738 Not really. MIPS stands for Microprocessor without Interlocked Pipeline Stages, which is a type of RISC (reduced instruction set computer) ISA (instruction set architecture). The MIPS and ARM architectures are RISC, while the Intel x86 architecture is CISC (complex instruction set computer). RISC and CISC refer to their types, therefore 'RISC is basically a mips' or 'CISC is a complex instruction mips' is wrong. And RISC-V, mentioned by the OP, is a RISC ISA from an open-source project started at UC Berkeley. Also, you can't simply say that CISC is more efficient; it really depends on the implementation.
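To make the RISC/CISC contrast above concrete, here's a toy Python sketch (the instruction names, registers and memory layout are invented for illustration, not any real ISA): one CISC-style memory-to-memory add does the same work as four RISC-style load/add/store steps.

# Toy model of the RISC vs CISC trade-off: fewer complex instructions
# vs. more simple ones. Everything here is made up for illustration.
memory = {0x10: 5, 0x14: 7, 0x18: 0}
regs = {}

def cisc_add_mem(dst, a, b):
    # One CISC-style instruction: add two memory operands into memory.
    memory[dst] = memory[a] + memory[b]

def risc_program(dst, a, b):
    # The same work as explicit RISC-style steps.
    regs["r1"] = memory[a]                 # LOAD  r1, [a]
    regs["r2"] = memory[b]                 # LOAD  r2, [b]
    regs["r3"] = regs["r1"] + regs["r2"]   # ADD   r3, r1, r2
    memory[dst] = regs["r3"]               # STORE [dst], r3

cisc_add_mem(0x18, 0x10, 0x14)   # 1 (complex) instruction
risc_program(0x18, 0x10, 0x14)   # 4 (simple) instructions
print(memory[0x18])              # 12 either way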
Intel has been so greedy in the past, especially when AMD stopped being a threat... so happy AMD is making big moves. I'll be going with AMD for my next desktop.
Samuel Aubrey AMD is gone, TSMC is producing, it's an empty shell company now... Greedy, Intel??? They still produce cheap K CPUs for gaming!!! Why do you need this grave-voice guy? He doesn't understand, he's just a mumbling weirdo... Why do you say happy? Radeon 7 good? Not needing good?
@@dirtkiller23 playing devil's advocate here: a poor command of the English language (which lucas rem clearly displays) does not equate to lack of intelligence.
"If intelligence was a cake, unsupervised learning would be the cake, supervised learning would be the icing on the cake, and reinforcement learning would be the cherry on the cake. We know how to make the icing and the cherry, but we don’t know how to make the cake." -Yann LeCun
Unsupervised learning has actually been possible for a while now. Or at least kind of. You basically make one neural network that checks how good the results of the actual neural network are. Look it up, there are some really great results researchers have been able to achieve with this. Edit: it's called a GAN (generative adversarial network)
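For anyone curious what that two-network setup looks like, here is a minimal GAN sketch (PyTorch assumed; the layer sizes, learning rates and toy "real" distribution are arbitrary choices, not from any paper): the discriminator scores how good the generator's outputs are, and both improve by competing.

# Minimal GAN: D learns to tell real from fake, G learns to fool D.
import torch
import torch.nn as nn

real_data = torch.randn(256, 2) * 0.5 + 3.0   # toy "real" distribution

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Train the discriminator: real -> 1, fake -> 0.
    fake = G(torch.randn(256, 8)).detach()
    d_loss = (loss_fn(D(real_data), torch.ones(256, 1)) +
              loss_fn(D(fake), torch.zeros(256, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator: make D output "real" (1) on its fakes.
    fake = G(torch.randn(256, 8))
    g_loss = loss_fn(D(fake), torch.ones(256, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(5, 8)))  # samples should drift toward the real mean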
They say that those who fail to learn from history are destined to repeat it. An understaffed, under-resourced and overworked engineering team was one of the factors in the demise of two of the early pioneers in this industry: Atari (yes, really) and Commodore. All things considered, it's amazing what their teams managed to produce, but management was the limiting factor in both cases, eventually reducing both companies to footnotes that aren't even known or remembered by the general public anymore.
You know how it is at these big companies: management sends sales people out everywhere, they come back with requests from clients, and management throws all those requests at the research department. In the end they have many types of products in many markets, but are a leader in few of them, if any.
IMO AMD is going to stick with current model names and core counts, and the Ryzen 3600 will remain 6C/12T. The model range will be:
Ryzen 5 3600/X: 6C/12T
Ryzen 7 3700/X: 8C/16T
Ryzen 9 3800/X: 12C/24T
Ryzen 9 3900/X: 16C/32T
Time to watch my favorite tech-ASMR channel again. Edit: But for real though, @Coreteks your videos are the best on YouTube, so keep up the good work!!! And please make more videos on the future of A.I., we're all dying to know.
Tetsuya AMD = TSMC now, Intel is the only party still producing! What is this, a nerdy weird humor show? Lithographic CPU production: ASML and Intel! They own the market!
@@officalextendonecktm4292 Yea I'm in the same boat, can't tell if he's saying Jim is a poor analyst or if he's saying Jim is one of the few good ones. Could be taken either way.
@@PulseCrisisMusic I read it as that he's a good analyst. Very few saw the advantages of MCM design so early on, and attempted to tell others. His videos on MCMs, especially the one regarding the use of active interposers, are truly excellent highly technical content. Sure, he's just reviewing a couple of journal articles, but the public generally doesn't know to look for this stuff, and/or often can't really understand it when they do. It's totally my jam, but I know I'm pretty atypical this way; this shit is super cool so any attempt to popularise it is very welcome by me!
It's kind of bad that AMD is going to obliterate Intel because we're just going to get another monopoly pretty quickly. It's likely that Ryzen 3000 will be the last decent priced AMD cpus.
Ruohong Zhao Intel is the last party still producing, the rest has been outsourced to Asia... AMD, Nvidia, not able to produce anything themselves, empty shell companies... Monopoly? You noobs only need Nvidia here, the only party still able to develop GPU computing...
I thought you were going to talk about the newfound Hyper-Threading issue, but you covered something else that I really didn't know about. Good video and great info. Thanks.
I'll be switching from Intel i7 to Ryzen 7 or 9 when I come to upgrade my CPU. The security issues and the price/performance ratios are just not in Intel's favour.
Seems like this should dovetail nicely with another suggestion from others - to have a look at RISC-V. Why? Because it's designed to support vendor-defined instruction set extensions. So the vendor can define some domain-specific instructions that process data through specialised hardware blocks - say, crypto or AI or other computationally-expensive operations, which would otherwise be implemented as many consecutive general purpose instructions but which can be performed by said block(s) in only a few cycles (or even a single cycle!)
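As a loose software analogy for why such a fused instruction wins (a Python sketch; int.bit_count(), available from Python 3.10, stands in here for a hypothetical single-cycle population-count block in a vendor extension): the general-purpose version takes one shift/mask/add step per bit, the specialised one takes a single operation.

def popcount_general(x: int) -> int:
    # Many simple general-purpose steps, like a long instruction stream.
    count = 0
    while x:
        count += x & 1
        x >>= 1
    return count

def popcount_accelerated(x: int) -> int:
    # One call, standing in for one custom instruction.
    return x.bit_count()

x = 0xDEADBEEF
assert popcount_general(x) == popcount_accelerated(x)
print(popcount_accelerated(x))  # 24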
"Intel is dead"... Not at all likely. People said the same thing back during the Intel Netburst years and then people said the same of AMD after Bulldozer and prior to Zen. Folks thought nVIDIA were dead back in the GeForceFX days and many counted ATi out after the 2900XTX. It's the same cycle repeating itself. Intel are working on technologies not that dissimilar to those AMD employed with Zen (moving towards chiplets). For now, Intel will suffer from losses but just remember how many losses in a row AMD took before returning to profit. In other words... we're a long ways away from being able to claim "Intel is dead".
Richard La Rose Not only that, VIA and AMD basically just handed their IP over to China... That would be like DEC handing their IP over to the Soviets. Worse, actually, because the US didn't trade with the Soviets. Look at every company that has made the China product-market move: they get burned, China steals their IP and makes a crappy version of it, and then the company in question has to go pound sand because they have no recourse. Intel is smart to retain their own fabs even if it means delaying newer technology. That's just one reason among _many_ to laugh at this video. These Euros never seem to understand tech history or how markets work (not all Euros... just a fair number), yet these same people are so pompous and arrogant toward everyone else, characterizing everyone as idiots and/or uneducated and the like... It's astonishing, really. Anyway, we too can laugh at this video in 20 years, while he (the video maker) pretends he never made it and laughs that anyone remembers he said this.
You're missing the bigger picture and the whole analysis aspect. One single bad product or product line doesn't render a company dead. Intel was still outselling AMD in the Athlon/Netburst days because of bribes on multiple levels and brand familiarity. I'm pretty sure Nvidia still sold well with their FX series, and in any case those weren't all that bad, and they also had more or less good products in development. Just 6 months after the 2900 XT came the 3000 series, which was great. The thing with Intel now is that they have serious manufacturing issues that have been going on for years, and they have nothing new coming in the near future. Looking at their roadmaps and statements, you get the impression that they don't really have a plan or even a strategy. @Peter Lamont There's a middle ground between giving everything to China and keeping all fabs closed off. I'm doubtful of the fruitfulness of the China deals, but even so, the future looks very bleak for Intel and very bright for AMD.
@@snetmotnosrorb3946 Yes, manufacturing 10nm monolithic dies is very hard, but Intel also has 3D stacking with Ice Lake, meaning that not every "chiplet" on there is going to be 10nm. Just like what AMD is doing, except Intel is also going to stack the chips atop one another (like HBM, which AMD co-developed with SK Hynix). I expect Intel to be back in the race in 1-2 years' time.
But how? Will they have fixed their 10 nm by then? It looks pretty dead still. Or will they suddenly have 7 nm? We haven't really heard anything about that. Will they have a new architecture? There are no concrete rumours AFAIK. And if you look at their (leaked) roadmap, they'll have nothing new this year and pretty much nothing new next year: a 10-core on the same arch and same node, so even more expensive to make, unless they ditch the integrated graphics. Possibly something 10 nm for laptops and servers, but still only in limited quantities. The stacking, what's that gonna be on, 14 nm? That won't save them much. You're just assuming that they have stuff and will come back soon because you cannot grasp that such a large and wealthy company can fail beyond recoverability. Well, look at Sony. Sure, they're not dead, but it's a completely different company from 10 years ago. Back then they were still a large manufacturing company and a pioneer in home electronics; today they're mostly a brand.
@@snetmotnosrorb3946 Oh, here's the thing... Intel's 10nm is a denser process than TSMC's 7nm, so it's not an apples-to-apples comparison. From what I've heard, Intel is just about ready to release their 10nm in the December 2019/January 2020 time frame; I read that the first 10nm CPUs are going to ship in a Dell gaming system. Intel is late, very late, to the party. For now AMD enjoys a comfortable and well-deserved win.
Excellent video, Coreteks. It truly is mindblowing how much the general public is missing the downward spiral Intel is in. Actually, what's truly scary for me is that people seem to have forgotten that 5% performance increases per year are nothing to celebrate. People got used to it, and would have stopped questioning it entirely if Ryzen hadn't come out when it did...
Would definitely love a video on domain-specific accelerators and fixed-function blocks in hardware. If there's someone who can get state-of-the-art tech through to the commoners like us, it's you.
May's Law states, in reference to Moore's Law: “Software efficiency halves every 18 months, compensating Moore's Law.” David May (born 24 February 1951) is a British computer scientist. He is a Professor in the Department of Computer Science at the University of Bristol and founder of XMOS Semiconductor, serving until February 2014 as the chief technology officer. May was lead architect for the transputer. As of 2017, he holds 56 patents, all in microprocessors and multi-processing.
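A quick back-of-the-envelope of what May's Law implies when combined with Moore's Law (toy Python; the idealised 18-month rates holding exactly is the assumption here): the hardware gain and the software bloat cancel out almost perfectly.

# May's Law vs. Moore's Law over a decade, at idealised 18-month rates.
periods = 10 * 12 / 18                   # 18-month periods in 10 years
hardware_speedup = 2.0 ** periods        # Moore: performance doubles
software_efficiency = 0.5 ** periods     # May: efficiency halves
net = hardware_speedup * software_efficiency
print(f"{hardware_speedup:.0f}x hardware * {software_efficiency:.4f}x software = {net:.1f}x net")
# ~102x hardware gain * ~0.0098x software efficiency -> 1.0x perceived speed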
I sort of feel bad for Intel. Literally everything is working against them right now. They had a monopoly for too long and got sloppy and now they're reaping what they sowed. They've lost both consumer and business support.
These videos are really higher quality than anything else on YouTube... I was just wondering whether you would find the idea of doing a podcast interesting? Personally, I would love to listen to it and I'm sure most people would too!
Thanks for the suggestion. I'm considering going full-time on YouTube, and if I do, that's one thing I've thought about doing. I think by the end of this year I might have enough Patreon support to focus on the channel exclusively, we'll see how things go!
I'm sure Intel saw the writing on the wall, and that's why they started aggressively hiring all of these brilliant minds and collecting all the experts they could in order to save themselves from crashing. I wonder if it will pay off and these brilliant minds will produce something significant that you're currently unaware of; some kind of ace up their sleeve in the works.
Having an all-star team does not guarantee a good outcome if the coach has no idea what they're doing and manages the talent poorly. This can be said about many companies, like EA for example: they have awesome tech and great developers, but their leaders push short-term monetization agendas that will end up costing them in the long run. You can only fool people so many times with flashy names and terms before they go elsewhere.
Unless they have very strong leadership, or a lead on some industry-changing technology that makes everything before it obsolete to focus those big names and big egos on, it is very hard to keep such a team on the same page. Which could explain the newly segmented roadmap from Intel. This could create more problems for Intel than it solves.
YouVSMeTV No, AMD just killed Intel with their new mobile CPUs. The new Ryzen 9 mobile CPU is about 25% faster than the i9 9980HS or whatever its name was. So basically:
Intel lost the CPU market.
Intel lost the mobile market.
Intel won't be good in the GPU market.
Intel lost the storage market with their overpriced Optane.
Intel is quite dead.
@@kimjeyong2474 Yeah, they did. They are also at the heart of the next console generation, are competing against Nvidia, and Intel still hasn't made a definitive comeback.
I drive by their Ronler Acres plant every day. They are doing construction, like HUGE billions-of-dollars expansion construction, so... I don't think they are "dead", provided Washington County continues to hand them any and all tax breaks and deferments, and provided they can continue to exploit the H-1B visa program to bring in intellectual labor on the cheap. Here ya go: www.oregonlive.com/silicon-forest/2019/02/intel-confirms-plans-for-huge-oregon-factory.html
07wrxtr1 They're dead insofar as their products are all tanking. Their entire line of processors is losing almost 50% of its performance capability due to the vulnerability mitigations, so an 8-core 4.4GHz Intel processor may as well be a 4-core 2GHz processor.
I see a problem in the future with AI research: by moving away from GPUs and CPUs we gain a performance benefit now, but all this investment in ASICs will be dumped as soon as a new algorithm comes along, like MLNs, which do not perform repeated matrix computations. Also, by investing so heavily in ML ASICs, the bigger companies like Google will stop exploring more interesting AI architectures and direct all of their focus onto neural networks.
And the shot before that showing the Chiang Kai-shek Memorial Hall. Serious mistakes considering the very specific and key role Taiwan plays in this industry.
Why use POWER when there is RISC-V? I'm sure it has its place, much as it is currently used, but I like the look of RISC-V much more. POWER is rather CISC-y and not significantly more attractive than x86 other than the licensing situation. It's great that it's gone open, but I can't help but think it's too little too late.
Expect Intel to revert to their vile, vicious, nefarious, diabolical, underhanded (and illegal) "restraint-of-trade" type deals to thwart AMD. I think it is reasonable to expect this to be a bit more difficult this time, but Intel has huge mountains of cash, and very few individuals or companies can restrain themselves from highly lucrative bad actions.
Very good software for creating chips already exists, I don't know if AI would improve on this. As mentioned in the video, the biggest improvements are in process nodes not the chip design itself.
Thanks for the video. Though I have to say I was surprised to hear you say that there is a growing gap between processor and memory performance. What I've seen over the last, say, 10 years is the exact opposite. Back then processors were already at around 3GHz - close to as fast, at least in terms of raw clock speed, as they would ever get. Whereas memory was running at 800MHz tops. What with the advent of DDR4 and memory speeds now in excess of 3GHz, that gap has been nearly eliminated. Also, since NVMe SSDs have plunged in price, they've mostly replaced rotational media, at least for non-archival storage, we've seen a significant narrowing of the gap there too. Which have been two areas of good news in the otherwise bleak semiconductor landscape.
I've long hoped that the declining pace of performance increases gives chip-makers the impetus to solve the difficulties of combining processing elements with RAM into Computational RAM (C-RAM), as research projects have shown tantalizing potential. A kind of proto-SoC combining CPU and RAM, called the Transputer, debuted in 1984; it was intended for parallel processing and was far ahead of its time. Fun fact: T2 was the first series in the Transputer line, and one of the last chips was the T-800.
What do Research in Motion, Nokia, Palm, and HTC have in common? When you become complacent, rest on your laurels and stop looking ahead of the competition, this is what happens. Intel has abused its dominance; oh, how the mighty have fallen.
As a programmer, I really don't know how they're going to automate single-core to multi-core threading. I can understand that maybe there'll be support for distributing loops over multiple cores, but there is a large overhead in transferring data between cores, especially on Windows. The best I can see happening is support for marking specific loops as 'split if possible', but this is already done with CUDA.
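A rough sketch of that 'split this loop if possible' idea (Python's concurrent.futures; the chunk sizes, worker count and the squaring loop body are arbitrary choices for illustration): the whole point of chunking is to amortise the per-core data-transfer overhead mentioned above.

from concurrent.futures import ProcessPoolExecutor

def work(chunk):
    # The loop body, applied to one contiguous chunk of the data.
    return [x * x for x in chunk]

def parallel_loop(data, workers=4):
    # One big chunk per worker: fewer, larger transfers between cores.
    n = len(data)
    chunks = [data[i * n // workers:(i + 1) * n // workers]
              for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = pool.map(work, chunks)
    return [y for part in results for y in part]

if __name__ == "__main__":   # guard required on Windows, where process start cost is highest
    print(parallel_loop(list(range(10))))  # [0, 1, 4, ..., 81]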
Lascall Othello Memory, 1968? Why do you need this weirdo? Architecture analysis? Why do you need this weirdo for that? What do you do here??? Running what????
As we reach the limits of scaling the *only* path forward is optimization and while returns diminish with optimization I don’t think there is a single useful algorithm known to be perfectly optimal. There will always be opportunity for acceleration and having these improvements baked into silicon will probably make IP easier to control.
A finite number of ways to achieve these computations can still be a very large number. There are also a finite number of ways to arrange letters in a story, yet people will continue to sell original literature for millennia to come. I am not convinced that "heavily researched for 50 years" means all algorithms are optimal and we are finished.
@@alexvornoffice Laptop buyers don't have viable choices yet. CPU wise, desperately waiting for Ryzens. GPU wise, Navi is said to be a nightmare, so I'm not holding my breath, though I hope they end up competitive with NVidia. For desktops, if it were my money, AMD Ryzens are the only rational choice.
You didn't even watch the video. There was no time for that. It's slaves like you (and the slow mammoths of datacenter) who keep Intel on life support.
You present a very compelling and informed argument. It's overdue; Intel has been milking their products and customers for decades, no tears will be shed from me.
Phrased a little differently, 20:14 to 20:20 felt like it would have been the ultimate cliffhanger for another video ;). I liked the way you wrapped up the former topic and then went into the next one. Is there some kind of theory for this style of writing that you were taught in school, or is this something you developed on your own? I don't do any content writing myself, but I'm generally curious, because this just grabbed my attention so well.
AMD's consistent innovation has been the driving force that's kept them alive since the 70s. Even when there was a slump in chip development in the mid 80s, they chose to try to develop more logic chips, and their company history is one of trying to innovate especially when times get rough for them. I hope this innovation business model keeps driving them forward. Their Ryzen processor shook up the market enough to fuck with Intel's dominance in the gaming market, and with Epyc they're pushing Intel out of the data center. And we've even got Ryzen mobile coming with integrated vega graphics that don't seem like they're too bad.
Where can I find the videos that we see in the B-roll? I would like to see short presentation videos explaining how wafers are made and prepared before the lithography process.
So for a video, you could do the ARM architecture: why it's so power efficient, and why it's used in laptops but not desktops. You could go over the difference between ARM MCUs and ARM MPUs, the way ARM handles multithreading vs x86 processors and why it's better for real-time processing, RISC vs CISC, pipelining, and ARM caching. I'm sure you'll do something on ARM in the future, just some things to include.
Is there some reason we can't have bigger dies? I mean, still the same node but... double the physical size, double the transistors? I realize heat might be the limit there, making custom watercooling loops mandatory, but it would seem to be the logical answer for one more push to get high-end desktops hitting high framerates at 4K in AAA titles. This is more a GPU issue than a CPU one, but I think the concept is applicable to both.
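Heat is one limit, but yield is a bigger one. A hedged sketch using the classic Poisson die-yield approximation (the 0.1 defects/cm² figure is an illustrative assumption, not vendor data): doubling die area shrinks the fraction of good dies faster than linearly, and very large dies also run into the lithography reticle limit.

import math

defect_density = 0.1   # defects per cm^2 (assumed for illustration)

# Poisson yield model: yield ~= exp(-defect_density * die_area)
for area_cm2 in (1.0, 2.0, 4.0):
    y = math.exp(-defect_density * area_cm2)
    print(f"die area {area_cm2:.0f} cm^2 -> yield {y:.1%}")
# 1 cm^2 -> ~90.5%, 2 cm^2 -> ~81.9%, 4 cm^2 -> ~67.0%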
“Intel will retain some level of dominance in the laptop space” – Renoir says “hi” from 2020. Great video, pretty spot-on even a year later. The information about memory, and how Intel refuses to push more aggressively in this segment, really is baffling.
AMD was almost dead a few years ago too. I don't think Intel will die. They will simply fall behind for a few years and that's very good for consumers. I was very tired of Intel's monopoly on the CPU market. Especially their pricing practices.
AMD was forced into that position by Intel's dubious business practices, like paying Dell not to use any AMD products in the 90s, to the tune of hundreds of millions per quarter. AdoredTV did a great vid on Intel's bad business practices; hell, even this year they tried pulling an "AMD can't supply the demand" bit at a tech meet. Shameless as ever.
The "look up table" style of computing has been known for a very long time and sort of comes and goes in cycles. A look up table based state machine can be extremely efficient for a given task. If the table(s) are in RAM then the same hardware can fairy easily be targeted to a new task by changing the table's contents. Moving into the 3rd dimension even in a limited way can really speed stuff up if you can get the heat out. Diamond and silicon-carbide are better at conducting the heat away.
Amazing videos, but I feel like the time at which it is uploaded is hindering the full view count potential. Of course, you have the analytics of where the viewers are located, however I just can't help but feel like your amazing work should be set free.
Neural network algorithms were developed between 1943 and 1946; the 1986 LeCun paper you are referring to is about using gradients to accelerate computation.
AMD Ryzen 3000 with a memory bandwidth of 51.2 GB/s, "wow": almost ten years later and they've caught up with the i7 3960x. Color me unimpressed; people usually overlook the bottleneck points.
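For reference, here's where a figure like 51.2 GB/s comes from (a back-of-the-envelope Python sketch; the channel configurations are the typical ones for each platform): dual-channel DDR4-3200 and the i7-3960X's quad-channel DDR3-1600 both land on exactly 51.2 GB/s, hence the comparison above.

# Peak theoretical DRAM bandwidth: transfers/s * bus bytes * channels.
def bandwidth_gbs(mt_per_s, channels, bus_bytes=8):
    return mt_per_s * bus_bytes * channels / 1000  # MT/s * bytes -> GB/s

print(bandwidth_gbs(3200, 2))   # 51.2 -- Ryzen 3000: dual-channel DDR4-3200
print(bandwidth_gbs(1600, 4))   # 51.2 -- i7-3960X: quad-channel DDR3-1600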
My speculation is that Intel will try to do what ARM was trying to pull off back in 1997-1999, only they'll use accelerators, as opposed to ARM using chipsets and CPU sockets.
Seems like flexible SIMD, DSP, and GPGPU units sitting as close to high-bandwidth memory as possible will be the goal. Something like HMC or HBM but cheaper, or massive directly addressable eDRAM (let the OS know the address range and go from there). That, or a major DRAM vendor developing a RAM process with integrated SoCs in mind (eDRAM in reverse, basically).