
The End of Big Silicon - Apple, Intel, Qualcomm 

Rene Ritchie
320K subscribers · 116K views

🔥 Visit audible.com/reneritchie to try Audible free for 30 days!
Both desktop and mobile silicon are hitting the limits of physics. Power draw keeps increasing while enclosures only get smaller. Process shrinks are becoming more expensive and time-consuming. And foundries are facing geopolitical and competitive concerns. So what can Apple, Qualcomm, Intel, and all the rest do to keep improving performance?
Ben Bajarin: / benbajarin
creativestrategies.com
🔔 SUBSCRIBE ru-vid.com?s...
🔗 LINKS
🗂 CHAPTERS
🚨 ETHICS & DISCLAIMER
All opinions are my own. This channel does not produce sponsored or paid reviews. Companies occasionally provide briefings or loan sample products to facilitate reviews but provide no payment and get no editorial input, content approval, or advance previews. They see them for the first time when you do.
Links may contain referrals for affiliate programs that provide this channel with a tiny commission should you make a purchase. They likewise receive zero editorial input or consideration
📝 CREDITS
📷 Some video and images via Getty Images and/or AP Archives
🎸 Some music via Epidemic

Science

Published: Dec 3, 2022

Comments: 167
@randocalrissian9217 · 1 year ago
Love your videos especially talks like this one. I do have a bit of feedback, please introduce your guest/segment instead of jumping directly to a question to an off-screen, unknown-to-the-viewer guest.
@georgewashington3012 · 1 year ago
Introductions bore me and waste my time. Name and title written on screen is sufficient but I didn’t see it here.
@ballintuim · 1 year ago
I'd agree with this. It's difficult to tell how much weight to put behind this guy's opinion when you don't know who he is. Great video though.
@shaunradford6154 · 1 year ago
Does seem like a weird choice interviewing someone and not identifying them.
@DarranGange · 1 year ago
You’re right, real strange.
@ariliquin · 1 year ago
Totally. Why should I be interested in what "this guy" has to say? I really need to know who I'm listening to and why. Take a look at Dr Ian Cutress and his channel; he does this really well. Also, marketing names are not physics. That's why thermal envelopes are impacted and seem to be a problem: if they had really delivered 3nm and angstrom-class nodes, thermals would be addressed. Density without a reduction in size and friction = heat increase.
@matthewstott3493 · 1 year ago
Intel / AMD were stalled by x86_64 and backwards compatibility. Apple merely scaled up its iPhone SoC designs, which are more ARM clones than actual ARM-licensed designs. Even if Intel / AMD considered ARM and did what Apple's done, it would produce non-compatible processors. Apple's secret weapon ever since it bought NeXT has been NeXTStep / OpenStep, which evolved into Mac OS X and the developer APIs. Developers, if they didn't do a lot of weird stuff in their code, were able to port from macOS to iOS/iPadOS on ARM64 with very little effort. In some cases, just checking a box and making some GUI changes. It doesn't work like that with Microsoft; it takes a lot more effort. Linux has always run on almost every CPU architecture out there, and there is already a Linux distro booting and running on Apple Silicon. Apple still has a big head start, and you have to understand Apple has internal Apple Silicon designs perhaps 5 years into the future in development, some of which are waiting for fabs to be ready. Because Apple controls the OS and the developer APIs, they can deliver radical change far more easily than anyone else. No doubt Intel, AMD, Qualcomm, etc. will be attempting to change rapidly. The next 5-10 years won't be boring, that is for sure.
@dinozaurpickupline4221 · 1 year ago
I love Apple & Linux
@sylviam6535 · 1 year ago
The Linux software ecosystem is mostly open source, so everything can be recompiled. Windows has to be able to run a large body of closed source x86 legacy code.
@Tech-geeky · 9 months ago
Intel won't... they can't do the same, for fear of breaking backward compatibility. So many more x86 apps break under Windows 11's ARM translation, and despite Microsoft's efforts, there aren't really any signs of improvement. Apple is way ahead with Rosetta 2. Besides, how do you solve the issue most manufacturers won't touch at all: drivers, and motherboard chipsets from third parties like Gigabyte? You'll break too much trying to make everyone work together. Apple can do it because they do it all themselves. True, you do have Surface tablets using ARM, but not in mass. PCs (x86) are still used in businesses, and you can say goodbye to running ARM versions of Sophos or ESXi in a VM on ARM chips. That's not gonna happen. Perhaps that may change, but it's gonna take a very long time, if it happens at all.
@utubekullanicisi · 1 year ago
The reason AMD has adopted the chiplet architecture for their CPUs and is starting to adopt it for their GPUs is that newer process nodes don't shrink the size of memory cells and analog I/O as much as logic cells anymore, while the cost for every mm² of the die stays the same regardless of how much of their area budget chip designers spend on logic vs. memory vs. analog. So AMD chooses to use flagship process nodes for the blocks that do the actual computation, like CPU/GPU compute units, AI accelerators, display engines, media engines, etc., and TSMC's more value-oriented 6nm node for memory/SRAM, where memory cell density is not too far from what it would be on 5nm but the price is much lower. It's not magic, however. Contrary to popular belief, your yield rate doesn't increase when you take a single flagship process node die and slice it up into 2 pieces: the price stays the same, and you now have to accommodate the die space that the interconnects connecting the two dies will occupy as well. The value of chiplets is being able to mix and match more expensive and cheaper process nodes for different blocks of the chip. Keep in mind, 5nm still *is* better for memory and analog, even if only slightly, and the slight difference is only in terms of transistor density; there are frequency uplifts as well, and stronger arguments to be made for the power reduction of newer nodes. That said, the power reductions are only relative to the power draw of memory cells on 6nm, not relative to the entire chip, where the compute units are responsible for 80-90% of the power draw.
The reason Apple also uses a chiplet architecture for the M1 Ultra is that they figured being able to sell exactly half of the M1 Ultra's hardware would be useful, and doing that not by binning down a die at the reticle limits of TSMC's 5nm node (a theoretical monolithic M1 Ultra, if that were even possible) but rather by making the M1 Ultra from two smaller dies and selling one of those dies alone in cheaper and smaller products would be more suitable and economical. And they were able to create another chip SKU from the M1 Max as well, the M1 Pro, which is just the M1 Max with the bottom half of its GPU portion lasered off. There are actually only 2 separate dies behind the whole M1 family: the M1 (codenamed Tonga) and the M1 Max (Jade-C Die). In short, companies with stronger market positions that can afford it, like Nvidia and Apple, could stick with traditional monolithic architectures for most of their lineup a while longer to get those diminishing performance and efficiency gains, using the chiplet architecture only to create multiple chip SKUs out of a single die or in cases where they're at TSMC's reticle size limits, while AMD settles for a little less performance in exchange for much higher performance/$.
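The node-mixing economics described in this comment can be sketched numerically. Everything below is a toy model: the per-mm² prices, the die areas, the 10% SRAM growth on the cheaper node, and the 15 mm² interconnect overhead are all invented round numbers for illustration, not real foundry data.

```python
# Toy model of chiplet node-mixing: pay the flagship-node price only for
# the logic that actually benefits from it; put SRAM/analog on a cheaper
# node where its density barely regresses. All figures are hypothetical.

COST_PER_MM2 = {"N5": 3.0, "N6": 1.5}  # made-up $/mm^2 for two nodes

logic_mm2 = 120.0  # compute blocks: shrink well on the flagship node
sram_mm2 = 80.0    # SRAM/analog: barely shrink on the flagship node

# Monolithic flagship-node chip: every mm^2 pays the N5 price.
monolithic_cost = (logic_mm2 + sram_mm2) * COST_PER_MM2["N5"]

# Chiplet split: logic stays on N5 (plus ~15 mm^2 of die-to-die
# interconnect overhead); SRAM moves to N6 and grows ~10% because
# N6 memory cells are only slightly less dense.
chiplet_cost = (logic_mm2 + 15.0) * COST_PER_MM2["N5"] \
             + (sram_mm2 * 1.10) * COST_PER_MM2["N6"]

print(f"monolithic: ${monolithic_cost:.0f}")  # $600
print(f"chiplet:    ${chiplet_cost:.0f}")     # $537
```

Even with the interconnect overhead and the slightly larger SRAM die, the mixed-node version comes out cheaper in this sketch, which is the trade-off the comment describes.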
@marcochavanne · 1 year ago
This guy chips
@Chalisque · 1 year ago
Not sure I get the yield argument. Roughly, there is a fixed probability of a defect per mm². Thus larger dies have a larger probability of a defect. If you have one 100mm² die, then a single defect kills all 100mm². If you have four 25mm² dies in the same space, and a single defect, you still have 3 working dies out of 4. Obviously this is oversimplified. Mixing and matching is the other advantage.
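The oversimplified argument in this comment is essentially the standard Poisson yield model, Y = exp(−A·D₀). A quick sketch with an illustrative defect density (the D₀ value here is an assumption, not a real fab number):

```python
import math

def poisson_yield(die_area_mm2: float, defects_per_mm2: float) -> float:
    """Fraction of good dies under the Poisson yield model Y = exp(-A * D0)."""
    return math.exp(-die_area_mm2 * defects_per_mm2)

D0 = 0.01  # illustrative defect density: 1 defect per 100 mm^2 on average

# One monolithic 100 mm^2 die: a single defect kills all 100 mm^2.
mono = poisson_yield(100, D0)

# A 25 mm^2 die covering a quarter of that area: a defect kills only
# one of the four dies, so each die is far more likely to be good.
chiplet = poisson_yield(25, D0)

print(f"monolithic yield:  {mono:.1%}")   # ~36.8%
print(f"per-chiplet yield: {chiplet:.1%}")  # ~77.9%
```

This matches the comment's point: splitting the same silicon area into smaller dies means each defect scraps far less area, even though (as the thread notes above) slicing one die in two does not change the cost per mm².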
@Loanshark753 · 5 months ago
Another advantage is that less space is wasted on incomplete edge dies.
@EnochGitongaKimathi · 1 year ago
Isn’t Apple already doing chip-lets with the M1 Ultra?
@mrcommaker · 1 year ago
Very interesting! Many things kinda block quick development in tech, but I guess every company finds its way through. That'll bring competition, and new products for us.
@melgross · 1 year ago
Ben did make one error here. Qualcomm still uses standard ARM cores. They tried their own a few years ago, but it didn’t work out.
@bbajarin · 1 year ago
I did point out they were custom, then semi-custom, and now going back to custom with Nuvia in 2023.
@tangierclarke8499 · 1 year ago
Hey René. Could you discuss at some point how the technology in IBM's light speed chip could fit (if at all) into this conversation?
@RealJoseph123 · 1 year ago
Ten minutes in, great conversation! We do miss your videos, Rene! 🙌
@jeffhale1189 · 1 year ago
I enjoyed the interview. Blessings on your day!
@drizzzzz · 1 year ago
Love this Rene, thank you!!
@ggioja · 1 year ago
Very much appreciated this video. Thanks!
@agexax13 · 1 year ago
Another Great Conversation!
@MTerrance · 1 year ago
Just saw you on Tech News (Twit). After 3 days it had 5.7K views. Your video has over 45k views in 3 days. How times have changed.
@SavageScientist · 1 year ago
Interesting conversation. I always wonder if there's a theoretical limit on how large a single die can be. Imagine a modern 4-nanometer chip the size of 10 486DX processors: big, yes, but it could still fit inside a modern gaming PC and would have more power than any modern computer. It would probably cost a lot to produce, but that's only a scaling issue, not development, as we've already reached 4nm capabilities.
@sylviam6535 · 1 year ago
I’m pretty sure dies will get larger as it becomes harder and harder to shrink the nodes.
@a.m.g.r7804 · 1 year ago
I'm reaching a point of limiting my CPU & GPU performance by undervolting/capping GHz so I can get more efficiency/stability. This lets me have previous-gen top performance while taking advantage of a new node to gain more efficiency; it works wonders on laptops. So I would say buying the latest to get the top performance of a previous gen at half the power is what motivates me to 'upgrade'.
@BackpackandGear · 1 year ago
Also, companies like IBM are doing research on new technologies like Optical Computing and are expanding their Quantum Computing. Some technologies go away and are replaced by other, faster and lower cost technologies!
@sylviam6535 · 1 year ago
They were already playing around with optical processors back in the 1990s, but they never became mainstream. Maybe they will when we get to the physical limits of the current model.
@JasonTaylor-po5xc · 1 year ago
At some point, it will be harder and harder to make chips faster; instead we need to make the rest of the system faster as well, so even if the CPU isn't 20% faster, the overall system will be. We are starting to see some of that with PCIe 5 and 6 and the move from SATA to PCIe-based NVMe drives.
@jank9525 · 1 year ago
The thing slowing down the system is the bloated OS.
@T.On.Things · 1 year ago
Thanks for sharing. However, do you think the tech has become so advanced that, in real life and among consumers, the power of an iPhone X/XS should be enough for what they regularly need now? We have not yet figured out whether there are real economic benefits coming out of these wonderful innovations, as far as regular consumers are concerned. iPad Pros continue to be too advanced for how they are being used now.
@mirrorlessthesis3599 · 1 year ago
Like the other Steve said, it's about software efficiency, not the hardware. That has been said for decades.
@leel0000 · 1 year ago
Love your video. I want to correct your guest: 2 nanometer is not made in China but in Taiwan. The Taiwanese and US governments will not allow TSMC to build their 2 nanometer fabs in China.
@bbajarin · 1 year ago
yes I meant Taiwan not China obviously. Sometimes my mouth gets ahead of my brain lol.
@jasonweaver3629 · 1 year ago
Ahh I miss your videos, I hope the new focus is going well :-)
@yes-vy6bn · 1 year ago
1:15 This increase in R&D spend is just 6%-10% inflation over 12 years (CPI is mostly fake because they change the products they calculate the inflation on each year, so when you calculate it using the old method, the last 10 years have been about 10% inflation per year).
@tipoomaster · 1 year ago
8:12 Junket?
@christerwiberg1 · 1 year ago
What will happen then? If Moore's law stops, or at least slows down, while compute tasks keep increasing fast, the energy needed will increase really fast. That might be quite problematic in the coming years.
@louistech112 · 1 year ago
It will hit a physical limit after 1 nanometer.
@christerwiberg1 · 1 year ago
@@louistech112 Since the diameter of a silicon atom is around 0.21 nm, I assume you are quite right: 5-atom-wide lines. Not much left to improve other than software and IPC gains; those will improve, but not as fast as previously.
@WaterlooCranium · 1 year ago
Keep the great content coming Rene! So fantastic to hear from Ben again. His TechPinions site and podcast were stellar!
@jeanchindeko5477 · 1 year ago
Is the economic issue here really about production cost, or is it more about how much profit shareholders want out of those companies, at the eventual cost of restraining innovation? Companies understood that as soon as you say "economic" or "economy", a large fraction of the population will (wrongly) accept it and not question much why they're doing or not doing what they do: they take us all for fools!
@edgarsill8157 · 1 year ago
I really liked what they talked about, but I have a question: why has nobody talked about the fact that Apple has been stagnant for 2 years in its high-performance core designs? In 2020, when Apple released the A14 Bionic chip, it introduced the Firestorm cores, which reached a frequency of up to 3.0 GHz. The following year, it introduced the Apple A15 Bionic with the Avalanche cores, which reached up to 3.24 GHz, and this year Apple presented the A16 Bionic chip with the Everest cores at up to 3.46 GHz. If, after 2 years, you subtract out the frequency increase between those processors to see the IPC gain they actually achieved, you get a very small improvement, probably down to the L2 cache that Apple has doubled in the high-performance cores from A14 to A16, while in those same 2 years ARM has achieved an IPC increase of more than 20% from a Cortex-X1 core to a Cortex-X3 core.
@Sinier940n · 1 year ago
Well, Apple has been on the same node for 3 years.
@edgarsill8157 · 1 year ago
@@Sinier940n It has been 3 years on the same node, but it's the first time there have been no changes in the high-performance cores. If you don't remember, from the A12 chip to the A13 chip they used the same 7nm node and were still able to increase IPC by more than 11%; now it's been 3 years of chips with no changes to the performance cores.
@jasonmajere2165 · 1 year ago
x86 is a very different beast than ARM. x86 has legacy that holds it back: code that is never used but needs to be there.
@Sinier940n · 1 year ago
@@jasonmajere2165 True. But when you talk to the actual engineers, they say that all that legacy stuff is like 5% of the core die space and doesn't really have any effect on the overall efficiency.
@prashanthb6521 · 1 year ago
The clock speed is only a small part of the equation. There have been improvements in the architecture that produce more throughput compared to the old one. So the fact that they don't have to clock higher is a plus point instead of a drawback. Another metric is the amount of work done per watt, where Apple has smashed through the roof!!!
@thomasnewton8502 · 1 year ago
Thanks!
@iamjvmac · 1 year ago
I think they should focus next on bringing down the prices of smartphones. They're becoming more and more ludicrous.
@platin2148 · 1 year ago
Him saying building a 2nm fab in China is easy seems a sign of not having much of a clue about the current situation, or did he mean the ROC, i.e. Taiwan?
@bbajarin · 1 year ago
I do a lot of work with TSMC and the supply chain, so I'm well aware of the location. My mouth got in front of my brain lol
@KingLarbear · 1 year ago
@@bbajarin it happens
@francisdelacruz6439 · 1 year ago
Note AMD and Intel are laggards in performance per watt vs Apple. How can a specialist get this so wrong? Unfortunately it's the software that's lagging and is the driver of demand.
@JimONeil · 1 year ago
Rene, why the lack of an introduction? I found it hard to get into this conversation because it feels like I missed the beginning of it.
@MrAtthedrivein925 · 1 year ago
I see you with that WAN show hoodie, respect!
@KingLarbear · 1 year ago
I didn't even notice that. If you're correct, that's dope that he didn't use this as a sponsor spot. But if you know, then you know, you know lol
@MrPipsqueak-oq9kw · 1 year ago
The funny thing is huawei is leading the way on the “stacking” technology…
@PWingert1966 · 1 year ago
A real question is how much speed you require to do what you want to do. Most iPhones rarely, if ever, hit their maximum performance, even playing games.
@h4x0y · 1 year ago
You don't need all the power of your new devices, but you need a performance overhead if you want to future proof them for a few years.
@westsidehvac1097 · 1 year ago
It's not only about speed but efficiency.
@PWingert1966 · 1 year ago
@@westsidehvac1097 Efficiency is such a broad term. At what exactly does it need to be efficient? Heat dissipation and power usage I can understand, but each is balanced on the three-legged stool by performance. To get a true understanding of the fit of the device to your needs, you must define and characterize each of those parameters based on intended usage.
@sylviam6535 · 1 year ago
PCs are in the same situation when used for general purpose/office work.
@gregghayes6710 · 1 year ago
Rene, I am a big fan of your videos. However, this time I must admit I had a hard time absorbing it, and it wasn't because the topic was uninteresting, or too confusing, or that you or your interviewee was not compelling. I found myself distracted throughout the video wondering "who the heck is this guy that Rene is interviewing?" I was so distracted that I don't think I got as much out of it as I normally would, because my brain was thinking "did I miss the intro?" "Is this guy so big that I should just know who he is?" FYI, I am blind and listen to your videos. Was there a graphic that told us who this guy is? Perhaps a short verbal intro would have been appreciated. BTW, scanning through some other comments, I think the sighted people in your audience had some of the same questions that I did.
@ye849 · 1 year ago
A. You're talking about proprietary technology AMD has been developing and designing its chips around for years. Even if other companies adopt that approach, it will still take a gen or 2 to catch up. B. It doesn't really make a significant difference in processing; NPU/GPU are both matrix multipliers. Only one thing greatly benefits from it, which is memory, as it doesn't scale well. C. Every company, including Intel, has been foreseeing this for a while. Intel, for instance, just released its roadmap for maintaining improvement through architecture and transistor tech. They also have their own tech for something they call tiles.
@palamidagheo4520 · 1 year ago
Or you can sell the same hardware under a different name and slow it down over time via closed-source software.
@shebanadam7569 · 1 year ago
Why don't we just refine our current coding structure to make applications and operating systems leaner and super efficient, so they "fly" even with a 10nm chip? I mean, I don't see the need for smaller and smaller nm numbers if we can still get the same performance from older chips with some ingenious coding tweaks. I remember the days Windows XP could fit on a 700MB CD, and now it's like a 4GB mammoth with endless services that just don't really benefit us that much. Back in the early 2010s, everyday smartphone applications never crossed 20MB, and now... we're talking GBs! I think the conversation should work in three directions: 1) Coding techniques, 2) Efficiency per nm, and 3) Battery technology.
@dinozaurpickupline4221 · 1 year ago
Well said. Clean, lean code & running fewer services could be the answer we need.
@shebanadam7569 · 1 year ago
@@dinozaurpickupline4221 absolutely. Even these older smartphone chips pack a punch!
@dinozaurpickupline4221 · 1 year ago
Yes, I'm rockin' an older Realme 7 Pro. I don't know when Chrome will force me to use a 12GB RAM phone.
@jank9525 · 1 year ago
Blame the OS itself.
@sylviam6535 · 1 year ago
I downloaded the Adobe PDF Reader years ago and it was 60GB. I expect that it would be close to 80GB now, if not more. There is no way a PDF reader should be that size. Software bloat is real.
@AJB2K3 · 1 year ago
The limit for processors was hit years ago when they couldn't break the 4GHz limit for a single-core x86 processor. Since then, makers have had to move sideways in development by adding more cores, but the issue is still present. They have yet to make a 4GHz+ processor core. Edit: wow, I feel old, was it really 2008?
@a.m.g.r7804 · 1 year ago
Interesting, can you explain how AMD is hitting 5.8 GHz on the 7950X and Intel is just releasing a 6 GHz 13900KS? Genuine question btw. Are you saying they are using tricks?
@AJB2K3 · 1 year ago
@@a.m.g.r7804 This was 10 (ish) years ago, due to the size limitation of the technology, but now that they are into sub-10 nm processes it looks like they are breaking the barrier again. As stated in the video, though, they are struggling with heat dissipation and silicon tunnelling issues, resulting in the processors throttling to under 4GHz.
@diablosv36 · 1 year ago
@@AJB2K3 I don't understand what you are saying. 4GHz was exceeded about 12 years ago, at least on desktop, and there wasn't any issue with it throttling. Certainly more recently, with 5GHz+, things are getting more difficult to cool as power usage goes up.
@user-pq4by2rq9y · 1 year ago
I will be frank: I think we are about to hit a wall in consumer demand for performance. We are about to have processors that perform like a 5950X in laptops, and 3080/90 graphics performance on screens so tiny that you can barely distinguish 4K from 1440p. And I think we already reached that point with cellphones. I mean, how many people now buy cellphones because their previous one was too weak? On desktop, the best sellers right now, as far as I am aware, are Ryzen Zen 3, with the 5800X3D being particularly praised despite both Intel and AMD just bringing their new CPUs to market. So it makes me think: will your average consumer really care about anything below 2nm? I mean, is improved battery life enough of a selling point? Perhaps I am wrong and there will always be a demand for performance from the general public, but when I look at my phone... I only wish for something that would run just a little bit cooler. That's not something that will make me spend hundreds of dollars on a new device.
@sylviam6535 · 1 year ago
When AM5 motherboards and DDR5 RAM drop in price, people will move to Zen 4, but I agree with what you're saying. BTW, I understand that 1 nm is where the physical size of atoms starts to create real problems.
@nnn-pr3vr · 1 year ago
People want to run VR headsets off their phone, with a little high-resolution, high-refresh-rate screen for each eye, while getting decent battery life. There's a long way to go.
@KingLarbear · 1 year ago
I think the largest issues are heat dissipation and cooling; solving those would take computers to the next level.
@Tech-geeky · 9 months ago
An expensive paperweight is all you'll have, as there is no point in battery/performance if the apps and plug-ins are not there. To me, apps always dictate what device can be used. If the "right" apps are not there, it's not gonna sell. Which is why Rosetta 2 is still in the OS today and hasn't been pulled yet. Cores and all make a huge difference, sure, but native app development comes first.
@WhittyPics · 1 year ago
I remember in the 90s processor speeds doubled every 18 months. Once they hit 1 GHz, the gains seem to have slowed. You get to a point where it is harder and harder to make more gains. Once they hit a wall on processor speed, they started adding more cores.
@MightyRob1 · 1 year ago
Was having this same conversation with my kids about how it was speed-speed-speed, then that leveled off, and then it became cores-cores-cores, and now that's starting to level off. I think processors are so powerful now that they are rarely pushed to their maximum limit; I think the next 'big thing' is going to have to be software optimization for increased performance.
@JoeCastellon · 1 year ago
However, GHz in the 90s aren't the same as the ones we have now. A 3 GHz chip from the 90s would PROBABLY have been the equivalent of today's 500 MHz.
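The point this thread is circling can be made concrete: single-thread performance is roughly IPC × frequency, so equal clocks do not mean equal speed. The IPC figures below are invented purely for illustration, not measurements of any real core.

```python
# Toy comparison: performance ~ IPC * frequency, so a clock number
# alone says little. IPC values here are hypothetical round numbers.

def perf(ipc: float, ghz: float) -> float:
    """Relative throughput in billions of instructions per second."""
    return ipc * ghz

old_core = perf(ipc=1.0, ghz=1.0)  # hypothetical narrow 90s-style core
new_core = perf(ipc=5.0, ghz=1.0)  # hypothetical modern wide core

print(new_core / old_core)  # 5.0: same clock, 5x the work per cycle
```

Under these made-up numbers, the modern core at 1 GHz matches the old design at 5 GHz, which is why the later shift from "speed-speed-speed" to wider cores and more cores still delivered real gains.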
@dinozaurpickupline4221 · 1 year ago
People are looking at that: shifting loads on the server side to quantum-based stuff, making OSes that account for quantum states, and using AI seems to me like the next step. People need to collaborate more; instead of discussing these things on the internet, techathons should be launched around the globe, discussing these issues in the mainstream.
@KingLarbear · 1 year ago
And that's where you measure IOPS and FPS and run code like mining and other tests to see what it does. But my phone has like 12GB of RAM, and unless usage is under 45% it starts acting buggy.
@sylviam6535 · 1 year ago
@@MightyRob1 - The next thing will be accelerating the APIs in the CPU, but that may make the CPU wedded to an OS unless some generalized API for OSs is created, which I don’t see happening other than maybe for open source OSs.
@brothatwasepic · 1 year ago
We need dilithium crystals, captain.
@biomorphic · 1 year ago
The atomic radius of silicon is about 0.11 nanometers, so we are reaching the limit of this element. Something will have to replace it eventually, or we'll have to replace the transistor with something else.
@Chalisque · 1 year ago
More efficient software. If we could only use machine instructions as sparingly as we did in the 70s and 80s, combined with modern CPU architecture and algorithms. Offloading as much as possible from the main CPUs, like old-school mainframes did. There _has_ to be a limit, and likely we're already close to it. Making better use of the available process nodes, in terms of microarchitecture. The ever increasing CPU speeds we've seen have made software development lazy so far as efficiency is concerned.
@gregpeterson3144 · 9 months ago
If CPU advancement stalls completely, it will be fine by me. Why do we actually need more CPU power? People walk around with powerful computers in their pockets and do what with them? lol. Even in science & engineering, CPU power is the least concern... so indeed, even if CPUs stop evolving, nothing bad would happen.
@greensguy1466 · 1 year ago
nice
@oxoqunb5741 · 1 year ago
should a 'D' be present in the 'AND' here?
@colinstock325 · 1 year ago
And the point of buying a faster computer than the one you bought last year is?
@shashank1630 · 1 year ago
I don’t even know where to start with the answer….
@javastream5015 · 1 year ago
He forgot inflation in his argument about R&D.
@KingLarbear · 1 year ago
Where is RISC-V when you need it the most?
@anthonyr1479 · 1 year ago
Wow, what a cool video
@dr.mikeybee · 1 year ago
The big shift is the move to AI-designed systems. NVIDIA has more AI-designed components on new GPUs than human-designed components. Chiplets, SOCs, etc. will be chosen by optimizing algorithms, not design teams. Moreover, AI design will democratize computer development.
@IanHobday · 1 year ago
I enjoyed this but didn't appreciate the way the guest repeatedly conflated Taiwan and China. There are no leading edge chips being fabbed in China, only in Taiwan.
@platin2148 · 1 year ago
The software needs to do the changing now. It's been years since anything major was done for better performance and less memory and CPU/GPU usage. In hardware we can only scale by putting in tons of accelerators, which is still no fix for jajf and the others.
@prpramod · 1 year ago
Richie Ajja , namaskar
@Mentaculus42 · 1 year ago
Why does the guest say build a 2-nanometer fab in "China"?! Does he know something about China invading Taiwan [Republic of China (ROC)]? The higher-density fabs are NOT built in mainland China, and that is not by mistake. The United States has expended substantial effort to make it illegal to build the newest-technology fabs on the mainland. This isn't just a little detail, as mainland China getting control of Taiwan would have huge impacts on chip production, much more than what interconnection technology will be used. Besides, the gating technology for higher-density chips is controlled by ASML Holding N.V. (commonly shortened to ASML, originally standing for "Advanced Semiconductor Materials Lithography"), a Dutch multinational corporation founded in 1984 that specializes in developing and manufacturing the photolithography machines used to produce the highest-density computer chips. The other part of the equation is financial resources and execution in applying those resources (and Intel seems to have continuous problems with the execution part). The guest even gets corrected about Taiwan but still doubles down on saying China. Very strange.
@jSyndeoMusic · 1 year ago
Yeah, that bit is mad sketchy…
@havencat9337 · 1 year ago
because they are one ;)
@mantapdjiwa9768 · 1 year ago
If you are not aware, Taiwan is a province of China.
@Mentaculus42 · 1 year ago
@@mantapdjiwa9768 Yeah, of the Republic of China! It is unlikely that the ROC will regain control of the mainland, but to be a hand puppet of Winnie-the-Pooh's hegemony is a sad joke. It was to the US's benefit to maintain a fiction of "ambiguity" to foster non-adversarial relations with the mainland in the past, but times are a-changing. The big question is whether the US is willing to go to war to curb the ambitions of Winnie-the-Pooh!?! Irrespective of the old fiction of "one country, two governments", Hong Kong is the playbook for Taiwan if the people there want the fiction to end. Lately the ROC population has been scared significantly by the mainland's intimidation, so maybe they will roll over belly-up and be pounded into submission like Hong Kong, or maybe NOT!
@Tech-geeky · 9 months ago
Goodbye, Moore's Law. 👋
@zxttgg · 1 year ago
I think battery technology needs to improve. Apple hardware design is not friendly to DIY users. Back in the '80s everyone could write code; now you need to pay 99 dollars to use Xcode.
@winstonsmith935 · 1 year ago
Future power requirements won't be able to support future Intel chips, which are power-hungry heat generators.
@Jemmartin · 1 year ago
It’s like the three-body problem is happening.
@Sinthasized · 1 year ago
At a consumer level, how much more powerful do our machines need to be? I bought an M1P MBP and it's going to last well over 5 years before I even need to think about MAYBE upgrading it. I get that people like you, Rene, need to process massive files and workloads to run video channels, but the average consumer is reaching a point where anything they buy (smartphone, laptop, tablet, etc.) is probably much more powerful than they need. Even for you, I can only see maybe a few years before it's all capable of handling your workload with no wait, or as little as possible.
@DistrosProjects · 1 year ago
Exactly! 10-year-old computers that were $1000+ when new (probably with dual-core CPUs, 4 or 8GB of RAM, and 500GB SATA hard drives) are totally usable for most people's use cases if you install an SSD and Linux, or even just a fresh copy of Windows 10. 10 years ago, you could most definitely not use a computer from 2002 (which would probably have a single-core CPU, 256MB of RAM, and a 40GB IDE HDD) for modern tasks; it would be too slow. You could likely keep using that machine for at least 10 years, unless Apple decides to slow it down significantly in a software update (and even then, you could use Asahi Linux instead).
@melgross · 1 year ago
I read this every so often. You can look back in time and see people asking the same question, going back to the transition from 8-bit to 16-bit, 16-bit to 32-bit, and 32-bit to 64-bit. We were told back then that the new 1GHz Pentium was as fast as we would ever need. Even further back, when the 286 came out, that it would do whatever a business would ever need. They were wrong then, and thinking that now is also wrong.
@racgordon · 1 year ago
For perspective: I am typing this walking around my kitchen on an iPad as I listen to streamed music, contemplating the fact that 10 years ago I would have been able to do the same, but standing at a counter on a MacBook that cost maybe twice as much in real terms. 20 years ago I would have been sitting, listening to music played from a CD through speakers fighting with the cooling fan of a Windows XP laptop, and 30 years ago I would have been listening to the radio, typing on a 386 desktop using AOL dial-up. 40 years ago I might have been using WordPerfect, typing a letter, quietly humming to myself (careful not to disturb someone in the next cubicle at work) while I looked for an envelope and a stamp. OK, wait, even my handwriting looks better than a dot-matrix printer...
@shashank1630 · 1 year ago
@racgordon Except your soundtrack was recommended to you by a massive amount of cloud computation.
@tenminutetokyo2643 · 1 year ago
They have been saying this for 40 years.
@nycbike73 · 1 year ago
That's not true; it will matter for watches and smartphones as well. Everyone wants a phone and watch that runs faster and uses less battery power.
@imoddi · 1 year ago
I hope MacBook Pro computers soon come with eSIM support, so people who travel a lot can use a MacBook Pro without connecting it to an iPhone when we are in places without WiFi!
@sylviam6535 · 1 year ago
Great, more tracking!
@dune2024 · 1 year ago
Power draw does not keep increasing; in fact, it decreases the smaller the process node gets.
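A back-of-envelope sketch of why both views can hold: dynamic CMOS power follows roughly P = α·C·V²·f, so power per transistor drops with each shrink, yet total chip draw can still rise as transistor counts grow. Every number below is an illustrative assumption, not measured silicon data.

```python
# Rough sketch of dynamic CMOS switching power: P = alpha * C * V^2 * f.
# All values are illustrative assumptions, not measurements of any real chip.

def dynamic_power(c_farads: float, v_volts: float, f_hz: float, alpha: float = 0.2) -> float:
    """Dynamic switching power of a CMOS circuit with activity factor alpha."""
    return alpha * c_farads * v_volts**2 * f_hz

# Hypothetical "old node" vs "new node": capacitance and voltage shrink a little.
old_node = dynamic_power(c_farads=1e-9, v_volts=1.2, f_hz=3e9)
new_node = dynamic_power(c_farads=0.7e-9, v_volts=1.1, f_hz=3e9)

print(new_node < old_node)       # True: each transistor draws less on the new node
print(2 * new_node > old_node)   # True: but ~2x the transistors can raise total draw
```

With voltage scaling largely stalled near 1 V, the V² term no longer falls much with each shrink, which is why total package power can keep creeping up even as per-transistor power falls.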
@winstonsmith935 · 1 year ago
Apple gives you a 10% incremental increase and raises the price by 25%. How much faster do you really need to type a letter, and how many dedicated video editors are there? Engineering 3D models seems to work perfectly at present; how much faster does an artist need to paint? I still see lots of people hanging out in the coffee room, so what's their hurry for faster computers? The human brain can't work as fast as the present fastest chips, so exactly what is all the hype? IT'S ALL IN THE MIND: 95% have computers, iPads, and iPhones that are faster than they can think.
@KingLarbear · 1 year ago
I agree with you, but I think it comes down to playing games, running videos, maybe virtual and augmented reality, or multi-tasking. I might want to watch a video while looking something up while encoding a video, so we can still use some gains. Maybe by the days of DDR7+ we won't need that much, but DDR5 is definitely needed because DDR4 isn't cutting it.
@sylviam6535 · 1 year ago
Office computers were fast enough 10 years ago. Rendering keeps pushing the limit. It all depends on what you are doing.
@93hothead · 1 year ago
Stop making it smaller for now, and instead increase the efficiency of the materials used.
@hescominsoon · 1 year ago
Is anyone looking at RISC-V? Right now ARM is about to rule the world... we had an x86 monoculture and are now facing an ARM monoculture.
@beloved_child · 1 year ago
Second... Also, I think it's going to transition into offloading with cloud computing: a small, highly efficient chip that can communicate instructions at a high rate and low latency with processors in the cloud. With advancements in wireless connectivity, wireless links are eventually going to be fast enough to handle it...
@goobfilmcast4239 · 1 year ago
Agreed. The future of consumer-level "computing" is mobile. Powerful, pocketable phone-sized devices will be designed to communicate with external devices wirelessly. Like current Bluetooth devices, future peripherals (think big monitors, along with glasses and other wearables) will "link" with your mobile device, blurring and then eliminating the line between pocketable devices, tablets, laptops, and desktops.
@jank9525 · 1 year ago
No, it's not cheap
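The cloud-offloading idea in this thread ultimately comes down to a latency budget. Here is a rough sketch of that budget; every number is an illustrative assumption, not a measurement:

```python
# Back-of-envelope latency budget for offloading interactive work to the cloud.
# All numbers below are illustrative assumptions.

FRAME_BUDGET_MS = 1000 / 60   # ~16.7 ms per frame for a 60 fps experience

network_rtt_ms = 8            # optimistic 5G / edge-server round trip
encode_decode_ms = 4          # compress the request, decompress the result
server_compute_ms = 3         # time the remote processor actually works

total_ms = network_rtt_ms + encode_decode_ms + server_compute_ms
print(f"{total_ms:.1f} ms used of a {FRAME_BUDGET_MS:.1f} ms budget")
print(total_ms <= FRAME_BUDGET_MS)  # fits, but only under near-ideal conditions
```

Under these optimistic numbers the budget barely closes; double the round trip (cell handoff, congestion) and it fails, which is one reason thin-client computing keeps being predicted and keeps not quite arriving.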
@genxster · 1 year ago
I really feel bad when you ignore AMD and MediaTek
@Phoenix-King-ozai · 1 year ago
Not the end; the beginning for Qualcomm. The Snapdragon 8 Gen 2 is already on par with the A16 in everything but single-core CPU performance, and it actually beats it comfortably in GPU and RAM performance. Is it the end for Apple silicon? Apple can advertise its single-core performance all it wants by humiliating Qualcomm and equating it with the A13; the fact is, the A16 is hardly 20% faster than the A13, and less than that ahead of the 8 Gen 2. The difference between the A12 and the Snapdragon 845-855 was far bigger than it is now. The 8 Gen 2 is far faster than chips 3 years older, and Qualcomm is showing no signs of stopping. Maybe Apple is reaping the results of its chip team's talent loss. But I hope not.
@earnistse4899 · 1 year ago
I think desktop should move to the ARM architecture or something similar. x86 is starting to stagnate in terms of efficiency/performance gains with new process nodes as well.
@DerrickBest · 1 year ago
ARM architecture has reached its limitations as well. Haven't the latest Snapdragons stagnated at 3GHz? The last two flagship chips had cooling issues.
@earnistse4899 · 1 year ago
@DerrickBest ARM hasn't reached its limit. 3GHz is the clock speed because it's a mobile device, and raising clocks would increase power consumption. The new Snapdragon is significantly faster than the previous one while keeping similar clocks, though.
@BooleanCorporation · 1 year ago
Care to remember that ARM is not solely Snapdragon. Apple's chips are based on ARM and include hardware support to better accommodate emulated legacy x64 code, but once the need for that emulation is dropped, the chips will be even faster for native code. Also, MediaTek is making some incredible advances in performance per watt; if you check Dimensity 9100 benchmarks you'll see important improvements. So the shift from raw power to dedicated silicon is key to the future of computing: instead of one-size-fits-all, you get software-backed hardware that can expand where raw power finds a limit.
@cedricdellafaille1361 · 1 year ago
@earnistse4899 But they did stall. If they were getting faster chips they could lower consumption a bit and increase clock speed.
@winstonsmith935 · 1 year ago
Apple is playing the hype and profit game.
@United_Wings · 1 year ago
Quantum computing 🤔🤔
@havencat9337 · 1 year ago
RISC-V
@bluecalix · 1 year ago
Apple is already doing this.
@deusmediaworks515 · 1 year ago
Furst
@gregsLyrics · 1 year ago
And then IBM creates the photonic CPU: millions of times faster than the fastest imagined silicon, millions of times less power.
@cutejapangirl1117 · 1 year ago
Yeah, photonic... like the quantum computing I heard about a decade ago. Why isn't it achieving mainstream success? And LiFi, WiFi using light, why hasn't it become mainstream?
@dinozaurpickupline4221 · 1 year ago
@cutejapangirl1117 lol, you sir ask the darndest questions
@catchnkill · 1 year ago
It is different. You cannot run current software on it. It may be able to speed up certain very specific computing tasks, but it will not be the notebook computer chip that you carry every day.
@KingLarbear · 1 year ago
@dinozaurpickupline4221 lol, the name said Girl and you said Sir lol
@dinozaurpickupline4221 · 1 year ago
@KingLarbear How am I supposed to believe she's a girl? Lots of guys running around pretending to be girls
@dison513 · 1 year ago
Solve global warming with a Pentium
@peacekeeper7968 · 1 year ago
What a blowhard talking nonsense; the poor guy doesn't even know what he's talking about... How bored rich blowhards must get, hahahaha
@lamasteve6905 · 1 year ago
Is his sound poor?
@KingLarbear · 1 year ago
I don't think so. He doesn't seem to have a high-end microphone, but any modern phone that isn't budget could help this out.
@maxcolvin9209 · 1 year ago
Do you think Apple will ever put an M1 chip in an iPhone?
@dimitrimoonlight · 1 year ago
Obviously the future of chips is ARM-based.
@jank9525 · 1 year ago
It's not ready yet; everything is still a mess