
Is Moore's Law Finally Dead? 

Sabine Hossenfelder
1.3M subscribers
467K views

🌏 Get our exclusive NordVPN deal here ➼ NordVPN.com/sabine It’s risk-free with Nord’s 30-day money-back guarantee! ✌
This video comes with a quiz! Check how much you understood: quizwithit.com/start_thequiz/...
Correction to what I say at 04:07 That should have been ebeam screening, not ebeam lithography. Sorry about that.
In the past 10 years or so, tech specialists have repeatedly voiced concerns that the progress of computing power will soon hit the wall. Miniaturisation has physical limits, and then what? Have we reached these limits? Is Moore’s law dead? That’s what we’ll talk about today.
💌 Support us on Donatebox ➜ donorbox.org/swtg
🤓 Transcripts and written news on Substack ➜ sciencewtg.substack.com/
👉 Transcript with links to references on Patreon ➜ / sabine
📩 Sign up for my weekly science newsletter. It's free! ➜ sabinehossenfelder.com/newsle...
👂 Now also on Spotify ➜ open.spotify.com/show/0MkNfXl...
🔗 Join this channel to get access to perks ➜
/ @sabinehossenfelder
🖼️ On instagram ➜ / sciencewtg
00:00 Intro
00:53 Moore’s Law And Its Demise
06:23 Current Strategies
13:14 New Materials
15:50 New Hardware
18:58 Summary
19:31 Special Offer for NordVPN
#science #technology #mooreslaw

Science

Published: 17 May 2024

Comments: 1.9K
@SabineHossenfelder 7 months ago
This video comes with a quiz to help you better remember its content! quizwithit.com/start_thequiz/1694145758807x361000584255219800
@funtechu 9 months ago
A few comments from a chip designer.
1) Regarding the transistor size limit, we are pretty close to the absolute physical limit. Although the minimum gate length equivalent figure (the X nm name used to label the process node) only refers to one dimension (and even that's not quite that simple), we are talking dimensions now in the high single digits, or low double digits, of atoms.
2) Regarding electron tunneling, this is already quite common in all the current modern process nodes. It shows up as a consistent amount of background leakage current; as long as it's balanced (which it typically is) it doesn't cause logical errors per se. However, it does increase the amount of energy that is simply turned into heat instead of performing any useful processing, so it slightly cuts into the power savings of going to a smaller node.
3) One of the biggest things impacting Moore's law in the transistors-per-chip interpretation is manufacturing precision, crystal defects, and other manufacturing defects. Silicon wafers (and others as well) have random defects on the surface. Typically, when a design is arrayed up on the surface, a handful of the chips will not turn on during later wafer test due to these defects, and are discarded. The ratio of good chips to total chips is referred to as the wafer yield. As long as the chips are small, a single defect may only impact yield a little, because the wafer holds hundreds or thousands of possible chips and there are only a few hundred defects that could kill a chip. But as chips get larger, yield tends to go down because there are fewer chips per wafer. There are techniques like being able to turn off part of a chip (this is how you got those 3-core AMD chips, for example), but ultimately as chips get larger the yield goes down, and thus they get more expensive to manufacture.
4) As discussed in this video, what people really care about isn't transistor density, or even transistors per package. Rather, they care about computing performance for common tasks, and in particular for tasks that take a long time. By creating custom ASIC parts for new tasks that are compute intensive (ML cores, dedicated stream processors, etc), the performance can be increased so that the equivalent compute capability is many times better. This is one of the areas of improvement that has helped a lot with performance, indeed even outpacing process improvement. For example, dedicated SIMD cores, GPUs with lots of parallel stream processors, voice and audio codec coprocessors, and so on. Anyway, overall a great video on the topic as always!
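The yield argument in point 3 is often captured by the classic Poisson yield model, Y = exp(-A·D). A minimal sketch (the die areas and defect density below are made-up illustrative numbers, not figures from the comment):

```python
import math

def poisson_yield(die_area_cm2: float, defect_density_per_cm2: float) -> float:
    """Expected fraction of defect-free dice under the Poisson yield model."""
    return math.exp(-die_area_cm2 * defect_density_per_cm2)

# Same defect density, bigger die -> sharply lower yield:
print(poisson_yield(0.5, 0.1))  # ≈ 0.951 (small die)
print(poisson_yield(6.0, 0.1))  # ≈ 0.549 (large die)
```

Because the exponent scales with area, large dice cost disproportionately more, which is exactly the pressure behind chiplets and partial-disable salvage parts.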
@dr.zoidberg8666 9 months ago
Surely these improvements in chip architecture (idk if that's the right way to put it) also have their limits. I know it's not very planned-obsolescence of me, but what I'd really like is for manufacturers (& programmers) to focus on increasing the longevity of computers. It'd be nice to only need to buy a phone once every 10 or 20 years instead of once every 2 or 3 years.
@Pukkeh 9 months ago
Process nodes like "X nm" don't refer to an actual physical channel length, gate pitch, or any other transistor dimension anymore. Unfortunately this video doesn't help clear up that common misconception. They are merely marketing labels loosely correlating with performance or transistor density. In particular, channel length (i.e. transistor size) shrinking essentially halted a while ago at ~20 nm, so a 5 nm transistor isn't nearly as small as "5 nm" would indicate.
@funtechu 9 months ago
@@Pukkeh Yes, that's why I used the term "minimum gate length equivalent" instead of saying minimum gate length. The actual minimum feature sizes are larger than the marketing minimum gate length equivalent number, but they are still quite small.
@Pukkeh 9 months ago
@@funtechu I understand, I'm just clarifying for the benefit of anyone else who might be reading these. Most people outside the field think the process node name refers to some physical transistor feature size. This hasn't been the case for years.
@funtechu 9 months ago
@@dr.zoidberg8666 Absolutely, though those design limits are much harder to put your finger on. For example, Amdahl's law sets a limit on how much you can parallelize something, but even that can sometimes be worked around by choosing a completely different algorithm that manages to convert what was previously thought to be strictly sequential into some clever parallel implementation. As for longevity, there are two major aspects. For phones, the biggest thing that impacts longevity is battery life, which can be remedied by battery replacement. Personally I keep most of my phones about 5 years, with typically one battery swap in the middle of that period. Most people, though, buy newer phones just because they want the new features, not because their older phone died. The physical process limit on longevity is primarily driven by electromigration, where eventually the physical connections in a chip wear down and disconnect. In mission-critical chips there is a fair amount of effort put into ensuring redundancy in current paths to improve longevity and reliability, but the fact is that heat is one of the largest factors in practice. Keep your electronics cool, make sure they have adequate cooling, and they will generally run longer. Note that this is also more of an issue with newer process technologies because metal layer trace widths are typically much smaller than in older process nodes, meaning electromigration doesn't have to work as hard to break a connection. So with higher density comes the tradeoff of a slightly shorter lifespan.
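The Amdahl's law limit mentioned above is easy to make concrete; a quick sketch (the 95% parallel fraction is an arbitrary example, not a figure from the thread):

```python
def amdahl_speedup(parallel_fraction: float, n_workers: int) -> float:
    """Maximum speedup when only parallel_fraction of the work parallelizes."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_workers)

# Even 95%-parallel code saturates near 20x, no matter the core count:
print(amdahl_speedup(0.95, 8))          # ≈ 5.9
print(amdahl_speedup(0.95, 1_000_000))  # ≈ 20.0
```

The asymptote is 1/serial_fraction, which is why switching to an algorithm with a smaller sequential part can beat simply adding cores.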
@JorenVaes 9 months ago
An interesting take on Moore's law that I heard from Marcel Pelgrom (a famous name in the semiconductor world) during one of his talks was that, for a lot of fields in electronics, there was little 'true' research into novel techniques until Moore's law started failing us. Up to that point, the solution to everything was 'you have more transistors now, so just throw more transistors and complexity at the problem'. By the time you came up with a novel technique to improve performance in the old process, people who just added more transistors to the old technique in the newer process nodes had already caught up with or surpassed your performance gains. You see this now where the calibration processor in some cheap opamp might technically be more 'powerful' than the entire computer network that put men on the moon - just to compensate for the crappy performance of the opamp - and even a 'dumb' automotive temperature sensor might have more area dedicated to digital post-processing to improve lifetime, stability and resolution. It is now that we no longer get these gains from Moore's law (both because scaling has slowed and because it is getting so, so, so expensive to do anything that isn't leading-edge CPUs, GPUs and FPGAs in these nodes) that people are going back to the drawing board and coming up with really cool stuff to get more out of the same process node. I can partially confirm this from my own experience in research on high-speed radio circuits. For a long time, they just used the smaller devices to get more performance (performance here being higher data rates and higher carrier frequencies). This went on for decades, up to the point that we hit 40 nm or 28 nm CMOS, and this improvement just... stopped. For the last 10 years, it has just been an ongoing debate between 22 nm, 28 nm, and 40 nm over which is best.
But still, using the same technology node as researchers 10 years ago, we achieve more than 50x the data rate, and do so at double the carrier frequencies, simply by using new techniques and a better understanding of what is going on at the transistor level.
@RobBCactive 9 months ago
That wasn't really true for CPUs. In the early 90s memory speed began falling behind CPU speed, and chips had to develop caches, instruction-level parallelism, pre-fetch and out-of-order execution. The key to the old automatic performance gains was Dennard scaling, which meant smaller transistors were faster, more power efficient and cheaper. Now process nodes require complicated design and power management to avoid excess heat due to power leakage, as well as more efficient 3D transistor structures like FinFET and now gate-all-around, all more expensive to make. And cache isn't scaling with logic, so Zen4 with V-cache bonds a cache-optimised 7nm process to the 5nm logic of the cores, interfaced externally via a 6nm IOD chiplet.
@jehl1963 9 months ago
Yup. In many respects, I think that highlights the difference between science and Engineering. A lot of engineering resources are invested in incremental improvements. It may not be glamourous, but it is meaningful. Just by constantly smoothing out the rough edges in process, important gains are made. Let's hear it for all of those engineers, technicians, operators and staff who continue to make constant improvements!
@edbail4399 9 months ago
@@jehl1963 yay
@allanolley4874 9 months ago
Another complication is that even in the 70s and 80s, although the effect was often just making a transistor smaller, each time you made transistors smaller you would be faced with new problems. New lithography techniques were required, transistors would show new physical effects when made smaller that had to be compensated for, and so on. Each new level of miniaturization was preceded by a lot of research into different techniques that might allow work at the new level, and many of those programs had false starts, failures and diversions (charge-coupled devices were originally used as memory before being used in digital cameras, etc.). It has been suggested that Moore's law became a goal for the semiconductor industry: they anticipated and funded the research they expected to need given the goal. This is all in addition to innovation, or the lack of it, in the organization of chip sets and hardware, as with cache, GPUs, math coprocessors, attempts to implement parallel computing and so on.
@JorenVaes 9 months ago
@@RobBCactive Perhaps I should have made it a bit clearer in my original comment - I'm an analog/RF guy, and so is Marcel Pelgrom, so it might very well be that his comments only apply to RF/analog. At the same time, from my limited expertise in digital, it does kinda apply there too - sure, there was lots of architectural stuff that happened (which also happened in analog, because you have to architect your complex blocks first), but at the true low-circuit level that wasn't really the case until recently, where you see people coming up with things like error detection per register stage to catch that 1-in-a-million event where you do use the critical path. That lets you run closer to (or even over) the edge 99% of the time and fix the issues in the 1% of the time they occur, instead of running at 3/4 the speed with 0% errors and pissing away 25% more power doing so. Sure, the design also becomes more complex (as my Cadence EDA bills indicate...), but from what I can tell a lot of that is handled at the EDA level.
@GraemePayne1967Marine 9 months ago
I am old enough to remember the transition from vacuum tubes to transistors. At one time, transistors were large enough to see with the unaided eye and were even sold individually in local electronics shops. It has been very interesting to watch the increasing complexity of all things electronics ...
@softgnome 9 months ago
They are still sold individually, and higher-power applications require large transistors; you can't pump a lot of power through a nanometer-scale component.
@johndododoe1411 9 months ago
@@softgnome Yep, and many current chips require a standalone transistor nearby for some key job.
@DamnedSilly 9 months ago
Ah, the days when 'Solid State' meant something.
@iRossco 9 months ago
@@softgnome High-power transistors for what types of applications?
@iRossco 9 months ago
I'm 60... Vacuum tubes were huge. Yeah, I remember the TV repairman coming out to replace a blown tube in my parents' B&W telly; you could watch them glowing through the vents. These guys here talking in terms of "high single to low double digit atoms", fuck me dead! 🤪 Where to in the next 50 yrs? I was forced to learn to use a slide rule in grade 12 instead of our 'modern' calculators in '79.🤦‍♂️
@FloatingOnAZephyr 9 months ago
I can’t imagine how much time you put into researching your videos. Thank you.
@domino2560 9 months ago
Badly researching*
@domino2560 9 months ago
@theadjudicator7323 Some of her errors aren't even in Wikipedia, so that would be an improvement; or she just doesn't read it properly.
@Mitchell_is_smart._You2bs_dumb 9 months ago
not enough
@FloatingOnAZephyr 9 months ago
@theadjudicator7323 Look up ‘literally’ in the dictionary.
@FloatingOnAZephyr 9 months ago
@theadjudicator7323 You did misuse it, whilst being rude. Not sure what the rest of that nonsense was supposed to mean.
@markus9541 9 months ago
My first computer was a Commodore VC-20, with 3 KB of RAM for BASIC. Taught myself programming at age 11 on that thing... good old times.
@memrjohnno 9 months ago
Sinclair ZX80 for years, then a Dragon 32, which was a step up from the VC-20 but below the Commodore 64, which came out about a year after. Pre-internet: typing in code out of a magazine, loading/saving on audio tape. Heady days.
@ziegmar 9 months ago
@@memrjohnno I remember those wonderful times 😊
@quigon6349 9 months ago
My first computer was the Tandy Color Computer 2 with 64K RAM.
@Yogarine 9 months ago
My family’s first computer was an actual C128 as well. Though it spent most of its days in C64 mode because of the greater software library that offered, I actually learned programming in the C128’s Microsoft BASIC 7.0 😆 Like you said… good old times.
@imacmill 9 months ago
The VIC-20 was my first computer, too, but I moved to Atari PCs pretty quick. The 400, then 800, then ST. I taught myself to code on the Atari 400 using a language called 'Action'. Peeks and Pokes were the bleeding edge coding tech on it, and I also learned the ins and outs of creating 256-color sprites by using h-blank interrupts...sophisticated stuff back then. Antic Magazine, to which I had a subscription, was my source for all things Atari. Speaking of Antic Magazine, they once held an art competition for people to showcase their art skills on an Atari. My mom had bought me a tablet accessory for my 400 -- yup, a pen-driven tablet existed back then, and it was awesome -- and I used it to re-create the cover art of the novel 'Dune', a book I was smitten with at the time. I sent my art to Antic, on a tape!!, and I was certain I was going to win, but alas, nada. Fantastic memories, thanks for the reminder...
@djvelocity 9 months ago
I opened YouTube and the first video in my feed was “How Dead is Moore’s Law?”. Thank you, YouTube algorithm 🙌
@LyleAshbaugh 9 months ago
Same here
@melodyecho4156 9 months ago
Same!
@RunicSigils 9 months ago
Well of course, you would be talking about your subscription feed, right? You're not a monkey using the homepage and letting a company decide what you watch, right? Right?
@BradleyLayton 9 months ago
I opened my messenging app and saw the link forwarded from my daughter. Thanks, genetic algorithm!
@Broockle 9 months ago
Such a holistic approach to answering the question, going down so many avenues and showing us how nuanced the subject is. This was awesome 😀
@viktorkewenig3833 9 months ago
"The trouble is the production of today's most advanced logical devices requires a whopping 600-100 steps. A level of complexity that will soon rival that of getting our travel reimbursements past university admin". I feel you Sabine
@Unknown-jt1jo 9 months ago
I love how she delivers this like it's an inside joke that 99% of her viewership can relate to :)
@AICoffeeBreak 9 months ago
I laughed so hard at this. 😆
@viktorkewenig3833 9 months ago
@@AICoffeeBreak we all know the feeling
@dzidmail 9 months ago
How many steps?
@nonethelesszero7950 9 months ago
Since you stated it as anywhere from 600 steps UP TO 100 steps, we're now talking about the complexity and logic of university HR processes.
@johneagle4384 9 months ago
My first computer was a monstrous IBM mainframe. Punch cards and all. Ahhh... those were the days, which I do not miss. Things that would take hours then, I can do in a few seconds today.
@WJV9 9 months ago
Mine was an IBM 360 mainframe: type up a Fortran program on punch cards, add the header cards to your deck, wrap it with a rubber band, and submit it along with your EE course # and student ID#. That was back in 1965. Then wait an hour or two and find out you have syntax errors, so you go back and punch new cards to fix the typos and syntax errors, submit the deck again, and wait a few hours. Those were the days.
@Steeyuv 9 months ago
Started on those as a young man in 1980! These days, funnily enough, things that used to take me seconds, now take somewhat longer…
@tarmaque 9 months ago
Good lord! And I thought _I_ was old!
@radekhn 9 months ago
@@Steeyuv Yes, I remember. You switched the power on, and in a few hundred milliseconds you got an answer: READY. What a time it was.
@appaio 9 months ago
I started with an abacus. THOSE were the days! challenge won?
@MyrLin8 9 months ago
Love this one. Your [Sabine's] comparison between the complexity of designing chips and getting reimbursement paperwork through an associated bureaucracy is sooo excellent. :)
@rammerstheman 9 months ago
Great video Sabine! I recently finished working at a lab that is trying to use materials similar to graphene to replace and supersede silicon. One tiny error about 4 mins in: you described e-beam lithography as a technique for characterising these devices. What you go on to describe is scanning electron microscopy. E-beam lithography is a device fabrication technique mostly used in research labs. Confusingly, the process takes place inside an SEM! Lithography is patterning of devices, not a characterisation technique.
@YouHaventSeenMeRight 9 months ago
e-beam lithography (a form of maskless lithography) was seen as a possible successor to "optical" or photolithography (which uses masks to form the structure layers). Unfortunately it is not fast enough to rival current optical systems: those expose each layer through a mask all at once, almost like a high-powered slide projector, and so achieve much higher throughput than e-beam systems, which have to scan across each chip on a wafer to draw each layer shape. So for now e-beam is relegated to specialist applications and research work. One of the companies building e-beam lithography systems for chip production, Mapper Lithography, went bankrupt in 2018 and was acquired by ASML. ASML has not continued the e-beam work, as they are the sole producer of EUV photolithography machines.
@Hexanitrobenzene 9 months ago
"Confusingly the process takes place inside an SEM!" So whats the difference ? Electron energy or something else ?
@rammerstheman 9 months ago
@@Hexanitrobenzene In an SEM measurement, you scan the beam over your sample to collect a pixel-by-pixel map of a signal, like the number of electrons that are backscattered. While you're scanning, the beam has a lot of energy, so it interacts strongly with your sample. In e-beam lithography, the beam traces out the pattern you want to develop, and you use materials that harden or soften under the electron beam. So I guess the biggest difference is the shape the beam scans in. I think one might also use higher electron doses for lithography.
@Hexanitrobenzene 9 months ago
@@rammerstheman Hm, seems like the main difference is in the preparation of the sample, not the electron beam. Perhaps an analogue of photoresist that is sensitive to electron irradiation is used.
@adrianstephens56 9 months ago
I joined Intel in 2002. At that time people were predicting the end of Moore's Law. My first "hands on" computer was a PDP-8S, thrown away by the High Energy Physics group at the Cavendish and rescued by me for the Metal Physics group. From this you can correctly infer the relative funding of the two groups.
@johndododoe1411 9 months ago
I guess your Metal Physics group didn't focus on the extremely well funded tube alloys ... :-)
@sehichanders7020 9 months ago
Predicting the end of Moore's Law really has become quite akin to fusion power being commercially available. Probably both will happen at the same time.
@chrisc62 7 months ago
I was told in 1984 that the limit of optical lithography was 1 µm (1000 nm); now they say they are going into production with a 10 nm half-pitch next year. I did my undergraduate physics project in the Metal Physics group at the Cavendish, on scanning electron acoustic microscopy.
@AAjax 9 months ago
Many people use "Moore's Law" as shorthand for compute getting faster over time, rather than for a doubling of transistor density. Wrong though that is, it's a common usage, especially in the media. The speed increases we get from the doubling alone have stagnated, which is why CPU clock speeds are also stagnant. Nowadays, the extra transistors get us multiple cores (sadly, most compute problems don't parallelize neatly) and other structures (cache, branch prediction, etc.) that aren't as beneficial as the raw speed increases we used to get from the doubling.
@davidmackie3497 9 months ago
Yeah, in the early days, the new computer you'd get after 4 years had astounding performance compared to its predecessor. But also, how fast do you need to open a word processing document? My inexpensive laptop boots up from a cold start in about 5 seconds, and most applications spring to life with barely any noticeable lag. What we still notice are compute-intensive tasks like audio and video processing. And those tasks are amenable to speed-up from specialized chips. Pretty soon those tasks will be so fast that we'll barely notice them either. But by then we'll be running our own personal AIs, and complaining about how slow they are.
@johnbrobston1334 9 months ago
Sadly, I've got a problem at work that would parallelize nicely, but it's written in an ancient language for which no parallel implementation has been produced, so it needs a complete rewrite in order to do that. And there's no time to do it.
@darrennew8211 9 months ago
The Amiga computer from the early 1980s had a half dozen specialized chips. The memory-to-memory DMA chip (the "blitter"), a variety of I/O chips, something akin to a simple GPU, separate audio chips, etc. It was very common back when CPUs were 7MHz instead of 7000MHz. There's also stuff like the Mill Computer, which is not machine-code compatible with x86 so hasn't really taken off. It's simulated to be way faster for way less electricity than modern chips. One of the advantages of photonic computing is also that the photons don't interact, so it's easier to route photons without wires because they can cross each other without interfering.
@oysteinvogt 9 months ago
Yep, I was thinking the same thing. Also, I think it's strange that RISC technology was not mentioned in this video (or did I miss it?). It is what's driving the superior low-power, high-performance Apple silicon based on ARM, compared to the old CISC architecture still used in Intel and AMD CPUs. Of course, RISC CPUs also found their way into Amigas many, many years ago in the form of the PowerPC architecture, also found in Macs before Intel.
@Hexanitrobenzene 9 months ago
@@oysteinvogt Intel and AMD use RISC cores inside, but they translate the CISC code at the decode stage. I'm waiting for adoption of RISC-V. When I last checked, basically everyone but ARM was on board.
@yamishogun6501 8 months ago
The Amiga was released in July 1985, a year and a half after the Apple Macintosh. Byte magazine stated that the Amiga was better in 5 out of 6 categories but didn't have as nice a look for files navigation.
@darrennew8211 8 months ago
@@yamishogun6501 The GUI was definitely not "pretty". But it was quite functional, and it's the system that made 99% of the internals really easy to understand and use. It's a shame you couldn't really make it multi-user without a radically different CPU design.
@alexzahnd2642 9 months ago
Once again EXCELLENT video ! AND, a HUGE congrats to how much better understandable your videos are in recent months compared to previously, though the content was ALWAYS VERY GOOD! Thanks and keep the good work up!
@tonyennis1787 9 months ago
Back in the day, Moore's Law was that the number of transistors doubled every 18 months. So we're maintaining Moore's Law but simply changing the definition.
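The difference between the 18-month and 2-year formulations compounds quickly; a small sketch of the arithmetic:

```python
def transistor_growth(years: float, doubling_period_years: float) -> float:
    """Multiplicative growth in transistor count after `years` of doubling."""
    return 2.0 ** (years / doubling_period_years)

# Over one decade the two readings of the "law" diverge by roughly 3x:
print(transistor_growth(10, 1.5))  # ≈ 101.6x (18-month doubling)
print(transistor_growth(10, 2.0))  # = 32x    (2-year doubling)
```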
@ittaiklein8541 9 months ago
TRUE! I definitely remember that. When she said 2 years, my first reaction was: NOPE! Moore's Law states 1.5 years! That was repeated over and over. Just had an idea: let's just change the definition of the law so as to fit actual progress. 😅
@NonsenseFabricator 9 months ago
He actually revised it in 1975
@ittaiklein8541 9 months ago
@@NonsenseFabricator - Could be. Nevertheless, I maintain that the revised version did not garner the same popularity as the "Law" in its original form, for I saw it quoted in the original form way after 1975. But this debate has outlasted its importance, so I unilaterally call it over. 🙂
@davideyres955 9 months ago
@@ittaiklein8541works for most governments!
@amentco8445 9 months ago
If you change the definition, it's not a law; thus it died long ago.
@maxfriis 9 months ago
I was convinced that analog computing would be mentioned in such a comprehensive and detailed overview of computing. If you need to calculate a lot of products, as you do when training an AI, you can actually gain a lot by sacrificing some accuracy and using analog computing.
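The accuracy-for-efficiency trade described here can be mimicked in software by attaching relative noise to each multiply, the way an analog multiplier would behave; a toy sketch (the 1% noise level is an assumption for illustration, not a figure from the comment):

```python
import random

def noisy_dot(a: list[float], b: list[float], rel_noise: float = 0.01) -> float:
    """Dot product where every multiply carries analog-style relative error."""
    return sum(x * y * random.gauss(1.0, rel_noise) for x, y in zip(a, b))

exact = noisy_dot([1.0, 2.0], [3.0, 4.0], rel_noise=0.0)  # 11.0, noise disabled
approx = noisy_dot([1.0] * 1000, [1.0] * 1000)            # close to 1000, small random error
```

Because errors on independent terms partly cancel, the relative error of a long sum grows much more slowly than the per-multiply noise, which is one reason neural-network inference tolerates analog arithmetic well.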
@traumflug 9 months ago
Well, giving up deterministic results would open a whole can of other worms.
@maxfriis 9 months ago
@@traumflug Veritasium has an interesting video on the topic.
@BaddeJimme 9 months ago
@@traumflug You don't actually need deterministic results for a neural network. They are quite robust.
@ralphmacchiato3761 9 months ago
@@BaddeJimme Zoom in enough and you'll find it's still deterministic, just not measurable.
@BaddeJimme 9 months ago
@@ralphmacchiato3761 I suppose you consider noise from quantum effects to be "deterministic" right?
@seadog8807 9 months ago
Great update Sabine, many thanks for your continued production of this content!
@ElvisRandomVideos 9 months ago
Great video. I work in the semiconductor industry and you nailed every point from Graphene to EUV.
@jehl1963 9 months ago
Well summarised, Sabine! By coincidence, I'm about the same age as the semiconductor industry, and have enjoyed working in that industry (in various support roles) my entire life. I've been blessed to have worked with many bright, creative people in that time. In the end, I hope that I've been able to contribute too. Kids! Spend the time to learn your math, sciences, and management skills, and then you too can join this great quest/campaign/endeavour.
@sgttomas 9 months ago
I want to destroy your entire legacy 😂
@GlazeonthewickeR 9 months ago
Lol. YouTube comments, man.
@davidrobertson5700 9 months ago
Summarised
@100c0c 9 months ago
@@GlazeonthewickeR What's wrong with the comment?
@GraemePayne1967Marine 9 months ago
@jehl1963 - I strongly agree with your last paragraph. A solid education in mathematics, the sciences, and management (including process and quality management) will be VERY beneficial on almost any career path. Always strive to learn more. Never settle for being just average.
@iatebambismom 9 months ago
I remember reading an article about how electrons would "fall off" chips at smaller than 100 nm, making it a hard limit on fabrication. This was in the single-chip x86 days. Technology is amazing.
@grizzomble 9 months ago
That's not far from where we are hitting the wall. The transistors in "8 nm" chips are about 50 nm.
@Nefville 9 months ago
Oh do I remember those x86s, great chips. I once ran an 8086 w/o a heat sink or fan to get it to overheat and it simply refused.
@Furiends 9 months ago
Tech news is some of the worst out there, because it presents itself as a technical authority, using buzzwords and parroting some amount of truth woven into a more interesting narrative that just makes no sense. No, electrons do not "fall off" chips.
@woobilicious. 9 months ago
Oh they're definitely falling off, there's a reason why they use so much power and get so hot lol.
@johnbrobston1334 9 months ago
@@Nefville Heat sink? On an 8086? I never saw such a thing. I did get my hands on a 20MHz '286 and it needed a heat sink. Wasn't a very satisfactory machine though--there was apparently a motherboard design issue on the particular board I had--I had to power it up, let it run a minute, and then hit reset before it would boot. Never did figure out where the problem was.
@ZenoTasedro
@ZenoTasedro 9 месяцев назад
This is a field I've worked in and it's interesting to hear you cover it. I definitely think that heterogenous computing is going to be what keeps technology advancing beyond Moore's law. The biggest problem I think is that silicon design has typically had an extremely high cost of entry for designers. Open Source ASIC design packages like the Skywater 130nm PDK enables grass roots silicon innovation. Once silicon has the same open source love as software does, our machines will be very different
@haldorasgirson9463
@haldorasgirson9463 9 месяцев назад
I recall the first time I ever saw an op-amp back in the 1970's. It was an Analog Devices Model 150 module that contained a discrete implementation of an op-amp (individual transistors, resistors and capacitors). The thing was insanely expensive.
@kittehboiDJ
@kittehboiDJ 9 месяцев назад
And had no temperature compensation...
@johndododoe1411
@johndododoe1411 9 месяцев назад
I've seen pictures of tube-based op amp modules. Those handled temperature quite differently, needing actual built-in heaters to work.
@Thomas-gk42
@Thomas-gk42 9 месяцев назад
Thankful for your work ❤
@adriendecroy7254
@adriendecroy7254 9 месяцев назад
Sabine, maybe you could do a video on the reverse of Moore's law as it applies to efficiency of software, especially OSes, which get slower and slower every generation so that for the past 20 years, the real speed of many tasks in the latest OS on the latest hardware has basically stayed the same, whilst the hardware has become many times faster.
@BradleyLayton
@BradleyLayton 9 месяцев назад
Yes please. Shannon's law?
@adriendecroy7254
@adriendecroy7254 9 месяцев назад
@@BradleyLayton lol perfect
@squirlmy
@squirlmy 9 месяцев назад
Yeah, but that OS model is kinda specific to PCs, servers and workstations. For example, look into ITRON and TRON, or really any real-time OS. I don't think OSes are getting slower with any sort of consistency. There are a lot of embedded applications where throwing in a complete Linux distro is fast enough to replace a microcontroller with custom software. It's kind of the point of Moore's Law that software bloat just doesn't matter. Some commercial stuff is bloating in response, but other stuff isn't. And remote "software as a service" upends the models, too. Bandwidth becomes a lot more important than OS speed.
@afterthesmash
@afterthesmash 9 месяцев назад
Every OS function on the frame path of a popular video game has gotten faster, while all the rest has stagnated. Bling in bullet time by popular demand.
@snnwstt
@snnwstt 9 месяцев назад
Not sure that you're comparing the same things. A command-line single task of the 70's is not a graphical interface of today, with tons of background services all active and in competition. Different "clients" are targeted now than in the 70's. As for the goal, what is the interest in detecting a mouse click one hundred times faster when the end user is as slow, if not slower, than ever? And if your need is a specific job, you may use an MCU with no OS, loaded with a program itself developed and mostly debugged using a generic OS.
@user-zz5ns8st3p
@user-zz5ns8st3p 9 месяцев назад
Great update Sabine, many thanks for your continued production of this content!
@fro334bro
@fro334bro 9 месяцев назад
10:05 : Wow really good breakdown on the definitions of source, drain, gate and channel for a transistor. Best most clear (non-gobbledygook) definitions, examples and illustrations that I've seen anywhere for how a transistor works. The illustrations of how a 3D approach could work were really good too. Great stuff.
@johndavidbaldwin3075
@johndavidbaldwin3075 9 месяцев назад
I can remember seeing transistors at school; they were cylinders about 1.5 cm long and 0.5 cm wide with three wires sticking out of one end. They were used for the first truly portable radios, which could use zinc-carbon batteries to operate.
@markuskuhn9375
@markuskuhn9375 9 месяцев назад
Transistors for power applications remain that big, or bigger.
@chicken29843
@chicken29843 9 месяцев назад
@@markuskuhn9375 Yeah, I mean, isn't a transistor basically just a switch?
@jpt3640
@jpt3640 9 месяцев назад
@@chicken29843 Only true in the digital world. You use transistors for signal amplification in the analog world, e.g. your hifi. In that case it's a continuum between on and off, thus not a switch.
@gnarthdarkanen7464
@gnarthdarkanen7464 9 месяцев назад
What I find funny is that they finally got the "first truly portable radios" out to market with a slogan proudly and boldly scrawled across the front "Solid state", most popular (in my area) with a 9-Volt box-shaped battery in each one... AND around a decade later, the teenagers were throwing their backs out of whack with MASSIVE "boomboxes" on their shoulders, failing utterly to appreciate the convenience of a radio that fit in your pocket and a pair of earphones to hear it clean and clear anywhere... to have "speaker wars" that threatened the window integrity two houses down the block for a "radio" that weighed as much as an 80 year old phonograph unit... ;o)
@egilsandnes9637
@egilsandnes9637 9 месяцев назад
@@chicken29843 I'd say they're generally more like valves. If you always either turn your valve to completely stop the flow or open it fully, you've practically got a switch, and that's what's happening in digital circuits. There is a plethora of different kinds of transistors though, and for the most part they can be generalized as analog valves. One of the most common uses is as signal amplifiers, for audio and many other things.
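That "analog valve" picture can be put into rough numbers with the textbook square-law model for a MOSFET in saturation. A minimal sketch, assuming idealized behaviour (the model ignores many real-device effects, and the parameter values `k` and `vt` below are invented purely for illustration, not from any datasheet):

```python
# Square-law model: Id = 0.5 * k * (Vgs - Vt)**2 for Vgs > Vt, else ~0.
# k (transconductance parameter) and Vt (threshold voltage) are
# made-up illustrative values.

def drain_current(vgs, k=2e-3, vt=0.7):
    """Saturation-region drain current in amps (square-law approximation)."""
    if vgs <= vt:
        return 0.0  # valve closed: below threshold, (almost) no flow
    return 0.5 * k * (vgs - vt) ** 2

# The "valve" opens gradually with gate voltage:
for vgs in (0.5, 1.0, 1.5, 2.0):
    print(f"Vgs={vgs:.1f} V -> Id={drain_current(vgs) * 1e3:.2f} mA")
```

Run digitally, you only ever use the two extremes of that curve; run as an amplifier, you bias the device into the smoothly varying middle region.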
@Weirdanimator
@Weirdanimator 9 месяцев назад
Correction! GPU was just a marketing term from Nvidia, dedicated graphics microchips have existed since at least the 70s.
@SabineHossenfelder
@SabineHossenfelder 9 месяцев назад
Ah, sorry about that :/
@hughJ
@hughJ 9 месяцев назад
ATI shortly thereafter (9700 series?) tried to one-up them by coining their product as the first ever "VPU" (visual processing unit, IIRC). Kind of funny seeing companies try to twist themselves into knots to explain how their product deserves an entirely new classification. I think part of Nvidia's "GPU" distinction at the time was that it's able to transform 10 million vertices per second, as if something is special about that number.
@majidaldo
@majidaldo 9 месяцев назад
She's referring to the consumer desktop *3d* GPUs. The correction is that they were pioneered by 3dfx, not Nvidia.
@zigmar2
@zigmar2 9 месяцев назад
@@majidaldo There were consumer desktop 3D graphics processors even before 3dfx, like S3 Virge and ATI Rage; they were terrible though. Fun fact: The term GPU was first used by Sony for PS1's graphics processor.
@deth3021
@deth3021 9 месяцев назад
@zigmar2 What about the Matrox 2d, 3d, and 5d cards....
@electronik808
@electronik808 9 месяцев назад
Hi, really great video. One clarification regarding chiplet design and 3D stacking: the chiplet idea is to produce different wafers using different processes and then assemble them together later. This is efficient because you can optimize the process for the individual parts, and use lower-cost processes for different parts. It also improves yield, since you get multiple smaller chips.
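The yield point generalizes with the classic first-order Poisson defect-yield model. A sketch with an invented defect density, purely to show why smaller dies help (real defect densities and the appropriate yield model vary by process):

```python
import math

def poisson_yield(area_cm2, defects_per_cm2):
    """First-order Poisson model: fraction of good dies, Y = exp(-A * D)."""
    return math.exp(-area_cm2 * defects_per_cm2)

D = 0.1  # assumed defects per cm^2, purely illustrative

# One big 4 cm^2 monolithic die vs. 1 cm^2 chiplets assembled later:
big = poisson_yield(4.0, D)
small = poisson_yield(1.0, D)
print(f"monolithic 4 cm^2 die yield: {big:.1%}")   # roughly 67%
print(f"single 1 cm^2 chiplet yield: {small:.1%}")  # roughly 90%
# With known-good-die testing before assembly, a defect costs one small
# chiplet rather than the whole large die.
```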
@rayc7192
@rayc7192 9 месяцев назад
Excellent channel. This would be mandatory viewing for any science class I would teach.
@MrLeafeater
@MrLeafeater 9 месяцев назад
I started with a Commodore 16 that plugged into the back of the TV. It was pretty horrifying, even at the time. Love your work!
@XmarkedSpot
@XmarkedSpot 9 месяцев назад
Shoutout to the channel Asianometry, an excellent first hand source for everything microchips and beyond!
@geoffreybrophy5959
@geoffreybrophy5959 9 месяцев назад
Thank you for the nice overview of coming technologies. Please continue covering these and future technologies as they evolve.
@arailway8809
@arailway8809 9 месяцев назад
Always good to see you, Sabine.
@tommiest3769
@tommiest3769 9 месяцев назад
An interesting paradox of the last 50 years is that although computational power has increased exponentially, breakthroughs in fundamental physics have essentially stalled. Outside of cell phones, our daily lives have not changed much since the 1980s or 1990s. Our ability to travel to space was actually better in the 1970s than it is today (they were going to the Moon and we aren't). I am still waiting for the moment when all of this incredible computational power will translate into commensurate advances in basic science.
@hayvenforpeace
@hayvenforpeace 9 месяцев назад
It won’t. Human nature is to funnel any advancements into profits for the super rich and the war machine. We will continue to stagnate until the species finally goes extinct due to ecosystem collapse and environmental devastation in a few decades.
@davidmackie3497
@davidmackie3497 9 месяцев назад
Depends on how you define "fundamental physics". All the computing in the world isn't going to find a 5th force of nature, if it doesn't exist. But finding and understanding the strong and weak forces hasn't changed society and perhaps never will. It's E&M, radiation detection, and computational algorithms + hardware that have brought us many new medical imaging techs (PET, 4D ultrasound, really good and fairly cheap CT and MRI scans). We have much better designs for nuclear reactors now -- if we'd only be allowed to build them. We have greatly improved solar cells. Digital cameras with higher resolution than film. Etc. Regarding space travel, we are way beyond the 1970s -- it's just that we are sending unmanned probes and telescopes now. And don't even get me started on what computation has done for biology.
@tommiest3769
@tommiest3769 9 месяцев назад
@@davidmackie3497 Yes, our space probes and telescopes have improved significantly in the last 50 years, but our space propulsion technology and our ability to place humans in space have not advanced during that time period. We are still using chemical rockets. And speaking of chemical rockets, there have been no improvements in the energy density of these chemical rocket fuels during that period. Speaking of nuclear reactors, nuclear fusion reactors are still "20 years in the future" as they have been for decades-- a running joke in the community. Even with our enhanced computational ability, nuclear fusion is still years in the future. Do you realize that you contradicted yourself? You said that "finding and understanding the strong and weak forces hasn't changed society and perhaps never will" and then moments later you talk about "better designs for nuclear reactors". Nuclear fission and fusion are based on the strong force and both of these processes obviously have had a huge impact on society since we truly understood them back in the first half of the 20th century.
@davidmackie3497
@davidmackie3497 9 месяцев назад
@@tommiest3769 Thanks, lots of good points. I would contend though, that designing better fission reactors doesn't require us to understand the weak and strong forces. It requires computer modeling of radioactive decay chains, heat transfer, etc., all of which were known (in principle) 50 years ago. As with fission reactors, the chief barrier to nuclear space engines is public outcry. People actually get mad about us "polluting space" -- and they're not talking about overcrowding of near-Earth orbit. They seriously believe it's morally wrong to send radioactive anything to Mars or Jupiter! Then when you point out the ludicrousness of their argument, they deflect to, "why are we going to space anyway?"
@tommiest3769
@tommiest3769 9 месяцев назад
@@davidmackie3497 You make some valid points about medical imaging and biosciences. I was responding to your comment about how finding and understanding the strong nuclear force has not changed society. Yes, NOW it won't help us develop better nuclear reactors, but nuclear reactors and nuclear weapons simply would not exist if we had not first reached a fundamental understanding of the strong nuclear force. Usually, radical breakthroughs, as opposed to incremental improvements, in our technological ability come about via revolutions in our understanding of nature on a fundamental level.
@jimsackmanbusinesscoaching1344
@jimsackmanbusinesscoaching1344 9 месяцев назад
There is another set of problems that are related but a lot less obvious. Design complexity is a problem in itself. It is difficult to design large devices and get them correct. To accomplish this, large blocks of the device are essentially predesigned and just reused. This makes the design simpler but less efficient. A complement to this is testing and testability. By testing, I mean the simulations that are run before the design is finalized, prior to design freeze. The bigger the device, the more complex this task is. It is also one of the reasons that big blocks are just inherited. Testability means the ability to test and show correctness of production. This too gets more complex as devices get bigger. These complexities have led to fewer new devices being made, as one has to be very sure that the investment in developing a new chip will pay off. The design and preparation-for-manufacturing costs are sunk well ahead of the revenue for the device, and these costs have also skyrocketed. The response to this has been the rise of the FPGA and similar ways of customizing designs built on standard silicon. These too are not as efficient as a fully custom device would have been, but they have the advantage of dramatically lowering the upfront costs.
@johnhorner5711
@johnhorner5711 9 месяцев назад
Thank you for another fascinating video. I have a few nits to pick. 1) Back in 2018 Global Foundries stated that they were abandoning their 7 nm technology deal with Samsung and would no longer attempt to compete at the very highest levels of semiconductor technology, but rather would focus on other markets. Thus there are only three companies (TSMC, Samsung and Intel) still competing at the leading edge, and of these Intel is already well behind the curve while TSMC is leading. 2) Heterogeneous computing does not extend Moore's law. It is a back-to-the-future adaptation to the increasing cost of moving to ever higher transistor densities. In many ways it is a re-discovery of the advantage of chips being highly customized for particular tasks.
@viralsheddingzombie5324
@viralsheddingzombie5324 9 месяцев назад
Diodes are primarily on/off switches. Transistors are amplification and flow control devices.
@casnimot
@casnimot 9 месяцев назад
Yes, Moore's Law can continue for a while through parallelism. Heterogeneous approaches are working now and we're fairly close (I think) to practical use of resistor arrays for memory and compute (memristors). AI approaches to patterns of execution, and how that affects CPU, I/O and RAM, will also likely bear considerable fruit since we've spent quite a bit more time on miniaturization than optimization. In gaming, we're looking at a time of what will feel like only linear improvements in experience. But they will continue, and then pick up as we grow into new combined compute/storage/IO materials, structures and strategies. Costs and power consumption will be the real bear, and we MUST NOT leave solving such problems to the likes of NVIDIA (from whom I will purchase nothing). True competition is key.
@rredding
@rredding 9 месяцев назад
My first computer was an Apricot PC, in 1984. A 16-bit system that was compatible with the Sirius 1. It was promoted in fora as WAY better than the IBM: a DOS system, but not IBM-compatible. The screen was high resolution, 800*400. *The issue: it was not compatible with IBM, so hardly any software became available. 😭
@roccov3614
@roccov3614 9 месяцев назад
Saying we are coming to the end of Moore's Law is like saying we know everything there is to know. There will always be more to learn and newer technology to advance Moore's Law.
@jefferychartier2536
@jefferychartier2536 9 месяцев назад
this is an amazing topic, thanks for posting.
@ppst5524
@ppst5524 9 месяцев назад
Globalfoundries does not have the most advanced CMOS technologies. They left the race in 2018, so we are down to 3 companies.
@SolidSiren
@SolidSiren 9 месяцев назад
My first computer was a commodore 64. My dad gave it to me at age 5. I LOVED it. He taught me how to check the drive, see what's on a floppy disk, and how to launch games on it =)
@chriswondyrland73
@chriswondyrland73 9 месяцев назад
Great sum-up, as always!
@zachhoy
@zachhoy 9 месяцев назад
a study-worthy episode Sabine, thanks for the diligence!
@AndrewMellor-darkphoton
@AndrewMellor-darkphoton 9 месяцев назад
It feels sporadic and outdated
@zachhoy
@zachhoy 8 месяцев назад
@@AndrewMellor-darkphoton I respect your opinion but disagree.
@eonasjohn
@eonasjohn 9 месяцев назад
Thank you for the video.
@MCsCreations
@MCsCreations 9 месяцев назад
Thanks a bunch for all the info, Sabine! 😊 Stay safe there with your family! 🖖😊
@talkinghat88
@talkinghat88 9 месяцев назад
Thank you. You have always made difficult names easier to understand. 👍
@barryon8706
@barryon8706 9 месяцев назад
My family's first computer was a Commodore 128. I heard that the CP/M mode was used more in Europe than here in the U.S. I still miss that machine.
@peggyesterhuizen4207
@peggyesterhuizen4207 9 месяцев назад
Ah, when computers were tools and not decision tree minefields
@ArtisanTony
@ArtisanTony 9 месяцев назад
Thank God! I outlived Moore :) My first computer was a 1982 model Compaq luggable that had two 5 1/4" floppy drives. It had a 9" amber screen. I traded it in a year later for the same computer, but that one had a 20 MB hard drive. I was a big timer in 1982 :)
@tarmaque
@tarmaque 9 месяцев назад
Hahaha! That reminds me of when I scored a hard drive for my first Macintosh in the late 80's. Back then the trick was to shrink the Mac OS down enough so you could get both the system and Word on one floppy, hence not having to swap out floppies constantly. Then I got this shiny new _12 mb_ hard drive! Bliss! Not only enough room for the full OS and Word, but a few dozen documents. It was SCSI, huge, noisy, and I loved it. I held onto it for years simply for the novelty factor, but finally it got lost.
@ArtisanTony
@ArtisanTony 9 месяцев назад
@@tarmaque Yes, I am mad for getting rid of my luggable lol
@billirwin3558
@billirwin3558 9 месяцев назад
Ah yes, the days when computers were almost comprehensible to its users. Anyone for machine language Ping Pong?
@ctakitimu
@ctakitimu 9 месяцев назад
You're so lucky to have had a Commodore 128! We started on a Sinclair ZX Spectrum before upgrading to a Commodore 64 (C64). Eventually upgrading to an Amiga 500. Then later came exploring how to build XT/AT systems, then 286 etc.
@Patrick-il4es
@Patrick-il4es 8 месяцев назад
Excellent research and presentation!
@hosermandeusl2468
@hosermandeusl2468 9 месяцев назад
My first computer was a Commodore Vic-20, later got a '64. Then moved to an IBM PCC 2000 - playing with power!
@anttikarttunen1126
@anttikarttunen1126 9 месяцев назад
I first read it as "How Dead is Murphy's Law?", and wondered how Murphy's law could ever be dead!
@Anerisian
@Anerisian 9 месяцев назад
…when you least expect it.
@bobjones2041
@bobjones2041 9 месяцев назад
You put it in a box and it could be alive or dead, just like cats in a box too
@ForeverTemplar
@ForeverTemplar 8 месяцев назад
@@bobjones2041 Wrong. Murphy would say the smart money is the cats are dead.
@bobjones2041
@bobjones2041 8 месяцев назад
@@ForeverTemplar they was before Murphy had gender reassignment surgery and took up toking weed
@johnfraser6013
@johnfraser6013 9 месяцев назад
Thank you Sabine ~ very clear and concise discussion. 👍👍
@TomTschritter
@TomTschritter 9 месяцев назад
incredibly informative, compelling communication and some deep deadpan
@mihan2d
@mihan2d 9 месяцев назад
It's crazy: the phone I'm typing this with has the same amount of memory as my 10-year-old gaming laptop, which can even now run some modern-ish games. Rest in peace, Moore's law, you've done great things for progress.
@aniksamiurrahman6365
@aniksamiurrahman6365 9 месяцев назад
This isn't actually crazy. Cos both have the same amount of miniaturization.
@yeetdeets
@yeetdeets 9 месяцев назад
The size of the memory is only one factor though. Other factors include long term health of the storage and speed of read/write.
@mihan2d
@mihan2d 9 месяцев назад
@@yeetdeets I meant RAM actually. But yeah the SSD I installed myself into the thing is only 128 GB and it's also the same as my Xperia phone
@howardjames1191
@howardjames1191 9 месяцев назад
I used to feel in touch with modern technology and understood the science explaining how or why it worked. So why do I suddenly have the impression that I dozed off, waking to find it's all become some kind of magic trick that only a few magicians can explain or understand?
@comet.x
@comet.x 9 месяцев назад
Because things are now made by a bunch of different specialists from different fields collaborating, and none of them knows what the others know. This leaves the very few who understand all the specializations as the only ones who can explain it.
@TargetSurvival
@TargetSurvival 9 месяцев назад
Thank you, thank you, thank you! Just discovered your channel and your videos. SO much information for me to absorb, bless you for this.
@amphibiousone7972
@amphibiousone7972 9 месяцев назад
Great lecture Sabine, thank you. I found myself imagining a smartphone being built using 1970 components. The size of the structure would be incredible, as would its power consumption. No, I didn't do the math... it was a quick walk on Premise Beach. Good stuff 🙏
@phoule76
@phoule76 9 месяцев назад
We're not getting old, Sabine; we're becoming legacy.
@fatguy321
@fatguy321 9 месяцев назад
“Leagacy”
@cbrew8775
@cbrew8775 9 месяцев назад
@@fatguy321 id like to buy vowel, a contanty thingish.. shit stupid keyboard
@michellowe8627
@michellowe8627 9 месяцев назад
My first computer was a Motorola M6800 design evaluation kit, a 6800 micro processor, serial and parallel I/O chips, a clock, and a ROM monitor memory. It had a 48k memory board, a surplus power supply scavenged from a scrapped out minicomputer and I used my TV as a glass teletype.
@RobertWGreaves
@RobertWGreaves 9 месяцев назад
My first computer was a radio shack pocket computer, then I bought a radio shack color computer. Next came a Kaypro II. That was when I started to get serious about programming and getting them to “talk” to each other. Today as an audio engineer, all the audio work I do is done on a computer with various digital peripherals. Now that I am retired I am still amazed at the advances being made in the computer world.
@d2d319
@d2d319 9 месяцев назад
AMAZING channel! Your videos are detailed and well explained, your jokes are funny and your accent is cool! ❤
@AICoffeeBreak
@AICoffeeBreak 9 месяцев назад
Excellent video on a very important subject. I'm in awe about the amount of research that went into this one.
@adriencutivet9239
@adriencutivet9239 9 месяцев назад
She gets a lot of things wrong unfortunately
@AICoffeeBreak
@AICoffeeBreak 9 месяцев назад
@@adriencutivet9239 I'd be interested to know what exactly.
@douglasmatthews2334
@douglasmatthews2334 9 месяцев назад
Sabine! You made my morning with your upload. Ty for explaining things in simple terms I can understand. Ty for all of your videos!
@JustinCase-wy1wm
@JustinCase-wy1wm 9 месяцев назад
Hello, small correction. The method which is used for defect inspection is (Scanning) Electron Microscopy, not Electron Beam Lithography. SEM is for imaging, EBL is used for resist patterning and subsequent structuring, usually in special cases were Photolithography does not provide sufficient resolution.
@__-tz6xx
@__-tz6xx 9 месяцев назад
12:14 I heard of this cooling method on Linus Tech Tips and hearing how well it works makes me stoked. I hope to see cooling like this in future consumer chips.
@brianmason9803
@brianmason9803 9 месяцев назад
Moore's law was never a law. It was an observed trend that manufacturers turned into an obsession. It caused an escalation of software that we will come to realise has made many systems unwieldy and unstable.
@michaelblacktree
@michaelblacktree 9 месяцев назад
Moore's Law isn't really a law, just like the Drake Equation isn't really an equation.
@ernestgalvan9037
@ernestgalvan9037 9 месяцев назад
“Moore’s Law” rolls off the tongue a lot more smoothly than “Moore’s Observed Trend” 😎 P.S. She DID comment about the nomenclature.
@crocothemis
@crocothemis 9 месяцев назад
Yes, so she said. Murphy's law isn't a law either.
@jamesdriscoll_tmp1515
@jamesdriscoll_tmp1515 9 месяцев назад
My observation has been the effort and ingenuity that amazing individuals have shown in working to make this a reality.
@luelou8464
@luelou8464 9 месяцев назад
People always talk about it like Gordon Moore was some random intellectual; the guy co-founded Intel. It was originally going to be Moore-Noyce, until they realised that sounded too much like "more noise". Half the reason it ended up holding true for so long was that Intel used it as an internal target. Allegedly they sometimes even withheld advancements to fit Moore's law, as otherwise they wouldn't have been able to make their own products obsolete quickly enough.
@TLguitar
@TLguitar 9 месяцев назад
I'm only at the beginning of the video, so I'm not sure this isn't addressed later, but I read that the "5nm", "3nm" etc. nomenclature of modern processors is actually just marketing and doesn't indicate the actual size of their transistors, which in effect is currently considerably larger. So perhaps, on the upside, we are still quite far (definitely more than it might seem on paper) from the theoretical 250 picometer or so size limit set by silicon atoms, although I assume, without having finished the video yet, that this small a scale would be unusable in standard processors due to quantum effects.
@HisBortness
@HisBortness 9 месяцев назад
Tunneling seems to become a problem at far larger scales than that.
@FilmsSantaFe
@FilmsSantaFe 9 месяцев назад
I think it refers to some type of feature size, like the characteristic dimension of the conductive paths or such, please someone correct me.
@hughJ
@hughJ 9 месяцев назад
@@FilmsSantaFe It used to be, but I think that went out the window over a decade ago. Probably around the time we switched from planar cmos to finfet.
@andrewsuryali8540
@andrewsuryali8540 9 месяцев назад
The nomenclature is pure marketing, but not because the transistors are much larger. The transistors themselves are within the same order of magnitude in size as the nomenclature indicates; it isn't that a "3 nm" transistor is 1 micron in size. What makes the nomenclature pure marketing is that it refers to the smallest "cut" they can make on the die. This means that there might be some part of the transistor that is roughly 3 nm in size, but it's only that one part and nothing else. Furthermore, which part it is depends on the company, and in the past Intel has been more "honest" than others about which parts can be claimed to be the size their nomenclature indicates. With all that said, once we get to the angstrom scale, the nomenclature will become pure fantasy.
@TLguitar
@TLguitar 9 месяцев назад
@@andrewsuryali8540 I don't know much about the actual architecture of such processors, but different sources state that the gates (which I assume are the points where current, or data, is transferred between transistors) in the CPU are about 10 times as large as the nm generation name suggests. So, considering Moore's Law spoke of doubling the number of transistors every two years, if we equate that with a doubling of density, their marketing implies a theoretical 6-7 year gap between our actual technological standing and what might only come in future iterations.
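The back-of-the-envelope behind a claimed gap like that can be made explicit. A sketch with illustrative numbers only: the 10x figure and the two-year doubling period are the comment's assumptions, and the answer depends on whether you treat the discrepancy as one linear dimension or square it into a density ratio:

```python
import math

def naming_gap_years(linear_factor, doubling_period_years=2.0, square=True):
    """Years of 'apparent' progress implied by a linear feature-size
    discrepancy, assuming density doubles every doubling_period_years.
    square=True treats the discrepancy as 2D (density ~ 1/size**2)."""
    ratio = linear_factor ** 2 if square else linear_factor
    return math.log2(ratio) * doubling_period_years

# Treating a 10x discrepancy one-dimensionally gives the ~6-7 years
# mentioned above; treating it as an area/density ratio doubles that.
print(round(naming_gap_years(10, square=False), 1))  # ~6.6
print(round(naming_gap_years(10, square=True), 1))   # ~13.3
```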
@ivanerofeev4873
@ivanerofeev4873 9 месяцев назад
4:12: e-beam lithography is a way of making patterns on the wafer; what you describe is scanning electron microscopy (SEM). People also use atomic force microscopy (AFM), but both mainly during the development stage and for random quality checks, not in the production of every single device.
@jonwesick2844
@jonwesick2844 9 месяцев назад
My first computer was a Commodore 64. I used it to write a program that scheduled experimenters' shifts at the TRIUMF particle accelerator.
@douglasstrother6584
@douglasstrother6584 9 месяцев назад
Nice! The C-64 rocked!
@AdityaMehendale
@AdityaMehendale 9 месяцев назад
10:22 you missed out on a Gandalf-opportunity :)
@SabineHossenfelder
@SabineHossenfelder 9 месяцев назад
Dang!
@thepicatrix3150
@thepicatrix3150 9 месяцев назад
We need a new Sabine song to tell us about the good scientific discoveries in 2023 and make fun of the BS
@gstlynx
@gstlynx 9 месяцев назад
You are a great presenter and I love your humor. Thanks.
@TrondBrgeKrokli
@TrondBrgeKrokli 9 месяцев назад
Thank you for mentioning all the options. At the start, I was wondering if photonic computing still only exists in a futuristic theory, but at least some research already exists.
@paxdriver
@paxdriver 9 месяцев назад
Sabine, how do you release a video on IAI the same day as your channel? You're so prolific I have trouble keeping up with everything you publish 😂 Great problem to have, thank you for all your hard work. Communicating science without dumbing it down is so crucial in this day and age. I hope you realize how much you inspire and help people learn to learn.
@Thomas-gk42
@Thomas-gk42 9 месяцев назад
Haha, yes thank you in SH's name for your comment, had the same problem yesterday. Great appearance of her in that talk again, wasn't it? Do you know when and where this event happened? Couldn't find any notice.
@paxdriver
@paxdriver 9 месяцев назад
@@Thomas-gk42 Not at all, I just stumbled across it while this video was on and noticed the release times were hours apart lol
@1873Winchester
@1873Winchester 9 месяцев назад
We have a lot of improvements to be done at the software level too. Modern software is so bloated.
@hellomeow9590
@hellomeow9590 9 месяцев назад
Yeah, a lot a lot. Like, several orders of magnitude. A lot of modern software is built on layers upon layers of inefficient code.
@yeetdeets
@yeetdeets 9 месяцев назад
Even if Moores law ends, compute will likely still become progressively cheaper due to production scale, as will energy due to new technologies. So society rewriting a bunch of the bloated software seems unlikely. I actually think there is/will be a small gold rush of utilizing the cheap compute in new areas.
@danielhadad4911
@danielhadad4911 9 месяцев назад
Like all the software that comes with your printer and serves no purpose.
@johnjakson444
@johnjakson444 9 месяцев назад
Another chip guy here. The transistor symbol at the start should have used the MOSFET gate, even though the O for Oxide isn't used anymore. NPU used to mean Network Processor Unit, but Neural is so much more interesting. The real problem in my view is the Memory Wall, which forces CPUs to have ever deeper cache hierarchies and gives worse performance when the processor is not making much use of the massively parallel capabilities. Cold starting PCs today is not significantly faster than 40 years ago, 30s vs 100s or so, but many capabilities are orders of magnitude higher. If we had a kind of DRAM with, say, 40 times as much random throughput (called an RLDRAM), processor design would be an order or two simpler, but coding would have to be much more concurrent. Instead of using the built-in SIMD x86 opcodes to write codecs, we would explicitly use hundreds of threads to perform the inner operations. We then have a Thread Wall problem. And instead of including DSP, NPU, and GPU-like functions to help the CPU, we would write the code to simulate those blocks across a very large number of simpler but also slower CPUs. But that possibility has slipped away. I could also rant on about compilers taking up 20-50 GB of disk space when they would once fit onto floppy disks. My code that used to compile in 2 minutes now compiles in maybe a few secs; that speedup does not jibe with the million-fold transistor counts needed to do the job. And software complexity..... I think we all jumped the shark here.
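The Memory Wall pressure described in that comment is usually quantified with the textbook average-memory-access-time (AMAT) formula, applied recursively through the cache hierarchy. A minimal sketch; all cycle counts and miss rates below are made up for illustration, not measurements of any real chip:

```python
# AMAT = hit_time + miss_rate * miss_penalty; an L1 miss's penalty is
# the AMAT of the next level down. All numbers are illustrative.

def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time in cycles for one cache level."""
    return hit_time + miss_rate * miss_penalty

DRAM = 200  # assumed round-trip to DRAM, in cycles

# Without an L2, even a 5% L1 miss rate pays the full DRAM trip:
l1_only = amat(hit_time=4, miss_rate=0.05, miss_penalty=DRAM)  # 14.0
# Add an L2 that catches 80% of those misses; the L1's miss penalty
# becomes the L2's own AMAT instead of a raw DRAM access:
l2 = amat(hit_time=12, miss_rate=0.20, miss_penalty=DRAM)      # 52.0
with_l2 = amat(hit_time=4, miss_rate=0.05, miss_penalty=l2)    # ~6.6
print(l1_only, l2, with_l2)
```

Each extra level trades a little hit latency for a much smaller effective miss penalty, which is exactly why slow DRAM random throughput keeps pushing designs toward deeper hierarchies.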
@haldorasgirson9463
@haldorasgirson9463 9 months ago
On the intro: my first PC that connected to a monitor (a TV) was a Commodore VIC-20 that came with 5 KBytes of RAM. I upgraded the RAM to 32 KBytes myself. What a powerhouse. An 8-bit 6502 processor running at 1 MHz. I taught myself how to program on that VIC-20, so it was worth it to me.
@herculesrockefeller8969
@herculesrockefeller8969 9 months ago
"...that's more effort than most consumers are willing to put into a text message!" 😅 Thank you, Sabine, for another well done and interesting episode.
@ckmishn3664
@ckmishn3664 9 months ago
1:47 Not to be pedantic, but what he noticed (and what your graph shows) was that the number of transistors was doubling every year. He later amended his observation to every 2 years as things were already slowing down.
@TheMg49
@TheMg49 9 months ago
Fascinating engineering. I remember the olden days. Seen some amazing advances. Thanks for your vids. Thumbs up!
@Harkmagic
@Harkmagic 9 months ago
There were concerns about this way back in 2000. Miniaturization has slowed dramatically. Increases in processing power have largely been the result of improvements in parallel processing. The early 2000s had faster processors than we do today. Instead, we now get more done with several parallel processes via multithreading.
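The shift from clock speed to parallelism that this comment describes runs into Amdahl's law: the serial fraction of a program caps the overall speedup no matter how many cores you add. A minimal sketch, using an illustrative 90%-parallel workload:

```python
# Amdahl's-law sketch: when speedups come from core counts rather
# than clocks, the serial fraction of a program caps the gain.
# The 90%-parallel workload below is an illustrative assumption.

def amdahl_speedup(parallel_fraction, n_cores):
    """Overall speedup on n_cores for a given parallelizable fraction."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# Even with 64 cores, a 10% serial fraction keeps the speedup
# well under the 1/serial = 10x ceiling.
for cores in (2, 8, 64):
    print(cores, "cores ->", round(amdahl_speedup(0.9, cores), 2), "x")
```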
@brainthesizeofplanet
@brainthesizeofplanet 9 months ago
Still, the nm race will hit a wall at around 2 nm.
@marcossidoruk8033
@marcossidoruk8033 9 months ago
"The 2000s had faster (single-threaded) processors than we do today" - that is demonstrably wrong, a totally ridiculous statement. Unless you are referring to some non-consumer research product I am not aware of, that is just *extremely* untrue.
@MaGaO
@MaGaO 9 months ago
​@@marcossidoruk8033 I remember Pentiums hitting the wall at around 3.4GHz. Current Intel/AMD CPUs don't clock faster than that IIRC in single-thread mode even if they are more capable.
@aniksamiurrahman6365
@aniksamiurrahman6365 9 months ago
Unfortunately, no. The increase in computational power has slowed down even more than miniaturization has. Just because we have more computational power than we need doesn't mean computational power is increasing at the same rate as before.
@marcossidoruk8033
@marcossidoruk8033 9 months ago
@@MaGaO First, that is a high-end CPU of the late 2000s, overclocked. Most mid-range CPUs today have that as their base frequency and can be overclocked to 3.6 GHz or more; high-end CPUs can reach more than 4 or 5 GHz at peak. Second, higher frequency doesn't imply higher single-core performance at all. Cache sizes matter *a lot* for memory-intensive programs, and even in compute-bound programs most instructions are faster on today's CPUs. Modern CPUs absolutely smoke old ones even when running at the same frequency on a single core, except maybe when executing a particular instruction that happens to be slow (I remember right-shift instructions being awfully slow on some modern Intel processors; also, MMX instructions are deprecated and made deliberately slow, so those will be slower as well). And if you add modern SIMD instructions like AVX-512 to the equation, it's not even remotely close: a modern CPU will easily 4x a relatively old one, or better. The reason for all of this is that CPU manufacturers are not stupid. Single-threaded performance matters, since not every problem can or should be parallelized, and they are constantly trying to improve it. The only difference is that in the past the main bottleneck was floating-point operations; today those are blazingly fast and algorithmically very hard to make faster, so most of the optimization effort goes into better cache architectures (which also matter for multithreading), better branch prediction, more and more flexible ALUs, better pipelining and out-of-order execution, etc.
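The frequency-vs-performance point in this reply comes down to simple arithmetic: single-core throughput is roughly clock frequency times instructions per cycle (IPC), so two chips at the same clock can differ several-fold. A sketch with purely illustrative IPC figures, not measured values:

```python
# Simple arithmetic behind the reply above: single-core throughput
# is roughly clock frequency * instructions per cycle (IPC), so two
# chips at the same 3.4 GHz can differ several-fold. IPC figures
# here are illustrative assumptions, not measurements.

def perf_ginstr_per_s(freq_ghz, ipc):
    """Billions of instructions retired per second."""
    return freq_ghz * ipc

old_core = perf_ginstr_per_s(3.4, 1.0)  # hypothetical late-2000s core
new_core = perf_ginstr_per_s(3.4, 4.0)  # hypothetical wide modern core
print(new_core / old_core)  # several times faster at the same clock
```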
@terrymichael5821
@terrymichael5821 9 months ago
The ENIAC (Electronic Numerical Integrator and Computer) was designed by J. Presper Eckert and John Mauchly at the University of Pennsylvania; construction began in 1943 and was not completed until 1946. It occupied about 1,800 square feet and used about 18,000 vacuum tubes, weighing almost 50 tons.
@oobihdahboobeeboppah
@oobihdahboobeeboppah 9 months ago
I always appreciate Sabine's lectures here on YT. Her delivery is scaled back to a level that even I can understand her material. We're close in age (I'm still older), so her references to past tech hit home for me. Moreover, her accent (to my American ears) isn't so strong that I can't follow her; I get a kick out of hearing the different ways some words are pronounced, and I guess it's we Americans who are saying things wrong. Some of her topics are over my head even though I've been in tech all my life, and if I find my brain drifting off, her very pleasant demeanor and style reels me back in. Her contributions to teaching science are second to none!
@DanH11
@DanH11 9 months ago
This was my first Sabine Hossenfelder video, just stumbled on it tonight. I love your dry humor sprinkled throughout. Definitely subscribing for more cheeky tech videos.
@TheOneTonHammer
@TheOneTonHammer 9 months ago
GPUs were pioneered by 3DFX, Matrox, and ATI. Nvidia came later.
@ThePowerLover
@ThePowerLover 9 months ago
But Nvidia bought 3DFX.
@TheOneTonHammer
@TheOneTonHammer 9 months ago
@@ThePowerLover Correct, but the statement was "...pioneered by". Nvidia didn't pioneer discrete GPUs; they purchased 3DFX after the fact.
@ThePowerLover
@ThePowerLover 9 months ago
@@TheOneTonHammer But when you buy something, you buy its history.
@TheOneTonHammer
@TheOneTonHammer 9 months ago
@@ThePowerLover True, but her statement was based on time. Pioneering something means you're the first. ATI created the first GPU in 1987. nVidia didn't even exist until 1993.
@Jabjabs
@Jabjabs 9 months ago
Covered all the bases fairly well. This slowdown in progress is why a lot of manufacturers are teaming up with OS developers on security/functionality integration. Windows 11 requiring certain security processors, which means it only works on hardware at most 4-5 years old, is a good example. I expect this to be a growing trend in the future. Without that requirement, there is the chance that we could have computers over a decade old that are still fairly usable, even if not the fastest. Maybe this is how Linux will get a foothold on the desktop? Don't know, but we shall see.
@hughJ
@hughJ 9 months ago
My parents are running my hand-me-down Nehalem box on Windows 10. I'm morbidly curious to see what'll happen first: the PC dies, my parents die, or I'm forced to try and teach them Linux.
@peterasamoah8779
@peterasamoah8779 9 months ago
I love this video!!! Thank You Sabine ☺️🤩🫶
@crawkn
@crawkn 9 months ago
While many of the new ideas in alternative materials and design concepts are promising, few of them are anywhere near production, so we need to focus on the factors that are easier to improve. Since waste heat is a significant problem and battery life requires low power consumption, we need to work first on efficiency. Cooling isn't really a complicated problem; we just have to accept that it's needed and design it in. Heat removal isn't exactly high tech, it just needs miniaturization. Designing a solid-state, sealed micro heat pump seems fairly straightforward.
@rickarmstrong4704
@rickarmstrong4704 9 months ago
Sabine, I am reminded first of Tetris (the stacking of transistors), second of Edison's search for a lightbulb filament (seeking out other materials for making transistors), and of the always-ten-years-off commercialization of fusion. I do, however, give Moore's law better odds :) Thank you, Sabine!
@extremawesomazing
@extremawesomazing 9 months ago
I breathe a sigh of relief as I'm reassured that my dream of smart artificial nails will some day allow the world to watch Sabine on literal thumbnails and give her videos a literal thumbs up.
@neovi6424
@neovi6424 9 months ago
Moore's law only exists for GPU pricing: guaranteed to double every 2 years lol
@joetuktyyuktuk8635
@joetuktyyuktuk8635 9 months ago
My family's first computer was a Commodore 64. My brother and I would spend a day typing in code from a magazine in order to have a rudimentary game to play. One comma out of place and the game wouldn't run, and you had to go through everything line by line... ah, good times.
@craign8ca
@craign8ca 9 months ago
The video shows the schematic symbol of a bipolar transistor. Later in the video, a drawing of a MOSFET (metal-oxide-semiconductor field-effect transistor) is shown. I know it's just a minor detail, but the schematic symbols are different. Even so, I always find your videos educational and entertaining. Thank you for making science understandable for all.