I think the silicon industry is the equivalent of the modern space race. It just goes to show what insane advancements science can make with collaboration and a good amount of financial incentive behind it. It never ceases to amaze me how much further we can push this single element and I'm curious what will lie beyond the silicon lands.
"Im curious what will lie beyond the silicon lands": I think this will give you an Idea: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-D--sSNKiVXg.htmlsi=duN1EHTAW4_9dpsg
This guy is just insane. As a materials science PhD student, I can appreciate how many hours of research and literature review go into making one of these videos. Keep up the good work bro!
@@mindnova7850 Doing so much with so much less is all about diminishing returns. The biggest influence on performance comes from parallel instruction execution, which has hard limits due to data- and control-path dependencies in programs. Core performance is no better than frequency multiplied by IPC. Frequency does not scale well: CPUs in the year 2000 had 5x lower frequency than today's, so most of the performance gain had to come from IPC growth. Has that even been 4x, or worse, since 2000?
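(To put rough numbers on that, a minimal sketch: the ~5x frequency figure is the commenter's, the IPC ratio is an assumption, none of these values are measured.)

```python
# Toy arithmetic: core performance ~ frequency x IPC. The ~5x frequency
# ratio is the commenter's figure; the IPC values are assumptions.
year_2000 = {"ghz": 1.0, "ipc": 1.0}   # assumed baseline core
today     = {"ghz": 5.0, "ipc": 4.0}   # assumed ~5x clock, ~4x IPC

perf = lambda core: core["ghz"] * core["ipc"]
print(f"single-core speedup: ~{perf(today) / perf(year_2000):.0f}x")  # ~20x
```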
It's "magic" because it's proprietary: less than 20% of it is published in patents, while the other 80% is simply under NDA. Then, to send a proper GDSII to this magical machine, you HAVE TO pay $80,000 per SEAT each year to the Cadence and Synopsys private IP mafia.
@@MicroageHD Well, that's the last bit of German engineering pride left. The auto industry is in shambles; except for the big three premium brands and Porsche, the rest will go bankrupt soon. All the nuclear plants are closed, there's no cheap gas and energy any more, you are finished. And you know, making incredibly smooth mirrors is not rocket science. The Chinese can already do it. They still don't have the rest of ASML's proprietary control algorithms, but give them a few years and you'll be surprised when ASML is no longer a monopolist.
I speak native Spanish and I couldn't make the reference until he mentioned it, tbh. Probably because I only watch content in English, and Spanish pronunciation never crosses my mind.
This channel has given me a passion for photolithography. I'm proud to say that as a second-year student I'm currently interviewing with onSemi, Microchip, Synopsys, and MKS, thanks to the interest you sparked in me for this field.
@@atharvabedarkar I think you need to look up the definition of the word "formerly". Here, I'll use it in a sentence to help you: "Intel was FORMERLY the leader in the industry. Now all the best people that FORMERLY worked at Intel work somewhere else."
This is by far the best video from Asianometry. I thought the videos on High-NA EUV would have already been challenging, but looks like the bar is now raised even higher. Kudos to the great work. Looking forward to more videos with such high quality content.
For a couple of years I had been scratching my head trying to figure out how feature sizes could be so much smaller than the wavelengths of light, even "extreme" UV. I met a guy in a brewpub who works for Nvidia who helped me get some understanding, but this added a bit more. Still learning. Thanks.
I appreciated your closing philosophical statement about using computers to design chips to make faster computers. It reminded me of the old "if you could make a machine that replicates itself but half the size, and the resulting machine does the same, how far could that recursion progress until the resulting machine is too small to work?"
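A minimal sketch of that recursion puzzle, assuming a 1-meter starting machine and a ~0.2 nm silicon atom as the floor:

```python
# How many times can a machine halve its own size before its parts are
# smaller than a silicon atom? Starting size and atom diameter are
# round-number assumptions.

size_m = 1.0                 # assume a 1-meter machine to start
atom_m = 0.2e-9              # ~0.2 nm silicon atom diameter

generations = 0
while size_m / 2 >= atom_m:
    size_m /= 2
    generations += 1

print(generations)           # ~32 halvings before hitting atomic scale
```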
This video is a great introduction for laymen into the history of litho for wafer fab. Still a bit technical on the jargon, but accessible to most who would pay attention.
Computers designing computers was the concept behind numbering computer "generations": 1st gen: vacuum tubes; 2nd gen: discrete semiconductors; 3rd gen: small-scale integration (SSI); 4th gen: LSI. Very self-referential. By now, it must be 20th gen or something.
I think we're still at the 5th gen; the 4th should be VLSI, with the 5th being chiplets, which is a work in progress... The 6th should be when we're completely surrounded by computers in every tool we use in everyday life... who knows...
I was taking a VLSI course back in the early 80's, and for a class project had to write and use a LISP design rule checker. At work the lab next to me had a DEC minicomputer. I asked for several seconds of CPU time to verify my chip design. It actually took several hours of CPU time. Not exactly lithography, just a lesson about how much computational power is required to design even a VLSI sized chip.
Or perhaps your algorithm just wasn't very efficient? :) Funny that in the 80's taking a VLSI course meant writing your own DRC. When I took it (~2008) it was doing full custom transistor layouts, schematic capture, and a lot of timing analysis and spice simulations. Also measuring (in simulation) static and dynamic power consumption. Seeing as the cad tools haven't changed much since then (at least at the level of what's used in an intro class), I doubt it's changed much since I took it.
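For flavor, here's a minimal sketch of what such a course-project DRC boils down to: a naive all-pairs minimum-spacing check between rectangles (made-up geometry, nothing like a production tool). The O(n²) pairing also hints at why CPU time blows up on real chips.

```python
from itertools import combinations

# Minimal DRC sketch: flag pairs of rectangles closer than MIN_SPACE.
# Rectangles are (x0, y0, x1, y1); values are made-up example geometry.

MIN_SPACE = 2.0

def gap(a, b):
    dx = max(a[0] - b[2], b[0] - a[2], 0.0)   # horizontal separation
    dy = max(a[1] - b[3], b[1] - a[3], 0.0)   # vertical separation
    return (dx * dx + dy * dy) ** 0.5

rects = [(0, 0, 4, 2), (5, 0, 9, 2), (5.5, 3, 9, 5)]
for a, b in combinations(rects, 2):
    if gap(a, b) < MIN_SPACE:
        print(f"spacing violation: {a} vs {b} (gap={gap(a, b):.2f})")
```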
Interesting comment about using computers to make computers faster. In 1978 my first job was making circuit boards for a Motorola 6802-based industrial computer. It controlled a retrofit to a K&S 478 wirebonder, making it automatic by targeting two points on the die and pushing the button. We had no drill to make the PCBs, so we took the computer we made, bolted on some big stepper motors, and turned the wirebonder into a drill for the circuit boards. A vacuum cleaner was the dust collector. When our customers toured the factory they would stop at this homemade hack of a machine and ponder. Many commented, "Wow, the machine is making itself!" Indeed...
It's worth mentioning that this will not have any major impact on actual wafer-production throughput. Some channels saw this Nvidia breakthrough news and got carried away with the idea that throughput would be 10x faster or whatever. This will help speed up design; the throughput stays the same.
Mathematically, a lot of these computations are numerical inversions. OPC could be considered a crude numerical inversion, depending on how exactly it's done. Typically, you have inputs and a mathematical model that gives an output. In an inversion, you know the output, but either the input or the model is unknown. If the initial paper used simulated annealing, the computations most likely minimize the L2 norm, which is a classical numerical inversion technique. I imagine they have been predominantly using Bayesian inversion in ILT. That gets very computationally intensive when you add complexity to Bayesian inversion, but it is more flexible, and uncertainty quantification comes for free because the method involves probability distributions. I don't know what cuLitho uses, and it's hard to verify on the internet. I was thinking of AI inference because NVIDIA has been using ML in GPU design recently. There are so many inverse problems in science: if it's a remote measurement, or you are trying to determine the interior of an object from an exterior measurement, then it's most likely an inverse problem.
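A toy illustration of L2-norm inversion, assuming a simple 1-D blur as the forward model (the kernel and grid size are invented; this is not a claim about what cuLitho actually does): recover a "mask" from a target "aerial image" by gradient descent on the squared error.

```python
import numpy as np

# Toy L2-norm inversion: recover a "mask" m from a blurred "aerial image"
# y = blur(m_true) by gradient descent on ||blur(m) - y||^2.

n = 64
kernel = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
kernel /= kernel.sum()

def blur(m):                     # forward model: convolution with the kernel
    return np.convolve(m, kernel, mode="same")

target = np.zeros(n)
target[20:30] = 1.0              # desired printed feature
y = blur(target)                 # the image we want to reproduce

m = np.zeros(n)                  # initial mask guess
for _ in range(500):
    residual = blur(m) - y
    grad = np.convolve(residual, kernel[::-1], mode="same")  # adjoint of blur
    m -= 0.5 * grad              # gradient step on the L2 objective

print("final L2 misfit:", np.linalg.norm(blur(m) - y))
```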
The more I learn about chipmaking from this channel, the more I wonder if I might have been more interested in a microelectronics engineering major than my computer engineering major. It's all so fascinating, but I must stay faithful and true to my computer architecture & organization... and I don't think I could handle the circuits math lol
Hope a video on chip architecture can be made. There was a video from CNBC (if I remember correctly) on ARM and how it moved forward after its acquisition by NVIDIA was blocked by regulators.
This is how I make sense of it. The fact that a photon is emitted from an individual atom implies the theoretical resolution limit of masks could reach the size of an individual atom regardless of the wavelength. (This assumes the etendue of the light source is the size of an individual atom). Working backwards, the photon wavefront emanating from the source atom must be duplicated at the absorbing atom. This is essentially what the mask design is trying to do. It’s more like a hologram than a stencil image being focused on the wafer. The wavefront reaching a point on the wafer must reach that point in phase from as large a solid angle as possible.
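For reference, the standard textbook limit these tricks push against is the Rayleigh resolution criterion:

$$\mathrm{CD} = k_1 \frac{\lambda}{\mathrm{NA}}$$

where $k_1$ is a process-dependent factor that resolution-enhancement techniques like ILT keep driving down; that is how printed features end up well below the wavelength.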
I just want to point out that depth of focus and depth of field are not the same... at least in photography. Depth of focus relates to the space behind the lens, and depth of field to the space in front of it. A longer focal length with a shallower depth of field will have a wider depth of focus.
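For what it's worth, the lithography cousin of that distinction is the Rayleigh depth-of-focus relation:

$$\mathrm{DOF} = k_2 \frac{\lambda}{\mathrm{NA}^2}$$

so raising NA to print finer features shrinks the usable focus budget quadratically, which is why depth of focus keeps coming up in this context.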
Reminds me of the technique used for inter-chip communication on computer motherboards. Basically, two chips "ping" each other and "learn" how to distort their own output so the receiving end gets the cleanest signal possible -> introduce distortions at the source so they overlap/interfere with each other along the conductor's path and the received signal is clean. It was probably Intel who developed this technique, but I can't remember its name...
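A minimal sketch of that idea, transmitter pre-emphasis of the kind used in SerDes link training (the channel model and tap values are made-up assumptions, not any specific standard):

```python
import numpy as np

# Pre-distort the transmitted symbols with a 2-tap FIR filter that
# overdrives transitions, countering the channel's low-pass smearing.

bits = np.array([1]*5 + [-1]*5 + [1, -1, 1, -1], dtype=float)

def channel(x):
    # crude lossy channel: one-pole low-pass that smears each symbol
    y, state = np.empty_like(x), 0.0
    for i, v in enumerate(x):
        state = 0.6 * state + 0.4 * v
        y[i] = state
    return y

taps = np.array([1.4, -0.4])            # assumed pre-emphasis taps
tx = np.convolve(bits, taps)[: len(bits)]

# margin > 0 means the symbol is received with the correct sign
print("worst margin, plain:       ", (bits * channel(bits)).min().round(3))
print("worst margin, pre-emphasis:", (bits * channel(tx)).min().round(3))
```

Without pre-emphasis the worst-case margin goes negative (a bit error at the transition after a long run); with it, every symbol keeps the right sign.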
I feel a bit sad for Luminescent Technologies, coming up with a revolutionary idea two decades too early. I'm also worried that this might cement NVIDIA's monopoly in some segments of GPU use (in this case, I assume something like HPC). I really hope that not only Radeon Technologies but also a lot of 3rd-party companies are working on their own patents. And when it comes to RTG, there is always a chance (given their record) that they will make it open source. Because if cuLitho (or however NVIDIA spells it) is monetized or restrictive in use (proprietary, with control over who can use it for what), then again, it might end up as a monopoly. Then again, credit where credit is due: they have amazing researchers, and they deserve praise (and a raise).
@@rightwingsafetysquad9872 No. I hope they make great products. I just hope they don't get too many monopolies (it's almost guaranteed that they have, or will have, one in some niches, but once a monopoly reaches critical mass they won't care: look at their gaming GPU pricing, it is not "sane"). Two different things.
NVidia's weight in HPC has become so enormous that this little niche of semiconductor design doesn't really move the needle much one way or another. AMD could get their shit together but doesn't; and of course there's Intel with Xeon Phi, but they think that whoever hasn't run away to NVidia cannot move, that they have a captive customer base. I don't know that this sort of complacency is a good way to go about things; it usually doesn't end well. If anything, this here goes to show why NV wins: they keep pushing the field forward, they are not complacent even for a second. Perhaps they deserve this monopoly, who knows, even though we all know it's not good.
Quality content, as always. *sigh* Alright, I do recall that back sometime around '99-'01, Intel's Celeron gained Cu interconnects in some iteration. Yes, it was referred to as the "CUleron". Yes, the jokes were made.
11:55 Manhattan Geometry refers to a regular grid of rectangles (typically squares, never triangles). It takes its name from the regular street grid pattern of most of Manhattan Island in New York City.
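In layout terms, "Manhattan" just means every polygon edge is horizontal or vertical; a minimal sketch with made-up coordinates:

```python
# Manhattan check: every edge of the polygon must be axis-aligned.

def is_manhattan(polygon):
    """polygon: list of (x, y) vertices, closed implicitly."""
    for (x0, y0), (x1, y1) in zip(polygon, polygon[1:] + polygon[:1]):
        if x0 != x1 and y0 != y1:   # a diagonal edge breaks the rule
            return False
    return True

rect = [(0, 0), (4, 0), (4, 2), (0, 2)]
tri  = [(0, 0), (4, 0), (2, 3)]
print(is_manhattan(rect))  # True
print(is_manhattan(tri))   # False
```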
Immersion lithography is exactly the same concept used with light microscopes. When you want to look at a sample under the highest magnification possible with a light microscope, it is common practice to place a small droplet of a very specific kind of oil, with a very specific refractive index, in the light path. The light travels from the source, underneath the sample, through the prepared sample on a glass slide with a thin glass cover slip; the oil droplet contacts both the top surface of the cover slip and the magnifying lens. The oil is carefully chosen so that the light rays pass right through the oil/glass interfaces without being wildly refracted. It's basically a clever way to collect and collimate limited light to provide more effective resolution at very high magnification (high for light microscopes, at least).
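In textbook form, immersion works because numerical aperture carries the refractive index $n$ of the medium between lens and sample:

$$\mathrm{NA} = n \sin\theta$$

Replacing air ($n = 1$) with water ($n \approx 1.44$ at 193 nm) or microscope immersion oil ($n \approx 1.5$) raises the NA, and with it the resolving power.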
11:02 At the edge of digital reality, certainty and uncertainty flip... Great background setup to this point. Very cool to see people leveraging uncertainty. We just embrace the facts when it works... otherwise it's insanity, right?
Isn't ILT the reason why Intel thought they had quad patterning "solved" for their 10nm process, resulting in the disastrously late transition from 14nm, many years later than initially announced?
Interesting. I'm a little scared about how dominant NVIDIA has become in anything GPU- or AI-related in the past few years, but you can't deny they're doing great work so far. Another thing: I'd like to know whether NVIDIA is thinking about nanoimprint lithography as well, since it seems that technology might eventually take a decent portion of the market for older nodes. Great video as always!
Nvidia mostly focuses on advanced nodes, so I don't think they will consider nanoimprint lithography until it can beat EUV at those nodes with the same or better throughput, fewer defects, and lower power consumption.
Rumor is that if AMD fails with their next-gen GPU, they will give up on the high-end GPU market. AMD is no match for Nvidia. We'll have to deal with this pseudo-monopoly for a while.
I know virtually nothing about this field. My question is why not use an electron beam like an electron microscope instead of UV light? Can an electron beam be made smaller than the UV wavelength?
@@SoundsLegit71 It can be, and it has been done; the problem is manufacturing speed. An electron beam is narrow and takes a long time to work over a modern silicon wafer, which is 300 mm in diameter, around 700 cm² in area at current sizes. Think of how many beams, each manipulating features at electron-beam scale, you would need in parallel to match the 10-15 second exposure time (I am assuming this) per stage of an 8- or 15-stage process on an EUV machine.
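A back-of-envelope version of that arithmetic (every rate below is an assumption for illustration, not a vendor number):

```python
import math

# Rough throughput comparison: single-beam e-beam direct write vs. an
# assumed per-wafer EUV exposure time. All rates are made up.

wafer_d_cm = 30.0
wafer_area_cm2 = math.pi * (wafer_d_cm / 2) ** 2   # ~707 cm^2

euv_exposure_s = 12.0          # assumed seconds per wafer pass on a scanner
ebeam_rate_cm2_per_h = 1.0     # assumed single-beam direct-write rate

ebeam_hours = wafer_area_cm2 / ebeam_rate_cm2_per_h
beams_needed = ebeam_hours * 3600 / euv_exposure_s

print(f"wafer area: {wafer_area_cm2:.0f} cm^2")
print(f"single e-beam: ~{ebeam_hours:.0f} hours per wafer layer")
print(f"parallel beams to match a {euv_exposure_s:.0f}s exposure: ~{beams_needed:,.0f}")
```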
I know this is beside the point of the video, and I'm sorry. But I love that you used a gif from Prodigal Son; I feel it's a really great, underappreciated series.
So far as I know, the primary company that makes the tools which can draw these patterns is IMS Nanofabrication, headquartered in Austria. Their Multi-Beam Mask Writers are revolutionary.
I really like this discussion, and would love to know how these enhanced designs actually impact chip performance: computationally, in power/heat, and in materials optimization. I'd also love to go a bit deeper into the physics of how/why the enhanced designs look so radically more elegant than the non-enhanced versions. Thanks, Ken.
B. E. A. Saleh... that name rang a bell. I went to pull out my old photonics textbook that I haven't touched in almost a decade, and it turns out that WAS the same Saleh! :D
With silicon atoms being 0.2 nanometers, it's hard to understand how much further we could push silicon technology before having to jump to something like gallium nitride for improved frequency or power draw. There is only so far you can push new gate designs and better optical systems and even then, we're already hitting diminishing returns with heat and frequency.
Chip design and manufacturing are insane. They can make structures at the atomic scale. How does etching still work in the nanometer region? Liquids have surface tension; how can they reach the desired points to etch?
But surely there are many common blocks used in a design, so their masks can simply be called up from a library and used without extra computation? Surely it would only be the new elements, and the interconnections of "blocks", that actually need to be computed. Thank you for a fascinating video; I'd love more detail on the techniques, if they could be explained as clearly as you have managed here!
What you suggest was somewhat true when OPC was first introduced. If the distortion was caused only by the immediately adjacent features within the library cell (or block), then the appropriate OPC within each cell could be predetermined and replicated for each instance. But as feature sizes and separations continued to shrink for each generation, the "area of optical influence" continued to increase so that features of adjacent cells (and blocks) would affect each other. What you suggest is certainly true for memory products (which are highly repetitive) and is surely used by memory cell designers.
I wonder what the optimizations would look like if you allowed the calculation to run in three dimensions 🤔 If the light is being focused down, that seems to imply it's coming into the design pattern from a spectrum of angles? So what if you printed a design image with a few layers? Could you account for the angle difference caused by the focusing to produce an anti-lensing effect, using kind of a shadow-box design? Very interesting that these idealized patterns have kind of a moiré quality to them.
It's most likely to account for the effects of diffraction; that's why you get these kinds of moiré patterns. This is just my conjecture, but the principle of Fresnel zone plates might be coming into play here, where the mask itself acts as a lens to focus light into the exact shape you want. Though, disclaimer, I'm not an expert on this topic at all. Huygens Optics (the channel) has a really good video on this effect.
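For reference, the standard Fresnel zone plate relation (textbook optics; whether it truly applies to these masks is, as the commenter says, conjecture): the radius of the $n$-th zone boundary for focal length $f$ and wavelength $\lambda$ is

$$r_n = \sqrt{n \lambda f + \tfrac{1}{4} n^2 \lambda^2} \approx \sqrt{n \lambda f}$$

which is why concentric ring-like patterns can behave as lenses.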
Maybe it just goes through the circuits to make the circuits; it's all the same thing, so it can just check itself. Kinda like trying to draw yourself just by feeling around and maybe looking or measuring. Kind of like how da Vinci was able to draw extremely accurate anatomical pictures before his time... maybe the AI would need to "dissect" other systems. Anyway, computers are already designing themselves. Once you hit a billion transistors, they'll never really all be checked and mapped. As far as I understand, that is.
In the early 1980s I used an Apple ][ to "cheat" on my calculus & physics homework. I wrote a program to iteratively "guess" an answer, then "calculate backwards" to the problem, to "home in" on the correct answer. **It worked!** I got great grades on homework (but did poorly on exams and essentially "flunked" out of the engineering program, which wasn't a "bad" thing, as I've become a **millionaire** as a **programmer** instead!) :)
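A minimal sketch of that guess-and-check idea, using bisection on a hypothetical physics problem (the problem and numbers are invented, not the commenter's actual homework):

```python
import math

# "Guess, then calculate backwards": find the launch speed v such that
# a projectile travels a desired range, by bisecting on the forward model.

G = 9.81

def shot_range(v, angle_deg=45.0):
    a = math.radians(angle_deg)
    return v * v * math.sin(2 * a) / G     # forward model

def solve(target, lo=0.0, hi=1000.0, tol=1e-6):
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if shot_range(mid) < target:       # guess too low: raise lower bound
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(solve(150.0))   # speed giving a 150 m range, ~38.36 m/s
```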
Thank you for the video, but there are still low-frequency noises that are very audible with good headphones (boomy popping sounds). They are distracting. Could you apply a high-pass filter to the audio during video editing, please?
I'm 32 and I would give my soul and sanity to go back to school for electrical engineering just to participate, as an underpaid intern, in this industry. Photolithography is the most extreme manufacturing there is. How does Asianometry seem to know everything about the history, concepts, science, and terminology of the semiconductor industry? He sounds too young to know as much as a veteran of the industry. Mind-blowing.
Why does the slide at 1:57 show Hopper at 42x vs. Ampere at 23x? Wouldn't that just make it roughly equal to Ada Lovelace? Because if the 3090-to-4090 uplift is about 65%, Hopper would be around the same? Just a question, or is it more about their new way of processing, and another reason to add $$ to the price of the chip? Have a great day everyone.
This reminds me of how precision improved in machining, as machines progressively built better machines. In the '90s, I heard an aside that Apple was designing its next Mac and sought to buy a Cray supercomputer to facilitate the design. Cray supposedly commented that he was designing his next Cray on a Mac. Circle-of-life stuff there. 😂
It feels like some of this computation could be "cached" as reusable features. Additionally, simple proximity rules could probably be established for adjusting the ILT boundaries of a feature. That way you're essentially pre-calculating a lot of the work as you go. Then you'd only need a final pass for validation and adjustments to complex intersections.
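A minimal sketch of that caching idea: memoize an expensive correction, keyed by the feature plus a coarse proximity context (the function and keys are hypothetical stand-ins, not real OPC/ILT APIs):

```python
from functools import lru_cache

# Identical feature+context pairs hit the cache instead of re-running the
# expensive correction; only novel contexts pay the full cost.

@lru_cache(maxsize=None)
def corrected_shape(feature: str, neighbor_context: frozenset) -> str:
    # stand-in for an expensive lithography simulation + optimization
    return f"{feature}+opc[{','.join(sorted(neighbor_context))}]"

a = corrected_shape("via_1x1", frozenset({"line_N", "line_S"}))
b = corrected_shape("via_1x1", frozenset({"line_N", "line_S"}))
print(a is b, corrected_shape.cache_info())   # True, 1 hit / 1 miss
```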
11:01 Whoa! The ILT yields totally non-intuitive yet highly functional results. This leads one to speculate about what "Artificial Intelligence" might yield: 1) The good: even greater, yet less understood, optimizations. 2) The bad: malign functionality that only an AI "being" can access, and will do so at a time of its choosing.
12:56 One day we'll have in-space fabs for chips. These will use lithography, but with a mask that is several hundred meters in size. The light will go through lenses similar in size to the mask, but these lenses will be fluid-tension based, almost completely self-correcting: spin fluid in space and you basically get a lens. Transistors operating on microvolts, 50-atom-wide circuitry. It makes a Ryzen 9 look like dinosaur bones when it comes to computational effectiveness.
50-atom-wide wires were already in use. 7 atoms span about 1 nm, so 50 atoms span about 7 nm; that is what a 7nm process is capable of printing on Si. Now 3nm is in use, which allows wires about 20 atoms wide.
Still crazy to me that it took the 3D movie industry a decade or more to use GPUs for rendering, when it's literally what a GPU was designed for.
SGI was built around the job of processing high-resolution digital animation for the movie industry, where every pixel was antialiased by combining 64 separately computed sub-samples (lots of computation). They created and used OpenGL, which is what many GPUs now use. SGI is how Nvidia got their start: they cross-sued each other, and SGI agreed to let Nvidia use their IP plus cherry-pick their best engineers. It was such a lopsided deal that I wondered if the new CEO deliberately sabotaged SGI as a potential competitor to Microsoft (where he came from). That CEO also created a joint effort with Microsoft on an alternative to OpenGL (called Fahrenheit), which was eventually abandoned, but Microsoft evolved it into DirectX, which competes against OpenGL. Another vice president of Microsoft was made CEO of Nokia and immediately sabotaged their operating system, pushing a Microsoft cell phone OS instead (the CEO hired from Microsoft destroyed Nokia, same as the CEO hired from Microsoft destroyed SGI).
@@douginorlando6260 OpenGL is now cared for by the Khronos Group, an industry consortium, which developed the cross-platform Vulkan API from AMD's Mantle (which also stimulated DX12). There are still OpenGL programs, like Second Life, in use today. Back in the '90s, Silicon Graphics was one of many UNIX workstation vendors, but had expensive machines. It was the Pentium and PC clones running 32-bit Windows that totally undercut the workstation market, which suffered from fragmentation and so had limited software. ISVs ported their software to Wintel, and those desktops were much cheaper and more familiar to office users than specialist workstations, which became unviable to develop.
Can you provide a link showing that Luminescent Technologies was split between Synopsys and KLA? I can see it was acquired by KLA, but I find no reference to Synopsys at all.
I can't answer your question directly, but KLA is basically an inspection company (for masks and wafers), and Synopsys is an electronic design automation (EDA) company. Both companies need this software, as do their customers: circuit designers, mask shops, and wafer fabs. I'm sure KLA licenses this ILT software to all parties.
It makes me sad that we don't have Chinatowns in Europe. Well, I know there are one or two, but in none of the large cities that I tend to be in. Or not even necessarily Chinatowns, but some sort of cute, weird enclaves that straddle the line between a theme park and a lived-in, livable space.
All the NVidia libraries start with cu for CUDA (Compute Unified Device Architecture), their current overall parallel API and GPU architecture name: cuBLAS (basic linear algebra subroutines), cuDNN (deep neural networks), cuSPARSE (sparse array computation), etc., so the butt thing is an unfortunate coinkydink.