@@carstenraddatz5279 If the average yield is 80%, you're going to have 20% of the chip be dead weight. I don't see the benefit of this design over just breaking the wafer down. You're not worried about size or space requirements at that scale.
@@richr161 Worries exist, especially with this type of chip. However at that scale you are very worried if you are TSMC and only get 80% yield. Customers won't come back if you don't improve that. Realistically you're aiming for north of 97% yield or so, I hear.
@@carstenraddatz5279 TSMC's published average yield is literally 80%, with peaks of greater than 90% on a leading node. I'd assume the nodes they keep around for companies who don't use leading edge are in that range, with all optimization going into yield rather than performance.
Moore's law is based on the observation that transistor density used to double every 18-24 months. This product does not even use the latest process. If anything it indicates that Moore's law is no longer applicable. Moore's law was never about performance.
Strictly speaking, Moore's law was originally about the actual number of transistors per chip. Originally, TTL, NMOS, and similar chips were 5x5 mm at most, so chip size was fairly limited. Process generation improvements are of course what kept this cycle going until maybe 2012, but after that it's been a combination of increasing chip size and shrinking the transistors. Moore's law was never thought to apply to one-square-foot silicon chips.
If only Cerebras hardware had OpenCL support and wouldn't need an own proprietary language! Would open doors to HPC/simulation workloads way beyond AI.
Bitcoin miners are now selling the heat generated into an industrial process. Datacentres may soon follow suit. They really need a way to recapture the energy costs
It is about the same as a clothes dryer, which is far less than I thought it would need. 24kW isn't too bad; a basic watercooling system with a pre-chiller radiator would be fine. The 5V or 3V bussing would be nuts though: the current would be 8000 amps at 3V and just under 5000 at 5V. 😮
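The current figures above fall straight out of I = P / V. A quick sketch to check them (the 24 kW figure is from the comment; the 3V/5V bus voltages are hypothetical, not Cerebras' actual power delivery scheme):

```python
def bus_current(power_w: float, voltage_v: float) -> float:
    """Current (A) needed to deliver power_w watts at voltage_v volts (I = P / V)."""
    return power_w / voltage_v

power = 24_000  # watts, per the comment above

print(bus_current(power, 3.0))  # 8000.0 A at 3 V
print(bus_current(power, 5.0))  # 4800.0 A at 5 V ("just under 5000")
```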
Yesterday when this news broke I looked for a video on it, couldn't find one, so I watched your vid on the WSE-2. Now today you deliver on the news regarding the WSE-3. Nice work
I remember at one point in the 90s changing the jumpers on the motherboard to overclock my Pentium from 60 MHz to 66 MHz, but never finding a way to cool it enough to remain stable. At the time I would've been thrilled to have that 10% jump in performance. My brain may have melted knowing that in 2024 I'd have multiple machines (including portable ones!) that are not only multi-core, but run at BILLIONS of cycles per second.
it's crazy to think of the amount of work that goes into creating these and then to sell 9 or 10 of them a year. it shows how niche the market is for this kind of processor
Can't wait to see what kind of performance boost the next gen Wafer-Scale Engine 4 will bring us!!🤤 Imagine that it will be using 2nm Forksheet GAA or 1nm CFET tech
@@n00blamer LOL what you posted is NOT Moore's Law. It's the simplistic theme-park version spread by the media. The actual law is distributed over his 1965 article "Cramming more components onto integrated circuits", where he referred to the complexity in two-mil squares. Please do your own homework LOL
@@NoSpeechForTheDumb In his original article, Gordon Moore stated the essence of what became known as Moore's Law with the following quote: "The complexity for minimum component costs has increased at a rate of roughly a factor of two per year... Certainly over the short term this rate can be expected to continue, if not to increase." This statement captures the crux of Moore's Law, highlighting the exponential growth in the number of components (transistors) that can be integrated onto a semiconductor chip at minimal cost, with the expectation that this trend would persist into the foreseeable future.
No potato for sir. With these kinds of chips I can't get rid of the feeling I had around 1990: the 80386 was kinda in our grasp, but then RISC told us: nope, you won't. This feels kinda the same again.
I do remember cerebras claiming that their WSE 2 was better than Nvidia's offering at the time but Nvidia seems to have the most hype out of all the companies involved in AI. My understanding is that having all the processing units on one massive wafer just makes everything move faster instead of having many discrete GPUs connected together
It uses a lot of power, but the sheer scale of it makes it more efficient. For example, the RTX 4090 tops out at around 100 TFLOPS, meaning it would take 10 of those to reach 1 petaflop. This chip does 125x that, so you would need over a thousand RTX 4090s to equal the same processing power. Not to mention those 4090s would draw over 400,000 watts of power, while this requires only 25,000. That alone gives it a huge win over Nvidia, to the point that if anyone is even still using Nvidia for this, they must be laundering money, because ignoring the savings of roughly 375,000 watts of continuous draw can't be anything but a money laundering operation at that point.
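The comparison above can be checked with the figures quoted in this thread (all of them commenter estimates, not vendor-verified specs: ~100 TFLOPS and ~400 W per RTX 4090, ~125 PFLOPS at ~25 kW for the wafer):

```python
# Figures as quoted in the comment thread (assumptions, not verified specs).
gpu_tflops = 100       # RTX 4090 throughput, per the comment
gpu_watts = 400        # RTX 4090 power draw
wse_pflops = 125       # WSE-3 throughput claim
wse_watts = 25_000     # WSE-3 power draw

# How many 4090s to match the wafer, and what they would draw.
gpus_needed = wse_pflops * 1000 / gpu_tflops
cluster_watts = gpus_needed * gpu_watts

print(gpus_needed)     # 1250.0 -> "over a thousand" 4090s
print(cluster_watts)   # 500000.0 W for the GPU cluster vs 25,000 W for the wafer
```

So on these numbers the GPU cluster draws roughly 20x the power for the same nominal throughput, which is where the "savings of 375,000+ watts" claim comes from.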
Imagine if every cold country used these to heat buildings in winter. You could reduce heating costs to zero, and really get both compute and heat in virtually perfect harmony.
What kind of software / framework do they provide? I take it, it's not PyTorch or JAX? How hard is it actually to implement those models and the training code?
That single piece of silicon uses more power than my 200 amp and 180 amp welders combined, even when maxed out. In fact it uses more than my entire house does 99% of the time. The stitching of reticles is truly a remarkable innovation and something I want to learn more about. If I had to guess, I'd think there'd be a buffer in from the edge of the conventional masks, then a second 'stitching' mask would be used to overlap reticles and pattern over them in a manner reminiscent of multi-patterning in conventional lithography. Regardless of how it's done, it's truly remarkable the level of precision involved, and the fact that they can yield something this big on 5nm is actually insane. It seems they've exceeded their own expectations from what they initially set out to achieve. They were initially talking about being a few nodes behind, but they're now basically on the leading edge and only one step from the absolute bleeding edge.
one thing I really want to see are smaller AI chips that are for personal/commercial use. I've messed with ai image generation and some other ai stuff but you can't really go any higher than 520x520 image quality with a middle ground GPU. if there are any products already like this please tell me.
@@unvergebeneid The yield rate for such a gigantic single piece of silicon with over a trillion transistors must be really low. I mean I think the yield rates for standard size CPUs are only in the range of 10-20%. The chances for one or more defects to be present on a giant chip that is 84x larger in size must be enormous.
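The intuition in that comment matches the classic Poisson yield model, Y = exp(-A · D0), where A is die area and D0 is defect density. A minimal sketch with illustrative numbers (the defect density and die area below are assumptions for demonstration, not TSMC data; the "84x" factor is from the comment):

```python
import math

def poisson_yield(area_cm2: float, d0_per_cm2: float) -> float:
    """Fraction of dice expected to be defect-free under a Poisson defect model."""
    return math.exp(-area_cm2 * d0_per_cm2)

d0 = 0.1                    # defects per cm^2 (assumed for illustration)
normal_die = 5.0            # cm^2, a largish conventional die (assumed)
wafer_scale = normal_die * 84  # the "84x larger" chip from the comment

print(poisson_yield(normal_die, d0))   # ~0.61: most conventional dice are clean
print(poisson_yield(wafer_scale, d0))  # ~6e-19: a defect-free wafer-scale die is essentially impossible
```

This is exactly why wafer-scale designs rely on core-level redundancy rather than on getting a defect-free wafer: the probability of zero defects falls off exponentially with area.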
Wait? Moore's Law was never size limited? So it's not just density alone?

I am most excited about the Qualcomm Cloud AI100 Ultra card tbh. It seems to be the best solution for workstation researchers who mainly care about running evals, which purely require inference. And 128GB per card... it would take like two A100s to match, and those easily cost $30k+. Please let Qualcomm know we want them! I am almost ready to pay $10k for a single card... if they can sell it to me, prove the software works, and finally release some accurate benchmarks. Like I want to know what a single card can do for throughput with a 70B model at FP/BF16.

Can they donate a WSE (1, 2 or 3) to Fritz for dieshots?

Also the door behind you spells MOOR - surely that's on purpose
Do they also make chips out of single or a few tiles? Like from outside of the square. It's an interesting method, gets one thinking about how else it could be applied, like a CPU getting 2 or 4 still-attached tiles instead of 2 or 4 of the same chiplet. Also, imagine if we were using 450mm wafers; that might not have been a profitable transition for most uses, but for this and silicon interconnect fabrics it would've been different.
It's a single wafer... normally chips are made from a wafer just like this and then diced up into smaller chips. The reason chips are normally limited to smaller sizes is that the projection system used to image the chip only covers a small portion: the rectangular areas you see on this wafer. Since they are doing all this on the same wafer, though, they can put ultra-high-bandwidth links between the normal reticle scan areas and link it all together. There is far more bandwidth available here than you would normally get even through an interposer, since all the layers are there instead of just one layer through an interposer. Making GPUs like this might actually make sense... that said, planar latency on this thing is probably quite "bad". Part of the reason the vertically stacked cache on Ryzen X3D has low latency is that going vertical is faster than going sideways twice as far.
The only thing missing from this video, and all of your other videos for a while now, is the meme-worthy "What's your minimum specification?" jingle you used to have... is there anybody else missing that or is it just me?
okay but what's the gemm/W, how are you guys solving non-stationary dataflow. inter core communication has an incredible power overhead. not to mention the developer nightmare of having to debug and troubleshoot non-deterministic compilation tools.
Routing is not "just that simple". If you turn off a core, it will warp the wafer from thermal stress. Even with their thermal solutions I would bet that microfractures will render a wafer dead after a few months or a year.
@@TechTechPotato but in that video you said ~"that's why I don't expect to see round chips anytime soon, unless someone does a wafer-scale round chip". So since this is a wafer-scale chip, why not leave the edges untrimmed and use as much of the wafer as possible?
The programming model changes a fair bit, especially with chained workloads: the edge and corner cores end up burning power and being underutilised. Also, cutting the thing would be trickier and more expensive, and the same goes for the cuts for power and IO. A rectangle keeps the shoreline identical and easier to design for.
Moore's Law is about transistor density in a monolithic piece of silicon. There are creative ways of driving performance despite the end of Moore's Law!
The numbers here are less important than "can you buy it? can you buy enough of it? can you easily deploy it instead of competing technologies like gpus?". I guess the answer is no, since GPU prices are going up, not down.
Cerebras increased unit production 8x in 2023, and they're increasing it 10x this year again. They've already deployed over 200 and got an order for another 400 from one customer.
4 trillion transistors; what is the optimized use case for this computing chip? If we compare this one to Nvidia's solutions, what are the main differences? Thanks 🙏
I wonder if these sorts of Wafer Scale Engines can be combined with advanced packaging / memory stacking? To my understanding, large AI models are bottlenecked by memory capacity and throughput, so adding a closely-bundled cache or HBM stack on top could increase performance by a lot. That said, with the energy this thing uses, powering and cooling extra memory stacked directly on top might be a problem. Maybe if it has separate power delivery, fluidic cooling channels through and between the chips, etc? There's probably high-end customers who would want that if it gives significant advantages over H100s for their applications.
One of a kind, a very unique solution only from Cerebras. They found a hole in the market, otherwise they would be out of business by now, and with inference cards and a rental model they can monetize pretty well too.
You've got it all wrong. Cerebras' WSE-3 chips will be used primarily for training, not for inference. They sell it as a whole supercomputer system.
@@catchnkill "Greetings to Chinese state hackers!" - u got it wrong and obviously u r not reading that I mentioned "inference cards" mentioning Qualcomm ASICs. Nice trolling though...
The difference though, Mr. Potato: how much more compute power does that 80 MW get them compared with one of the national labs' top systems that doesn't integrate a Cerebras WSC unit (Wafer Scale Compute unit - I know *_technically_* they're called "engines", but 🙄 it's a MASSIVE compute unit, not simply an engine performing classical work; I'm jaded at the marketing department) into their supercomputer? I have read that the national labs have started mixing these monsters into their design architecture. That sucker looks about the same size as one of Seymour's custom CPUs from when he switched over to gallium arsenide for the Cray-3 & Cray-4. RIP. Way ahead of his time. I just wonder how much more powerful (and power efficient) our tech would be today had the national labs not had the budget cuts and cancelled their supercomputer orders (well, and had he not died in that car accident in 1996). We might all have GaAs compute cores by now cranking 40+ GHz. 😅
Yield doesn't mean defect-free. TSMC N5 runs about 50 defects per wafer, and WSE-3 has around 9000 redundant cores, so cores with defects can be disabled and routed around. So there is logic redundancy.
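A quick sketch of why that redundancy budget is so comfortable, using the figures from the comments above (the number of cores lost per defect is my assumption; a single defect typically lands in at most one or two cores):

```python
defects_per_wafer = 50        # TSMC N5 figure quoted above
cores_lost_per_defect = 2     # assumed worst case: a defect kills a core or two
spare_cores = 9_000           # redundant cores on WSE-3, per the comment

worst_case_dead = defects_per_wafer * cores_lost_per_defect
print(worst_case_dead)                 # 100 cores lost in the worst case
print(spare_cores >= worst_case_dead)  # True: ~90x more spares than needed
```

Even under pessimistic assumptions, the spare-core pool dwarfs the expected defect count, which is how the effective yield stays near 100%.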
That is not only wafer scale, that looks like it used an entire 12 inch (300 mm) wafer. The fact that they can make a single IC die that big shows how far we have come. I can only imagine the amps required and heat extraction required for a CPU that size while running at full power. 😮😮😮
How much does a 5nm process 300mm wafer cost at TSMC these days? $15,000? That's a heck of an expensive chip even with no margin added for R&D, marketing, manufacturing outside the fab, distribution, sales, profit, etc.
@@ironicdivinemandatestan4262 The chip must really be worth it to have such a valuation. The distributed memory and sheer bandwidth are insane compared to H100 clusters and others.
Cerberus (also spelt Kerberos) is a vicious three-headed dog in Greek mythology, who guards the entrance to the underworld. He allowed the souls of the dead to enter Hades but prevented the living (except for a few exceptions) from leaving. Coincidence ¿???¿ 🔥🥵😱👽🤖
Thermal expansion issues are gonna be a bish. CPU load across the "tile" will have to be carefully balanced to even out the temperatures. Oh, and you cannot mount it onto a single substrate, it would have to be split into smaller segments.
Seems pretty silly to make it from a single wafer. A better option would be chiplets on a silicon base for interconnects. This would address defects, since if a chiplet has a defect you toss only the chiplet and not an entire 12 cm chip. Or are Cerebras WSEs actually composed of chiplets?
The wafer is designed to route around defects. There are 45-50 defects per wafer, and any 'dead' cores get disabled. There are 1000+ extra redundant cores for this among the 900k cores, so the actual yield is near 100%. Defects in SERDES or in voltage/frequency take some yield, but not a lot. I do say some of this in the video quite early on.
@@TechTechPotato Not really practical. Using chiplets would reduce costs & provide defect-free processors. Even with some extra cores, sometimes the defects will be in something that isn't redundant & cannot be disabled/bypassed. I betcha in a future release they'll switch over to a chiplet design. Overall the biggest problems with super-large processors are heat dissipation & getting enough power. I suspect that there is a limit that cannot be avoided.
@guytech7310 they're showing it to be practical. It's a team with decades of experience, and they've sold over a billion dollars of hardware already. Yes, there will be some defects that can't be bypassed, hence why yield is *near*, not exactly 100%. Heat and power is also solved. They have patents.
I don't understand why that big chip wouldn't be better as a bunch of chiplets that could be binned by quality; then you wouldn't need to worry about _rooooooting_ around defective cores. Might also help with heat dissipation.
The extra routing is built into the architecture to streamline it. Cleverer people than us have figured it out, and the benefits are power and latency.