I think the fact that, without AMD, we would still be stuck with 4 cores and 8 threads from Intel (maybe 6 cores and 12 threads max, but at around £500) has to be the topic... I think the Zen project as a whole deserves the credit, as the foundation blocks for Zen 2 were laid strong. When you see *predicted* performance jumps so big that people disregard them on that basis alone, it just shows that the thinking AMD did years ago has now come back to reward them, and deservedly so.
People couldn't comprehend the impact of having more than one core. Why have two CPUs instead of one fast one? people asked. The closest image people had was dual-socket systems, and those were expensive and inefficient. And Intel spun hard on that too. Pentium 4 was an insanely strong brand despite its shortcomings.
He's absolutely right though. Think back about ten years: that's the time of the first generations of the Core i series, with pretty much the same design that we had until Ryzen.
I'll say that last image looks like a good false-colour image of the Matisse chip. What you're seeing there is a centre trace line; assuming those line up with what is in the Zen 2 layout, then I agree, there must be a multi-mode Infinity Fabric link in the chiplet, up to 4 of them. And this might explain the IO die size in Rome: 4 complete replicas of the IO, cache and memory logic to allow 4 links to each chiplet. With that much logic and capacity, the size of the IO die makes much more sense.

Instead of going chiplet-to-chiplet, each Zen 2 chiplet likely has multiple links to the Rome IO die: 4 IO links, to four different segments of the IO die, for each chiplet. Meaning that massive die in the centre of Rome can handily link each core to multiple resources simultaneously, whether that link is to transfer from one chiplet to another, from chiplet to memory, or from chiplet to the PCI bus. It can have 4 links to whatever each chiplet needs depending on the application and execution requirements. (I fully expect Windows' drivers to really treat this poorly thread-wise until someone calls them on their poor software.) I'd wager the same thing will happen in Threadripper with much the same end result.

And the reason? When you have more than 2 processor chiplets, it makes far more sense to centralize the resources to better control which workloads are sent where and what resources are accessed. So it might have some latency, but at least you won't have the latency of a workload having to go through multiple chiplets to get to where it needs to go.

On a Matisse desktop with only two Zen chiplets, it makes sense that there would be two channel links to the other chiplet, and two channel links to the desktop IO die. Which means the IPC performance we're seeing out of a single Zen chiplet paired with the IO die is actually probably less than what we'd see with dual 4c/8t chiplets, as you say the final production will be. (Although I really can't see AMD shutting down 6 cores out of 8 to make a 4c/4t or 4c/8t low-cost Ryzen 3 unless TSMC 7nm is really bad for defects.) There were only two links being used to the IO die, and the other two weren't going anywhere.

It's also important that if for some reason a chiplet is saturating its two links to the IO die, it could (at a slight latency cost) use its dual links to its brother chiplet to get data back to the IO die if that chiplet isn't saturating its own IO links; there's a rough sketch of that idea below. It's better to retrieve data over a roundabout path when necessary than to not have the bandwidth to feed the processor cores at all. Just my thoughts on it. I continue to believe that the IO die is literally the most important chip design decision AMD has ever made.
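To make that roundabout-path idea concrete, here's a toy Python sketch. Everything in it — the link counts, the per-link bandwidth, the routing rule — is an assumption pulled from the speculation above, not anything AMD has confirmed:

```python
# Toy model of the speculated Matisse link topology: each chiplet has
# direct links to the IO die plus peer links to its sibling chiplet.
# All link counts and bandwidth numbers are hypothetical -- they just
# illustrate the "roundabout path" idea from the comment above.

DIRECT_LINKS_PER_CHIPLET = 2   # speculated chiplet -> IO die links
PEER_LINKS = 2                 # speculated chiplet <-> chiplet links
LINK_BW_GBS = 50               # made-up per-link bandwidth (GB/s)

def route(demand_gbs: float, sibling_free_links: int) -> dict:
    """Split a chiplet's IO demand between its own links and, if those
    saturate, a roundabout path through the sibling chiplet."""
    own_capacity = DIRECT_LINKS_PER_CHIPLET * LINK_BW_GBS
    direct = min(demand_gbs, own_capacity)
    overflow = demand_gbs - direct
    # The roundabout path is limited by both the peer links and whatever
    # direct links the sibling isn't using itself.
    detour_capacity = min(PEER_LINKS, sibling_free_links) * LINK_BW_GBS
    detour = min(overflow, detour_capacity)
    return {"direct": direct, "via_sibling": detour,
            "starved": overflow - detour}

if __name__ == "__main__":
    # 130 GB/s of demand against 100 GB/s of direct capacity:
    print(route(130, sibling_free_links=1))
    # -> {'direct': 100, 'via_sibling': 30, 'starved': 0}
```

Under these made-up numbers, a chiplet needing 130 GB/s only starves if its sibling's IO links are busy too — which is exactly the bandwidth-safety argument above.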
@@mbraun777 All but a few leading-edge distros will probably struggle on day one, though not as badly as Windows. The leading edge will be the first to use the newest kernels, which will likely get day-one updates (or even pre-launch ones) to support this topology in a way that makes use of the bandwidth it offers while minimizing the latency cost.
Datacentre guy here. The threat from ARM in the datacentre was always the massive core counts running at lower frequencies but with very low power draw. It seems to me that AMD have now made a CPU that fits those same qualities, but it's x86, allowing all your apps and code to run... Could you have a think, Jim, about the power efficiency of low-frequency Rome and compare it in theory to the arguments in the industry for ARM servers? Would love to see your thoughts.
ARM is on the way for the desktop. Things will get going once Apple moves the Mac platform entirely to ARM. The popularity of Windows on ARM will then follow. Five years though, not overnight.
Oh cmon man. Don't make me feel even more behind with my 1700X. I am already lamenting how it's already outdated >_< (Naw, that's a good thing. The world is changing faster than my money.)
@@ArynCrinn IDK. This new stuff is pretty revolutionary. PCIe 4.0 to boot (assuming new MB) At least you can keep your RAM. And maybe use the old CPU and MB for another build (with cheaper RAM). That's what I usually do ¯\_(ツ)_/¯ That would work awesome for a homelab server. Run a headless version of CentOS or Debian on it. :)
That being said, I don't think my workloads need more CPU power than what the 1700X provides, lol. I'll at least do 32 or 64GB of RAM to do crazy stuff.
Go AMD. Go Zen 2. Go Zen 3. The better AMD does, the more competition. The more competition, the better CPUs we all get. AMD deserves to win for a while. BTW, feel free to talk about Zen and nothing but Zen for the next 6 months. Then you can find something else to discuss. :-)
@@baronvonlimbourgh1716 : Hahaha. After being trounced for so many years, me thinks AMD will need to remain in the lead for one year to get used to that. Plus, Intel has all the money in the world to remain at least fairly close to AMD, while AMD didn't (and barely stayed alive). AMD needs to keep the pedal pushed firmly against the metal to get ahead and stay ahead for any sustained period of time. The computer industry and consumers need AMD to stay in the lead for several years, at least.
@@baronvonlimbourgh1716 : Sorry, guess I don't pay enough attention to politics. Oh, wait! It's impossible to pay too little attention to politics! :-)
All this information makes me pretty eager to see what the Zen 2 Threadripper series will be like. I upgraded early to a 1920x when I initially planned to hold out for those.
I can't wait for the 3700X. Buying one for certain when it comes out. Really gonna need it for all the plans I have, and all the footage I'll be recording. Unloading a camera with 80GB of footage. Pretty easy to compress with 12 cores.
@@Kazya1988 My plan is to buy a 2600X, as that will already be a substantial upgrade over my i7-4710MQ, but I do plan on buying that 3700X if it comes true.
JoeAceJR haha, I remember the first computer I saw with over 1GB. I was upgrading a local photographer's computer. When it booted and I saw that magical 1GB, it was like heaven opened up and angels were singing.
My first PC had a 30MB hard drive, a 10MHz 286 CPU, and 1MB of RAM. My first real build was a TMC TI5VG+ Super Socket 7 board with a K6-2 350MHz, 64MB of RAM and a Riva TNT 16MB video card. I have been a proponent of AMD's best bang-for-the-buck model ever since. Have not been this excited about the PC industry since the original Athlon processor launched.
Zen 2 is a new era for CPUs, the same kind of transition we had a decade ago going from 1 core to 2, to 4, and finally 4c/8t. Zen 2 is the new Sandy Bridge, with tremendous potential to serve for another decade.
I think we should be a bit more careful with our speculation, because Ryzen 2 was first going to be 30% more efficient than Ryzen 1, but two months later that had already dropped to 25%. Also, after the presentation everybody was disappointed with the demo that was shown, even though the performance gain and efficiency of the demo CPU were absolutely mind-bogglingly good. And that was all because of the unrealistically high expectations the hype had created. On top of that, you should not forget that these types of channels make a 'living' from presenting information, and the more outrageous the information is, the more traffic and mind share they will receive.
Looking at that road map made me more excited for Zen 3 2020 than Zen 2. I have a R5 1600, and a Zen 2 would be a massive upgrade, but not necessarily a needed one. Waiting about 1 more year would net me an even better upgrade, one that I would feel confident I wouldn't want to upgrade from for a good while. But then there's the next thing after that.....PC building can be ridiculous with the feeling of always wanting to upgrade something even when you don't need to.
I'm in a similar boat. 1700X is going strong, but I could really do with more single threaded performance, and my virtualisation shenanigans are begging for more cores. Hopefully AMD come good on their promise of supporting AM4 through to 2020. If I can buy a Zen 3 chip and drop it straight into my existing setup that is going to be amazing.
@@TheBackyardChemist TBH I'm expecting a new socket for DDR5. it seems very unlikely that we'll see an AM4+ that supports both due to the way DDR5 works. It all depends on exactly when DDR5 is consumer ready. Many people have said 2020, but that could mean late 2020, which really means we won't see it until 2021 (Which would be my bet). AMD are on record saying they want to be first to market with DDR5 though, so it's anyone's guess what they'll do.
My 1600 does bottleneck my GTX 1070, since I play games at low settings at 1080p with a 144Hz monitor. I will get an 8-core Zen 2 once it's released. I really wish they could release it in Q1 2019.
Another awesome video Jim, and feel free to talk a lot more about Zen 2 in future videos :) Zen 2 really is looking like it's going to be an awesome CPU in every task, including low-resolution gaming, if all this info turns out to be true.
I honestly didn't really think I could get excited for hardware the way I used to in the glory days, but by God, I feel like a kid waiting for Christmas when it comes to Zen 2. I swear this is like total nerd viagra.
*PC Gamers:* Now that AMD is having their products fabricated at TSMC instead of GlobalFoundries, it's lights out for Nvidia and Intel. All AMD needs right now is for software (games) to catch up to hardware (GPUs, CPUs). Chances are, AMD wants developers to implement ray tracing themselves instead of AMD creating their own proprietary library à la Nvidia's RTX GameWorks. It will be more work for developers though, but with the new next-gen consoles coming out next year, developers will be able to put in the work. Just look at the news about AMD being able to do DLSS via DirectML. By the way, people should look at AMD's 8-bit compute performance against Nvidia's.
AMD has also been fostering ray tracing support through various tool sets for quite some time in the game dev/commercial world, where the ecosystem may be maturing quite quickly behind the scenes. It's a presentation from almost a year ago, but still valuable for insight into what may be coming up: gpuopen.com/gdc-2018-presentation-real-time-ray-tracing-techniques-integration-existing-renderers/ At the time they had the RX 580 doing over 200 MEGArays (not gigarays like current NVIDIA products), and *I* am quite interested to see where they land with the upcoming GPUs and whether this next generation will even support real-time options. I have seen zero information on how the VII performs by these measurements.
@@baronvonlimbourgh1716 Yup, this time AMD is letting Nvidia take the backlash while they just wait for the technology to be adopted widely. We all know that AMD cards have always been exceptionally good at compute tasks; I bet once Nvidia has pushed developers to adopt RTX, they will compete pretty equally.
Baron von Limbourgh yeah, I mean what Nvidia has is not even real ray tracing. They are just ray tracing some parts of the scene while 98% is still rasterization. So yes, it's 100% a marketing gimmick that backfired on them.
@@yottaXT I think it was developed for the growing rendering market. I think they saw an opportunity there to sell a lot of cards, or in any other market where these cards do well. And when they have the design anyway, why not push it as a gaming feature? I bet that's what happened. It was never primarily a gaming feature.
Things have not been this exciting since the early 2000s, when AMD released the first 1GHz processor. Very excited for AMD and what they can do to eat away at Intel's pie. As for first computers? Well, mine came standard with 256KB of memory and I did buy the 256KB upgrade. As for PCs, my first graphics card from '95 had maybe 1MB? I still have it but don't even know what card it is. Thanks for the work you do on these videos.
I had one of the fabled AXIA 1GHz Athlons in the early 2000s which overclocked to 1.4GHz just by increasing the multiplier. I remember sitting at my PC one evening and hearing a strange noise, the PC freezing and smelling something burning: the clip on my CPU fan had failed and the CPU fried instantly. I cleaned the heatsink paste off and could see a discoloured section all around the die. I almost cried because the AXIA processors had sold out and I couldn't afford a 'real' 1.4GHz chip. (I must still have it somewhere because I could never bring myself to throw it out...)
I still have both of my Socket A-based Athlon XP CPUs and motherboards. I've been wanting to do a retro gaming computer with the better of the two CPU/motherboard combos, but I don't have a spare case to put them in, sadly.
@@adriankelly_edinburgh Intel fanboys at the time would also try to rub it in your face that they had thermal protection that would prevent them from having that same problem. I know several motherboards added thermal protection for the Athlon/Athlon XP, but I don't think AMD added true thermal protection until the Athlon 64, which is also when they added the heat spreader. Athlon CPUs were much cheaper, so if you REALLY had to replace one you could for much less than an Intel, but damn... I'm glad I never went through any of that with my Athlon XP 2000+.
I found the old chip at the bottom of a box of old bits and pieces. I can still discern the pencil lines on it where I'd redrawn the broken connections that were meant to limit the multiplier. Happy days :-)
Don't feel bad about talking about Zen 2 so much... You're right, it's the biggest thing happening in tech right now, and I can't get enough of it. I've loved every minute of all of your videos since I first found out about your channel in November. Keep it up!
Indeed trastewere, that's exactly what I'm going to do, even though I've only had the 2700X for a month, haha. Swore I'd never buy another Intel chip again. It was hard but I stuck to my guns, so go to hell Intel.
Great video man! Thanks for being on top of this!! Really looking forward to seeing what this new series can do. I wish we had more info on the launch date, as well as the launch date for the new motherboards.
@AdoredTV: Brad Sams at Thurrott.com leaked that the next Surface Laptop will use an AMD APU in his recently released book "Beneath A Surface." Presumably this would be the Surface Laptop 3. But which processor? In the past, Microsoft has gotten first-dibs access to processors for their Surface lineup (e.g. the Surface 3 in May 2015 had the Intel Atom x7-Z8700 months before anyone else; I believe the Surface Pro 4 and Surface Book had first dibs on the Skylake Core mobile CPUs as well in fall 2015). So it would not be unprecedented for Microsoft to call first dibs on the initial supply of Zen 2 "Renoir" mobile chips for the Surface Laptop 3 and, since the Pro release is generally in lock-step cadence with the Laptop, the Surface Pro 7. Add the fact that Panos Panay, who originally led the Surface team, is now the chief product officer at the head of the Microsoft Devices group, which includes Xbox and its already close-knit connection with AMD for the Xbox systems, and the path becomes clear. AMD, through their long partnership of Xbox co-development, can now reach out to Panos Panay, who oversees both Xbox and Surface, and bring their technology to Surface as well in an industry-first-style move.
If the crossbar communication between cores becomes a reality, I'm all in! This processor design will be a work of art! I'll be buying this Rembrandt as soon as it becomes available!
I had a 1MB VRAM graphics card in my IBM PS/2 Model 55SX back around then... it probably cost about the same as these damned nVidia RTX cards do these days!!
I for one welcome any new videos about Zen 2, as long as there's some new information and/or analysis (which you've delivered on every single time so far). I'm really excited about this, and it brings back memories of the 90s / early 2000s when there were constant massive improvements in performance, as opposed to the near-stagnation of the last decade until Ryzen CPUs became available. And I also love seeing the underdog getting ready to get on top of the game!
If AMD actually pulls off chiplet-chiplet communication along with the I/O die, that will be the single most impressive part of the whole design to me. Doing that effectively creates a high-speed, high-bandwidth asynchronous system... not exactly something that's easy to pull off in silicon. There's a reason why all modern CPUs (outside of some academic experimental asynchronous processors) use strictly hierarchical structures like a single internal bus. Once you start to move away from this architecture, it just gets really hard to make all of the bits arrive where you want them to be in time. So this is why I'll be very skeptical about the chiplet-chiplet links until there's some extraordinary evidence to justify the extraordinary claims.
@@kazedcat I can't find any sources for that; what exactly do you mean? All I see in Zen 1 is two CCXs being on the same SDF and SCF, which isn't really comparable. Or do you mean the IF links between the MCPs? Because my understanding is that those are only used to link the SDFs of the two chiplets for communication with DRAM etc., which is more similar but also not really comparable IMO.
@@squelchedotter Communication between CCXs inside a die is asymmetric compared to communication with a CCX on another die. So in Threadripper, if core 9 needs to communicate with core 1, the path is longer and has to hop across two SDFs and two SCFs, compared to a single hop from core 1 to core 2. (Rough hop model below.)
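For what it's worth, here's a toy Python model of that asymmetry, assuming the known first-gen layout of 4 cores per CCX and 2 CCXs per die; the hop counts are illustrative of path length only, not measured latencies:

```python
# Toy hop model for first-gen Threadripper: 4 cores per CCX, 2 CCXs per
# die, cores numbered 0..15 across two dies. Hop counts only illustrate
# the asymmetry, they are not real latency figures.

CORES_PER_CCX = 4
CCX_PER_DIE = 2
CORES_PER_DIE = CORES_PER_CCX * CCX_PER_DIE  # 8

def fabric_hops(a: int, b: int) -> int:
    """Rough 'hops' between two cores: 0 inside a CCX, 1 across CCXs on
    the same die, more when the request has to cross dies on the fabric."""
    if a // CORES_PER_CCX == b // CORES_PER_CCX:
        return 0          # same CCX: shared L3, no fabric hop
    if a // CORES_PER_DIE == b // CORES_PER_DIE:
        return 1          # same die, other CCX: one fabric hop
    return 2              # other die: local fabric + remote fabric

print(fabric_hops(0, 1))  # "core 1" -> "core 2": 0 (same CCX)
print(fabric_hops(0, 8))  # "core 1" -> "core 9": 2 (cross-die)
```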
I usually wait until the end to thumbs-up a video, but once you started on some stuff I hadn't already heard / speculated myself, I had to do it... couldn't have been past 8:00 in. Always enjoy the content!
I love your content. It's hard waiting for you to upload. Keep up the great work, and I really appreciate the information and honesty you bring to us.
@@fastcx That's where I got spoiled: we got a P120 with 16MB and a Trident 2MB PCI video card. It ran Descent really well, but when Quake 2 came out I was gaming at less than 15FPS at 320x240 resolution.
During my current heartbreak there's little that helps me more than a nice, long AdoredTV video along with some unhealthy food I will regret eating but will do anyway because I need to numb myself to the pain
The hype is real... probably surreal, and like you've put it, the most talked-about topic since I've been into computers, and that's close to 15 years... Great job Jim... The commitment you have is admirable... I admit that I'm not a patron, but as soon as I get my student life together and can afford to become one I will, because your work and the work people like you do should be supported in one way or another! Regards from Slovenia ^^
And what if, for the Zen 2 Renoir APUs, they have GlobalFoundries produce IO chiplets with a small graphics subsystem integrated in there..? 🤔 I bet nobody would have seen that exact config coming, but it might make some sense. The GPU as its own chiplet might get pushed back into future products so as not to take on too much innovation risk all at once?
My first x86 CPU was the AMD 486 100; I've been an AMD man ever since. I've had good years and lean years, but I am blown away at just how brilliant AMD has become. From worst to first in an amazing way. I still own the AMD Thunderbird (the first 1GHz CPU) and I still have a working AMD Quadfather FX-74 rig. Now my main rig is a Threadripper. It's good to be an AMD man.
Wrong; it's actually Zen 1, then Zen+, then Zen 2. Better question: is it worth talking about Zen+? Reasons for doing so: a small boost in overall performance, and Threadripper's max core count doubled. Reasons for not doing so: no improvements to Epyc, for reasons explained in the vid.
@@InvidiousIgnoramus I'm sincerely sorry about that; I legit thought people forgot that Zen+ was a thing. Whatever; guess I'll go down as someone who doesn't have a sense of humour.
What a great channel! Keep up the great work. I am literally waiting for Ryzen 3 to hit the market to upgrade my build. My last AMD was an Athlon 64, from which I then went to the 2600K (which, let's face it, was fantastic). Been on it for the last 7 years and it's seen me through 2 PSUs and 3 GPU upgrades, but it's now high time for an upgrade at the most fundamental level. I've been biding my time for this very moment. Thankfully, it looks like I don't have much longer to wait!
"threadripper will be strictly workstation" i will still buy threadripper non-wx as an everything cpu. im fairly sure AMD will also still advertize their non-wx threadrippers for everything, including gaming. its a wonder if they even make workstation specific threadrippers this time.
I don't think WX Threadripper will even exist. The reason they were called WX was that two of the dies didn't have direct memory access; the IO die in 3rd-gen TR will take care of that. Though it's worth pointing out that Level1Techs found that the performance drop in WX chips was because of Windows, not due to 2 dies (16 cores) lacking direct memory access.
@@stayfrost04 Me neither. With the I/O die they can function the same no matter how many cores/chiplets they put in the package. Even a 64-core I don't see getting WX branding or purposing, just a power requirement bump if the clocks go higher than the 2990WX's.
Saw 8350s hit 5.0 ON AIR COOLING..... lol. Even then AMD was working their way toward what is coming out this year; think on the CCX design a minute. It just wasn't there yet, and people bashed Bulldozer for years. Yet I had an 8350 for a couple of years, and it was just fine...
Super-Jim to the rescue... Just when you almost feared that Jim went back on vacation (to make up for the interrupted one), when all the other tech news is uninspiring and lame and you are threatened to succumb to boredom — in short, when all hope in the world of tech fades — in jumps Jim at the last second and wrests you from the claws of lethargy. Thank you Jim for your relentless effort to feed our brains with useful information!
Jim, I really don't understand why you keep clinging to the idea that all Ryzen 3000 CPUs will consist of two chiplets. It makes absolutely no economic or technical sense.

I think you have demonstrated well enough that those chiplets will not have significant yield issues. Neither should TSMC be a problem, seeing as larger ARM chips have been produced on TSMC 7nm for months. Further, if a chiplet has a defect, there is a good chance the part can still be salvaged. The majority, however, should still be fully functional 8c chips. Why would you take two fully functional chiplets, disable half the cores and sell them for $200 as a Ryzen 5, when one of those chiplets can do the exact same thing? It doesn't make any sense.

Neither does it make sense on the technical level. There will obviously be an IF connection on the substrate between the chiplets and the I/O chip. Substrates aren't exactly the most complex nor expensive part of a CPU, so making every substrate capable of holding two chiplets makes sense from a production standpoint. There doesn't, however, seem to be any reason to actually have to place the second chiplet on there. The only reason I can think of would be limited bandwidth and latency between the I/O chip and the chiplet when it comes to RAM access. AMD, however, seems to have demoed an 8c Ryzen 3000 ES with one chiplet against a 9900K. For me that says everything on the technical level as well.

Then further, the main target for a hexa- or octa-core will be gamers. Games are latency sensitive, and a lot of those games are not able to properly recognise differences between threads and jump loads between threads all the time. Introducing more latency by splitting the CPU design into two chiplets and having games throw calculations between both chiplets makes absolutely no sense to me at all.

I think AMD will release a bunch of hexa- and octa-cores as Ryzen 3/5 parts with one chiplet. 4c might also be Ryzen 3, but it could also be branded as low-end Athlon. Then Ryzen 7/9 will be two chiplets and 12/16c SKUs. That makes sense on both the economic and the technical side.
It all depends on clock-speed yields: can they bin enough chips with all cores clocking high? Those chips will most likely also be the most power efficient, and they will need all-core high clockers in the 16-core Ryzen and in Threadripper, as well as the most efficient ones for their Epycs. It is a good solution if they expect not to be able to produce enough high-end bins to fulfil demand, or if they simply do not want to run that risk — especially if it only makes a marginal performance difference between 1 and 2 chiplets. Plus it doubles the cache when using 2 chiplets. There might even be a double- and a single-chiplet version. It could be the difference between the X and non-X versions, or a single chiplet could be branded as a Black Edition if gaming gains turn out to be substantial in a single-chiplet setup. In the end, more production means better binning across the board. So doing double chiplets where possible is a good strategy.
@@baronvonlimbourgh1716 I have to agree: the 4 best of 8 cores plus the 4 best of another 8 should yield significantly better results than the average single 8-core chiplet, in my estimation. Keep the best 8-core chiplets for the high end of each segment.
@@mattroy3154 Indeed, 2 chiplets could also enable higher-clocked SKUs than would be possible with a single 8-core chiplet. If chiplets with all 8 cores at 4.5 base, for example, are relatively rare, but ones with 4, 5 or 6 cores at that clock are very common, they could not release a single-chiplet 4.5-base SKU, but with 2 chiplets that does become possible (rough numbers sketched below). A lot of variables go into making that kind of evaluation: yields, bins, performance impact, economics, production capacity and what performance brackets they want to reach with their products. And probably a lot more still that we have no idea of.
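Rough back-of-envelope maths on why "best 4 of 8" bins would be so much more common than "all 8 fast" dies — the 70% per-core pass rate is a made-up number, purely for illustration:

```python
# Back-of-envelope binning maths: if each core independently hits the
# target clock with probability p (a made-up number), how often does a
# die qualify as an "all 8 cores fast" part vs a "best 4 of 8" part?
from math import comb

p = 0.7  # hypothetical chance a single core bins at the target clock

all_eight = p ** 8
at_least_four = sum(comb(8, k) * p**k * (1 - p)**(8 - k)
                    for k in range(4, 9))

print(f"all 8 cores pass:     {all_eight:.1%}")      # ~5.8%
print(f"at least 4 of 8 pass: {at_least_four:.1%}")  # ~94.2%
```

With those assumed numbers, roughly 94% of dies could serve as one fast-4-core half of a dual-chiplet part, while under 6% would qualify as a fully fast single chiplet — which is the binning argument in a nutshell.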
Wholeheartedly agree. By my interpretation, Zen 2 is obviously yielding really well, meaning there are plenty of fully functional 8c dies to cover all use-case scenarios. The only exception being the possibility of a dummy die being added to aid the application of the IHS.
@@baronvonlimbourgh1716 A nice side effect of this binning process could also be improved thermal characteristics: it must be easier to cool the CPU if the heat is spread across two chiplets rather than concentrated in one (with potential gains here for overclockers).
Given that they cannot completely remove the latency between chiplets even with IF decoupled from memory speeds, I hope that you are wrong about the 8 cores being done via 2 chiplets with 4 cores each. 1 chiplet with 8 cores is vastly superior.
lmao, I remember a dumpster Intel i486 having 8MB of RAM soldered to the motherboard, with 2x2MB of 70-pin SIMM RAM extra. Missing the old days of optimized software though (maybe make a vid on how unoptimized stuff is nowadays?)
I think two d1ckheads see a way to try and make a buck off the backs of others. That is what I think. I think these two d1ckheads should kick rocks. Maybe a piano will fall on both of them.
AMD is fighting the lawsuit instead of settling; that only means they have a big shot at winning. The lawsuit doesn't have anything to do with Ryzen. I don't see the point of your comment.
Well, all those lawsuits are always the same: people complaining it was not 8 cores. Well, the thing is, no matter what they say and show, the fact is that it is 8 cores. The 8 cores are there and that can't be denied. Now, if they are mad that they bought 8 AMD cores and thought they would perform better than 8 Intel cores, then that is another story. More cores, more MHz and even a smaller architecture do not always mean it will perform much better. The only way to really know is to watch reviewers and get the right information from them before purchasing a chip. I mean, c'mon, we are not in the 90s and early 2000s anymore; we have a ton of reviewers on every social media platform. So if people keep making mistakes like thinking the chip they bought will be better just because it has more cores or more MHz, then they need to stop living under a rock, and the fault is on them.
*Just what I told you last time: Zen 2 is capable of 5+GHz. Don't forget that the engineering samples were probably 3700s (NOT 3700Xs) downclocked to Zen 1 speeds for direct comparison.* I expect the 3350X, 3650X and 3750X to launch in limited quantity in May as cherry-picked silicon, for a $50 price increase over their normal counterparts coming in June.
My first PERSONAL graphics adapter had 16MB and was a RIVA TNT (1) model from Diamond; I was 7 or so, and my uncle gave me a PC to play modem-connected Duke Nukem 3D, Shadow Warrior, Hexen, Descent, and so many other classics with him over my summer break and after school! It was a Pentium 90MHz PC, which at the time was badass; he was my favourite uncle, to say the least! AUTHOR NOTE: RIVA (Realtime Interactive Video Accelerator) was Nvidia's graphics branding BEFORE GeForce existed, and years before GTX! I later upgraded to a 32 or 64MB RIVA TNT2 before eventually getting a Voodoo3 for said PC.
I like reading the guys commenting on how you could be either right or wrong (over many different things), but what personally keeps you in very high regard for me is that you nail things. In my mind I go, "hmm, did he just overlook the difference between the Raven Ridge and Summit Ridge design?"... and you just spit it out a minute later! I think hype is dangerous, but what you're giving us is really interesting insight, cutting through the grass to find gems of information (Reddit and general sources being the grass). Thanks a lot for your content!
Awesome video Jim, love the way you put all the pieces together. Ryzen 3000 for the desktop based on Zen 2 will be absolutely astonishing. Can't wait! Also, don't stress too hard about making videos on the same topic. Keep it going, I'm sure everyone is just as curious as you are.
Suddenly realised: you can look at the IO die as a resurrection of the northbridge chip. A nice development benefit: AMD can work on optimising the architecture of both categories of chips simultaneously.