Context: DDR5 prices were sky high with the release of the 12900K, and so were DDR5 board prices. Compared to DDR4 prices and the price of the 5800X3D at the time, it was truly good value for its performance.
@@jonivilander5758 Not worth it I think... I upgraded to the 5800X3D and got some big improvements in some of my builder and management games, but little to none in others.
@@Gazer75 It's important to know which games he is playing, because in MMOs, sims and a few other online games the cache makes a big difference, especially in the lows. I've experienced it myself! Also, my friend replaced his 5950X with a 5800X3D because we were doing 50v50 wars, and he couldn't believe how much better the 5800X3D was in that.
£314? Bro the 2nd hand market is being flooded with 7800x3D’s because of zen 5 being around the corner. I can pick one up for 260 euro here in the Netherlands that’s like £220
Availability of lower CL 6000 and 6400 memory also went from being premium to the standard cheap option. So most people building today will have decent lower latency memory that'll just work through XMP with zero tweaking necessary. This will give a couple percent more performance in favor of a DDR5 config.
Steve, the "lines" separating the graphs per CPU tier make them far better to handle than without. Same for those % numbers. Much, much easier. Good idea 🍻
@@MrDeathknight1 The video is about DDR4 vs DDR5 performance. The section about the 5800X3D is obviously just discussing why this testing is being done...
@BananaDynastyX Fair, I just thought it was about aging vs cost, and then it was mostly about RAM speeds, DDR4 vs DDR5. Are the CAS latencies coming down on DDR5 now? Are we even in the 20s yet? Because the bandwidths aren't making a huge difference, I don't think.
It's like an old 80s slasher flick poster where the killer is staring you down before he goes in for the kill. Steve channeled that Freddy Krueger energy hard for that thumbnail.
same, bought and built my pc with the latest ddr4 and am4 availability while ddr5 and am5 were barely emerging on the market, can't be happy enough with the purchases
New revisions of DDR release every 6-7 years, and they're adopted a year or two after official release. If you are the type of guy that waits for new tech to come down in price, I would say you will wait until 2030 at the very least, so holding off on upgrading to DDR5 until somewhere around then isn't unreasonable.
@@brockysthoughts1662 Oh, I'm sorry. I didn't know you could see the future. You understand the 2 at the end of CAMM2 would imply that CAMM came first, right? CAMM has been around for a while now, and we have to wait and see what happens to know if it will replace DIMM. We've had plenty of different RAM form factors after DIMM became a thing, and plenty of form factors within DIMM itself too: SO-DIMM, UDIMM, etc.
Would have been nice to see the 5800X3D in the final charts. I know you guys do not recommend AM4, but for anyone that wants a board with two x8 PCIe slots, the price of an AM5 board is literally double, so the $220 increase in platform cost does make AM4 a consideration, particularly with the 5700X3D at just $200. I mean, that's the CPU and board for the cost of just the AM5 board.
@@paulrmurrayful I get that, but it is not my concern as I already have a GPU. I just want to be able to upgrade from my X99 setup and still be able to have a GPU, a network card and my SSDs.
Wrong and outdated; a decent AM5 motherboard can be found for as low as $150, not $220. You 5800X3D fanboys really love inflating numbers for the new gen; reminds me of 10-series card owners. We get it was a good investment, but times change.
The issue is that you're buying into a dead platform. I tell anyone looking for a new system that if they want the best gaming performance for 2-3 years, go for AM5/7800X3D. You get the benefit of potentially doing a CPU swap down the line (if AMD does similar support to the AM4 line), and it's easier to swap a GPU out than a whole motherboard/CPU. A solid foundation makes for an easier time keeping a system updated for years imo. That, and you're looking at PCIe Gen 5, which can be split easily because of the insane bandwidth it runs at. Highly doubt anyone will actually use up a Gen 4 lane, much less a Gen 5 lane. I mean hell, we're just now getting to the point where Gen 3 is a bottleneck.
@@n3xus236 Yeah, I recently updated my son's system from a R5 3600 to a 5700x3D, which made a lot of sense since he was already on AM4. When I built new for my stepson, it was 7800x3D all the way, because it didn't make sense to go back a gen.
tbh it's still one of the best value gaming cpus out there. u can grab a 5800x3d around 300 bucks these days and it's simply unmatched in gaming at that price point. especially since older mobos and ddr4 ram is dirt cheap.
Ehh, i dont know why everyone has this mentality "it's not the biggest number, it is not good anymore". Most people are still gaming at 1080p/1440p with 120hz/144hz panels, a 5800x3D will still suffice for such a scenario.
I've seen people suggest that the 3D vcache makes memory bandwidth less important to those parts. It might be interesting to test something like the 7800x3d with slower memory to see if it's less affected than expected.
I've got a similar set-up, just a 6900XT at the mo. Also on Linux (with dual boot into Win 10, which hasn't been used in months now!). Didn't see the point in going to AM5 at the time (no AM5 X3D CPUs back then, plus expensive RAM etc). So just got the 5800X3D instead. Next buy will be a new GPU, but I'm in no rush; I'm mostly strategy, base/city building, and RPGs. Will likely see what comes out later in the year from AMD, if anything.
I've got a 5800X (2020), no 3D, paired with an RX 6900 XT (2022, got it for €500), running Debian testing with wine-staging, using the Heroic (as an Epic and GOG replacement) and Steam launchers, and some games directly on wine-staging, no Lutris or whatever. Will still be some years before I'm building a new rig. This will last me for the next 2 to 5 years depending how gaming/the future evolves. No need for the latest and greatest yet. By that time I will look around for the best value for the money, like most sensible people will do.
I'm glad you tested with DDR4-3600 instead of 4000. The former is more representative of what people actually use. Locked Alder Lake CPUs don't even have the option to use memory speeds much higher than 3200 in Gear 1 mode.
Yes, no point in testing DDR4-4000 when it's usually even more expensive than regular DDR5-6400. Even 3600 CL14 is unrealistic, as most people will buy 3600 CL18 or 3200 CL16.
I understand your point, but he is testing K versions, so the results presented even with DDR4 would surpass what one would get with a locked version. And quite bluntly, if there are people running a 13600K/14600K or 13900K/14900K with DDR4-3600, I do not know what to say except that they should not have purchased those CPUs and used DDR4. The point of a K version is to tweak it, and the 13600/14600, 13700/14700 and 13900/14900 can run 4000+ in 1:1 ratios. I run my 14700K at 4533 16-16-16-36, and that allows it to match DDR5-7600+ in many games. In fact in some titles, like Arma, the DDR4 setup beats my other DDR5 system, a 14900K running DDR5 at 8400. Let me repeat: 4533 1:1 beats DDR5-8200 in some games, and probably will beat 8600 or 8800 also, since the increase in performance above 8200 is very limited, to the point I prefer to run 8200 instead of 8600 in order to limit RAM temperatures.
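The latency argument above can be sanity-checked with a quick first-word latency estimate. The DDR4 timings are the ones stated in the comment; the DDR5 CAS latency is an assumption for illustration, since the comment doesn't give the 8200 kit's timings (CL40 is a plausible guess for an XMP kit at that speed):

```python
# First-word latency in nanoseconds: CAS cycles / memory clock.
# The effective transfer rate (MT/s) is double the clock, hence the 2000.
def first_word_latency_ns(mt_per_s: int, cas: int) -> float:
    return cas * 2000 / mt_per_s

# Tuned DDR4 from the comment: 4533 MT/s at CL16.
ddr4 = first_word_latency_ns(4533, 16)

# DDR5-8200, assuming CL40 (hypothetical; actual timings not stated).
ddr5 = first_word_latency_ns(8200, 40)

print(f"DDR4-4533 CL16: {ddr4:.2f} ns")  # ~7.06 ns
print(f"DDR5-8200 CL40: {ddr5:.2f} ns")  # ~9.76 ns
```

Lower first-word latency on the tuned DDR4 kit is one plausible reason it can win in latency-sensitive titles like Arma, even with far less raw bandwidth.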
My 5800X3D was on a flash sale. So it cost me 290 CAD. I replaced a 3600X. 12900k was never going to be able to compete on price with a drop in solution that good.
X370 Taichi with a 5700X3D here. 3rd CPU in the same board. Probably end up using the same board for a decade; as someone who's been building computers since 2004, it's pretty mind blowing to me. Used to upgrade virtually everything every 2.5 years. Even case design has come so far that it'd be hard to justify upgrading a good case made in the last 8 years. The barrier to entry for a new builder has become more expensive over time, but 10 years ago nothing you bought was ever worth keeping. The recurring costs have actually gone down. A $300 hard drive in 2005 was functionally worthless by 2008, for instance. But if you bought a $300 NVMe drive in 2021, it wouldn't be a paperweight today.
This is my rig except on a 5900X. CL14 @ 3600MHz is so good, I don't think I'll move up until there is an equivalent in the current or upcoming platforms.
What's the point of running B-die on a 5800X3D? Any shitty DJR could have done the job. Also, if you're running your 3600MT/s CL14 B-die at stock, you might as well run at JEDEC spec, then sell your CPU and replace it with an X3D chip.
X3D's biggest selling point isn't that it's the fastest - honestly, the average % difference between all of the CPUs mentioned here mean nothing at the proper resolution, because 1080p on a 4090 is silly. No, the real selling point of X3D is the 1% lows. Smoother gameplay > highest fps any day. And that is where even the 5800X3D sometimes comes out ahead of the 14900k, while the 7800X3D is on another planet.
@@hivemind8817 When compared to a faster CPU using faster RAM, yes. Compare that to the DDR4 results and it's better. Now compare the 7800X3D to Intel's equivalent: the 7800X3D is always superior in 1% lows. Cope'n'seethe
The real value of the 5800X3D was (and sometimes still is) the upgrade from older Ryzen CPUs. It really pushes your PC to a whole different level when you drop a 5800X3D into your 2700X system. For a relatively small amount of money you are able to breathe enough CPU power into your system to last a fair bit longer.
I'm planning to do just that very soon to give my current desktop a few more years of use. Updating from a 3600. And since most of my games are CPU intensive sims (Stellaris, Oxygen Not Included, Rimworld, MOBAs,...) I don't even need a new GPU to max it.
Agreed 100%. 5700x3d has even better value. I find it comical AMD actually made a part for AM4 users right before their AM5 launch that literally has 78% of a 7800x3d's performance according to the 13 game average at 1:54 for less than half of the cost it would be to get a new motherboard, new ram, and the new cpu.
The upcoming 5900XT is gonna be an epic productivity drop-in upgrade for anybody still on AM4... When it drops into $200 territory, my old 2800X productivity rig can be productive again lol.
I just did a full renewal of my PC from late-2019, from 3700X and 2060 to 5700X3D and 4070 Super, doubled the RAM to 32GB, all on the same motherboard. I should be set for the next while, maybe even till AM6.
@@tonicipriani Yeah, I've got a 3900X with a 3060 Ti. 5800X3D or 5900X, not sure which. I've got 32GB of RAM, but it's 2400MHz, so maybe I should upgrade that as well.
Great information! As a long term viewer, I like this multi-pronged format!! What is great about keeping the older CPUs / hardware in the testing is the buying advice, in the sense that it helps you compare your current / older system. Even a CPU or two from those series. Like I'm on 10th gen and would love to know how it compares, although I'm sure if I spent enough time digging, all the information is there in your archive of videos. Excellent content. I really appreciate all the hard work you both do!!!
Another great video as always. As soon as I saw the title for this video, it made me wonder if you would consider adding "Escape from Tarkov" to the battery of tests. There seems to be either a lot of conflicting opinions or misinformation, but for a few years now lots of people say it doesn't really matter too much about your GPU or CPU, but rather your RAM. A large portion of the player base can't/won't play one map in particular, despite many having decent, if not high end, PCs. Between this video and your other recent video covering capacity, it's made me realise that there is very little coverage on RAM, and I am personally really appreciative of this content.
Yes, the 12900K + DDR5 combo today is faster in modern titles than the 5800X3D + DDR4 combo. But you also have to remember that back when the 12900K was new, to get its full potential you needed a decent Z690 DDR5 MB as well as DDR5 DIMMs to go along with it, and DDR5 back then was expensive. The 5800X3D on the other hand ran perfectly fine on cheap potato B550 MBs, and 3600MHz XMP D4 was much lower in price than even cheap potato 4800MHz JEDEC D5. And for the 12900K + DDR5 to beat the 5800X3D + DDR4 by a decent margin, it needed 6000MHz-and-above D5, which was EXPENSIVE AF.
@cameronbosch1213 I'm comparing in the context of how market conditions were when both the 5800x3d and 12900k were brand new. Of course we know DDR5 price nosedived 1 year after Ryzen 7000 launch but by then it was already half a year since 13900k was launched
@@adlibconstitution1609 The video's title is "Has 5800X3D Aged Worse Than 12900K". Therefore we are comparing how they were when they were both brand new to buy, and how they have aged till today in terms of performance vs money spent back then, to enjoy from then until now.
Damn, some of these differences are mind blowing tbh. I remember when DDR5 launched 4 years ago (yes, it was 4 years ago now!) and in gaming benchmarks it was only like 2-5% faster than DDR4 at best if you were lucky (in some games it was basically the same!), and now to see 20% and even 30% gains is crazy. I know DDR5 RAM has got a lot faster since it launched. I mean, what was the best at launch, 5600 at CL38 or something like that? I wonder how much of this improvement is just the DDR5 RAM being faster vs games actually using more memory bandwidth?
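On the bandwidth half of that question, the theoretical peak for a standard dual-channel desktop setup scales linearly with transfer rate. A rough sketch (real sustained bandwidth is lower than these peaks):

```python
# Theoretical peak bandwidth: transfers/s * 8 bytes per 64-bit channel
# * number of channels. DDR5 splits each channel into two 32-bit
# subchannels, but the total width per DIMM is still 64-bit, so the
# same arithmetic applies.
def peak_bandwidth_gbs(mt_per_s: int, channels: int = 2) -> float:
    return mt_per_s * 8 * channels / 1000  # GB/s

print(peak_bandwidth_gbs(3600))  # DDR4-3600: 57.6 GB/s
print(peak_bandwidth_gbs(6400))  # DDR5-6400: 102.4 GB/s
```

So a typical DDR5-6400 config carries nearly double the peak bandwidth of DDR4-3600, which is part of why the gap has grown as games lean harder on memory.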
Isn't the idea behind 3D V-Cache that you get all the performance uplift of faster memory, and that's why the 3D V-Cache CPUs have terrible DRAM frequency scaling, because they get all the performance even with slower memory?
That's exactly it. As CPUs increase their cache, faster memory won't give better results. It's also what makes 3D V-Cache such a good value point. Get the CPU, get cheap/midrange RAM, and you are set. You also don't need an expensive MB, and the power draw is low on those X3D CPUs.
Yes and no. X3D processors still scale with memory performance; they are just less sensitive to it, but they are bandwidth capped due to the design of Ryzen's memory controller system. This means the additional bandwidth of higher frequency DDR5 (6400+) can't be effectively utilized, whereas 13th/14th gen can scale with 7000+ DDR5.
No, the V-Cache allows more data to be stored closer to the CPU cores for quicker access, avoiding trips to system memory, which has higher latency. You would have to use very fast memory to notice a performance uplift, because the CPU doesn't have to access it as much.
@@FTWGame0N Because that's not what op is saying. He's saying the cache gives you a performance uplift because of a bandwidth increase similar to using faster memory which isn't correct.
I mean, it also shows how well the 5800X3D and 5700X3D are aging as well. The fact that I could put a 5700X3D in my B450 motherboard for 200 USD and get a 60% increase from just the CPU is crazy. Basically a new PC, and in 4-6 years I can platform jump to AM6 when that comes out, and it's fantastic all around. Love the AMD/Intel competition.
@@jimdob6528 Sure! If someone is on the AM4 platform and just wants to get the last boost out of it, it's an amazing deal and value. For people buying new gaming rigs, I can't see a valid reason to go with anything other than the 7800X3D.
This puts a spotlight on another thing: how minuscule the 14th gen uplift actually is compared to 12th gen. If we put the same DDR5 speeds on both the 14900K and 12900K, the difference shown here would only shrink further.
I‘m really happy I went with the 7800X3D about a year ago. It’s an absolute monster, especially paired with a nice kit of 6000 MHz CL30 memory. The craziest thing is, that it runs at 40-70 W while gaming and at a max of 90 W in an all-core workload. Also gives you a nice upgrade path to AMD‘s new X3D chips down the line.
Unless you're running a stupid high end GPU at stupid low resolution you'll be well and truly set until Zen 6 X3D is being pushed down in price by Zen 7.
Liked this video. It's much, much easier to understand when you compare to older reviews and see the differences, rather than just showing how it performs right now with a top-of-the-line GPU. Really like these comparisons; they give us great information for newer builds. Keep it up please, greatly appreciated!
I just purchased a 5700X3D, upgrading from a 3900X, and I feel it was worthwhile. Keeping my old system running for a reasonable price is more important than having the absolute fastest thing possible. I only have a 3070, after all. Honestly, I think the 1440p display was a mistake; staying at 1080p would have yielded much better performance.
@@obeliskt1024 He/she did get the 5700X3D. Unless you are asking why he/she didn't get the 5800X3D? If so, it's 100-120 dollars more expensive for less than a 5% performance increase over the 5700X3D.
A great follow up to this would be how much memory timings, such as CAS latency (CL) and MT/s on DDR5, affect the chips. I know the answer is "not a lot" and "it depends" on the CPU, but given the premium some of the extremely high end kits cost, and also the flood of kits in the 6000-6400 MT/s space, I'd love to see the uplift from a CL36 6000MT/s kit to a CL30 kit to a CL32 6400MT/s kit, for example (particularly since the CL30 and CL32 kits mentioned work out to the same absolute latency of 10 ns).
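The "same absolute latency" aside can be checked with the standard arithmetic: CAS cycles divided by the memory clock, where the clock is half the MT/s rating:

```python
# Absolute CAS latency in nanoseconds for a given kit.
def cas_latency_ns(mt_per_s: int, cas: int) -> float:
    return cas * 2000 / mt_per_s

for mt, cl in [(6000, 36), (6000, 30), (6400, 32)]:
    print(f"DDR5-{mt} CL{cl}: {cas_latency_ns(mt, cl):.1f} ns")
```

DDR5-6000 CL30 and DDR5-6400 CL32 both land at exactly 10.0 ns, so any gap between those two kits would come from bandwidth and secondary timings rather than CAS latency, while the CL36 kit sits at 12.0 ns.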
LGA1700 is unique among current platforms in that we can use it to make direct DDR4 / DDR5 comparisons. Makes one wonder what we could achieve IF it were possible to modify a Zen3 CPU and AM4 board to run DDR5 memory.
They did make the mobile 6000-series Zen 3+ CPUs that support DDR5 and LPDDR5, so it does technically exist (and also some Zen 2 mobile ones that support LPDDR5). Not sure if there is an easy way to compare them clock for clock to the Zen 3 DDR4 5000-series counterparts though. I guess if they ever end up with a lot of leftover dies, maybe they could make an ultra-budget AM5 APU based on them or something.
@@olnnn Sadly not as easy to compare, considering that Zen 3+ is exclusive to APUs, and it was quite clear that 16 MB of L3 cache reduced performance significantly compared to regular Zen 3's 32 MB. But it was enough to make them quite competitive with mobile Alder Lake CPUs.
@@ismaelsoto9507 Yeah the cut in half cache really hurts the AMD APUs in gaming so at least the older ones basically ended up being about roughly equivalent to the previous gen non-APU cpu with the same core count once paired with a dGPU in performance.
@@kesamek8537 People have been saying what you just said for the three decades I've been in the PC space. Every time games feature more realistic physics, NPC logic and environment interactivity people expect it to come at no computational cost, and hence it must be "badly optimized".
@@THU31 I didn't say people were wrong to claim that games which only offer what we already had, but run slower, are badly optimized. Because those obviously are. If game A has all the newest stuff in abundance and is still faster than game B, which has yesteryear's stuff, then clearly there is something unusually wrong with game B. And yes, Starfield is one of those. That said: Starfield has really shitty textures wrapped over pretty high polygon objects, which makes it look meh compared to what it actually is. And that's just plain stupid. My main sour grape with it, though, is that it is utterly brutal on the CPU for no apparent reason.
amazing comparison! I'd really like to see some comparisons with ddr4 2133 at some point. I'm running 4 sticks of ddr4 on my 13900k and I'm wondering how much performance is lost. A lot of my colleagues have the same issue, running at ddr5 4800 and ddr4 2133 due to high memory requirements. I could upgrade to 2*48gb ddr5 to run ddr5 6400 or higher, but I'm not sure if it's worth it. Thanks and have a good day!
I don't think it ever was about clocks, as two CPUs with the same core count and speed can perform completely differently. Instructions per clock is a better way to look at it, as this helps you understand how much can go through the CPU per clock cycle. It's like two different-sized rivers flowing at the same speed, if that helps.
Great video. Not sure if I missed it, but what was the spec on the DDR4-3600? Was it Gear 1 or Gear 2? And, if I've read correctly, isn't DDR4-3200 in Gear 1 better on 12th gen, as it keeps the memory controller in sync, and isn't that better?
The big reason I spent quite a lot and went with the DDR5 kit (6000MHz CL36) back in early 2022 for my 12900K: it made a very large impact on gaming performance, especially after I upgraded to the RTX 4090.
@@BenState Maybe you didn't; on average across 13 games it was a 22% improvement, with some games as high as 33%. In my opinion, as I had already allocated a high budget for the PC, it was worth it: 22% is what I expect from a gen-on-gen CPU improvement. Was it critical? Of course not, but as I was building a high-end PC, it was worth the high price back then.
@@BenState If you compare just the delta of the RAM, it was around $200-250 more than fast DDR4, which would make it around $10/frame. If you compare how much it added to the total system cost, it would probably be less than 10% back then (as GPU prices were high; add case + fans, PSU, cables, CPU + cooler, SSD, etc). As I said, for the high-end system I was building it made sense; for a budget system it wouldn't.
Fantastic video and content, thanks! In my mind, I knew DDR5 was going to be faster, but this video helped better visualize just how much faster it is depending on the scenario.
I see no reason to upgrade my 5800X3D system for another 1-2 yrs. Paired with a 6900XT and 32GB of 3600MHz DDR4, it can play anything I throw at it @ 1440p with a more than acceptable framerate... throw in FSR and RSR and it's a more than capable system... Once I'm seeing new games that I play at a constant sub-100fps avg., I'll start looking at a rebuild... I could always throw in a GPU upgrade to push that out until late 2026 if I see a bargain 2nd hand card for sale. But when I build a new system these days, it's so I can get at least a 100% performance improvement over my previous one. Value for money has always been my primary concern over bragging rights... I'll buy top tier stuff... just not at top tier prices. My 6900XT was 60% of the retail price in 2022; my 5800X3D was actually less than the 3800X it replaced. I'll stick with a platform that has proven longevity... Intel don't know the meaning of the word. AM4 had 6+ yrs, as did AM3+ before it.
This is exactly what I am doing. I got a used 7900xt for 580 on fbmp but otherwise I don’t plan on building a new pc till am6 comes out in 4-5 years. I rock a 5700x3d but my brothers 5800x3d only seems to show a 4% increase to performance so I am happy with my build.
As long as you stick with a 6900XT, upgrading the 5800X3D isn't really necessary unless you suddenly fancy 1080p 480fps gaming or something. The 6900XT isn't that high-end anymore nowadays. Calling it mid-range would be a bit insulting, so it's somewhere in between. And that's where the 5800X3D is still adequate.
I'm running a similar 5800X3D/6800XT setup on X370. I may opt for one more GPU upgrade but lacking pci-e 4/5 is starting to limit my storage options. I may be tempted to upgrade to AM5 depending on what X870 brings to the table.
@@ferox63 You can use Gen 4 and 5 drives in a Gen 3 slot/bus; I seriously doubt you're going to notice a major difference in practice unless you bench it. Being stuck on Gen 3 storage is one of the worst reasons to upgrade imo.
Great video Steve. I'm on 12th gen with DDR4 and have really been wondering how much that has been affecting my system; I was estimating 10%. It's a lot more, apparently.
For Intel sure, but Ryzen sees almost no improvement and the X3D parts see virtually no improvement at all. PS: this is also DOUBLING the frequency and getting 10% to 20% for 13th/14th gen and 15% to 25% for 12th Gen.
The thing is.. in the right context that advice is actually good.. however it doesn't apply for every application... and THIS should be considered for all tech advice... there are very few universal truths
There is a difference, but not that much. Here the difference is big because they are comparing DDR5 with DDR4, which is a huge jump, but if you overclock by like 200MHz you won't see a big difference.
I upgraded from a 9900KS @ 5.1 GHz all cores, to a 7700X @ 5.65 GHz all cores, to now a 7800X3D at stock. In terms of power/throttling, long game sessions, small ambient temperature deltas and small form factor builds, the 7800X3D is awesome, especially with 50 watts of power use during average gaming sessions.
I'm certainly anticipating a couple of things: will Arrow Lake be Intel's turnaround point, and how will this new CAMM2 perform? I'm certainly looking into trying both out on my upcoming build later this year. My laptop is going on 5+ years old now, and I've decided to go back to a desktop build.
Great video, as always! It's very interesting that in some cases ddr4 is enough of a bottleneck to effectively reduce the 12900k to a 12600k. In spider-man remastered the 2 chips are almost identical with ddr4, while there is a massive delta between them with ddr5. I wonder if we would have seen similar results for ddr4 vs ddr3 if we had the platform compatibility at the time to compare them head to head like this. Fascinating stuff.
Assume that they are 1-2% slower than 14th Gen? Doesn't change the conclusion of the video which was mainly about memory scaling on intel boards and 5800X3D
I gotta be honest this is the first time I made it in steve videos to the end 😂 while that is always the case in tim videos 😅 so good job on this one. Thx steve
Meh, considering they would've had to buy a second kit of DDR5 a few years in to get any of those performance gains, I'd say it was still the right decision.
So was this test done solely on Windows 11? Since Windows 11 utilizes a different CPU kernel scheduler, this _does_ affect CPU performance on last gen processors. Windows 11 also has a new memory allocator, which could be adversely affecting DDR4. I'm interested to see this same test done on a normal Windows 10 install vs Windows 11 install (and possibly Linux, if you guys feel like it).
It would be interesting to see a video on upgrades for those who have an i9, i7, or i5 12th gen CPU paired with DDR4 RAM. Particularly swapping motherboard and RAM to DDR5, or upgrading to a 14th gen i9 or i7. Which gives the most performance uplift for the dollar?
Will there be a significant performance advantage to having 16 lanes of PCIe Gen 5 for the next gen GPUs, and a full 4 more lanes for an M.2 main drive direct to the processor, on the next gen Intel platform? Worth the upgrade from a Z690 platform? Thoughts?
This is what confuses me about what RAM speed to pair with a CPU. When I look up the max supported DDR5 speed for the i5 12600K, it's less than what you're running. So when I upgrade to a DDR5 board, should I just go ahead and future proof my memory and go over what my i5 12600K is rated for? It's just gonna run at whatever speed it maxes out at, right?
I have another video idea and this mainly originates from previous reviews of the 5800X3D, where when the settings are lowered, the X3D performs much better in v-cache sensitive titles than non X3D CPUs. This was first seen with ACC where the 5800X3D was stonkingly fast at medium settings but when the quality is maxed out, the X3D benefits diminish a little. What would be interesting to see is if the same applies to the 7800X3D at settings other than epic. Worth investigating as gamers, especially sim racers won't be using max settings when racing to keep FPS as high as possible, but I'd be very curious if other games respond the same.
I got a 5800X3D and I actually experimented with my DDR4 4400 Cl26 kit today and found that reducing the frequency even below the "sweet spot" 3600 is worth it if you get the cas lower. Just for kicks I left my MB to "auto" on all settings and it defaulted to 2666 CL20 and it improved my frametimes and fluidity a lot. I guess latency is crucial for X3D
Whatever 5800x3d content you want to produce, I want to see. Even if it is just five minutes at the start and you move to a different topic, I'll be here for the whole thing!
Curious to see how tuned B-die compares to DDR5 on the 14900K. I’m running mine on a cheap Z690 board at 4400CL16. Massive difference compared to XMP, especially in the 1% lows.
I do wish I had the 3D V-Cache. The 5800X3D totally makes sense if it's affordable and you're on the platform. The 12900KF has been great with DDR5; I can't speak to DDR4, but it's an option for budget builds at least. It depended on when you bought into the platform: $399 CPU, $170 Z board, $90 DDR5 kit, later replaced with a 48GB kit at $115. I do wish I had done a 3700X build then upgraded to a 5800X3D, but oh well.
I bought a 12900K with a DDR4 motherboard and RAM, because DDR5 and DDR5 MBs were expensive. It's good to know that there is still room for more performance if I buy a Z790 + DDR5 memory. Thanks Steve for the insight.
Hey Steve, seeing your data, I think it's kind of interesting that if you own a 12th Gen Intel you now have two paths to upgrade your system: either go all the way to 14th Gen (if you own a 12600K or 12700K) with the same DDR4 and 12th gen mobo, or just upgrade your motherboard and memory to DDR5 and keep your CPU. I wonder if you would be interested in making a video about that. I think most 12700K or 12600K owners would be on a DDR4 platform (DDR5 was unreasonably expensive by the time these parts were out, and your channel also heavily recommended it yourself), and if upgrading your system can net you a 10-30 percent performance increase, it's definitely a worthwhile investment.
A video I have been longing for for a long time: how much RAM speeds/latencies affect iGPUs, especially the newer AMD iGPUs like the Radeon 780M, which show great performance.
I grew up with old PowerMacs my father updated; a big fat L3 cache, as fast as you could get it, was utterly pivotal in getting newer software to cope well with an old aging G4 system. Things certainly have changed. Generally the only actual drawback to a large L3 cache is manufacturing cost... and in Ryzen's case, suppressed clocks due to thermals (most older CPUs with large L3 and L4 caches had them placed adjacent to, not on top of, the CPU cores).
One of my rigs has an a750 and it's given me no issues. Point was to deliver 4k 120hz which it does with no issue. Even had it in a razer egpu box using thunderbolt with no issues
@@LiLBitsDK I was looking to upgrade from ddr4 but the upgrade wasn’t feeling significant enough as I already have decent mobo and RAM so for me it was between spending 190 bucks to get a 5700x3d or to spend 450 bucks to get am5 + 7800x3d + ddr5 mem . If the 5700x3d wasn’t available I would have upgraded to am5 as the 5800x3d feels too expensive for around 280-300 $
@@LiLBitsDK yes I agree I would have gone with 7800x3d if I had a faster gpu , but I only have a 4070 and I noticed low cpu utilization especially when using ray tracing. I really wanted the x3d cache more than the IPC gains