Yes (but more seriously, based on the video it seems like they spent a lot of time revamping their methodology and developing the automation process)
@@deymas3221 The info is in the article linked in the description: "Our test system for the 285K and 245K is based around MSI's flagship Z890 MEG Ace motherboard, with supplementary testing on Asus' beautifully premium ROG Strix Z890-F Gaming WiFi. We tested with Trident Z5 Neo DDR5-6000 CL30 RAM, an NZXT Kraken Elite 360mm AiO and an NZXT C1200 Gold ATX 3.1 1200W power supply. Our graphics card is the Nvidia GeForce RTX 4090 Founders Edition, while storage duties are handled by the 2TB Samsung 990 Evo Plus and the 4TB WD Blue SN5000 PCIe 4.0 NVMe SSDs. "
Lol and Zen 5 absolutely CRUSHES Zen 4 in power efficiency, meaning the R7 9800X3D is about to CURBSTOMP the Ultra 9 285K with like under a THIRD the power draw!!! 🤣
@@Terepin64 No it's not. If you don't let the chips off the chain, Zen 5 has very notable power efficiency gains over Zen 4, especially when the chips are being pushed full tilt. (It's ofc a much smaller difference in workloads like lightly threaded gaming or basic PC use, but even then it exists.) That's what let AMD drop the R5 and R7 TDPs, before everyone revolted wanting more performance without any care for a power increase. If you unleash the power draw beast on both architectures, though, you're very much right that the efficiency gains mostly disappear, but most people don't run with PBO set to "push it as far as the silicon can handle". 🤷
@@Cooe. This is a textbook example of moving goalposts. You claimed that Zen 5 is significantly more power efficient, to which I said that it is very similar to Zen 4. And what was your reaction? "If this" and "if that". I don't care about ifs. I care about facts. And the fact is that 9700X is in no way, shape or form significantly more efficient than 7700. In fact, it's worse in gaming.
Y'all clown AMD... but unlike with Intel, we knew from the start that faster Zen 5 models were coming. You can't get mad at a company that told you gaming-centric parts were on the way just because you, the user, wanted the vanilla parts to be the gaming parts.
@@sspringss2727 Even if we ignore the fact that they're actually worse, not on par as promised, that's just sad. For your sake I hope you're a troll. Still sad, but maybe not as sad.
@@pobbityboppity1110 Honestly, neither AMD nor Intel is better. When the whole burning-CPUs thing happened, AMD's first response was also "contact the motherboard manufacturer". It took a lot of community pressure for anything to happen there. Same with Intel: it also took them ages when you consider how long CPUs had been dying behind the scenes (I, for instance, got a replacement instantly, so absolutely no problem there).
@@No1l After what Intel did with the 13th and 14th gen instability and the "lol, not replacing broken products" attitude, I ain't going back to Intel until they really justify it.
It entirely depends on the architecture. AMD is more sensitive to RAM than Intel has been for some time. Intel might have already figured out while engineering their chips that they only benefit from extra cache up to a certain point, after which it's diminishing returns at a higher chip cost.
3D cache is not a "one-size-fits-all" solution to Arrow Lake's woes. Level1Techs mentioned some egregious memory latency with the chips that could easily be fixed, and rumors of a slow ring bus impacting performance abound too. 3D cache is great for gaming but has negligible impact on many production workloads. They still overcharged for this series though, especially the 285K.
I upgraded my CPU just yesterday, from a 3800XT to a 5700X3D, and I'm very happy with the results. I have it paired with an Asus 4070 Ti and 64GB of G.SKILL Ripjaws V. Games don't stutter anymore and I'm getting better framerates.
Intel made a big mistake with this launch. These CPUs are clearly not ready for market. They're essentially stacking up negative reviews for a product line that could turn out to be good someday after updates and tweaks.
And it's a terrible generation to choose for resetting the naming scheme. Now people will think Core Ultra is some experimental, handicapped product line for niche uses.
@@ezaike335 Good thing Max has a nice cushion from when the RB20 was dominating the first few races. The WDC is still his if he doesn't do badly in the remaining races and the car doesn't develop more problems. It's a bit of a cliff edge for him though; another 2-3 races of falling back to 5th-6th and it'll be close.
There's only so much CPU performance we can chuck at an engine before we hit the theoretical limit, devs need to start optimising their shit properly and start utilizing everything we can throw at it.
They've been trying to optimize games for multicore since the Xbox 360, so for nearly 20 years now, and they still can't do it well because games are inherently single-core limited.
The whole "hurr durr just optimize" line is only trotted out by halfwits at this point, who think a game like Flight Sim 2020 would run fine on their 2500K if only devs would just oPtImIzE!!1 The reality is that for every terrible PC port with poor CPU utilization, there's a truly groundbreaking game stretching at the limits of what's possible with current hardware. If you want games to keep evolving, hardware needs to as well. There is no "theoretical limit" to CPU performance where we need to give up. AMD and Intel just need to deliver better products, rather than these half-baked architectures that are designed for server applications first and the desktop a very distant second. Zen 5 was a massive leap in datacentre applications, despite the lukewarm consumer reviews. There's plenty more both companies could be doing to serve the gaming market - they simply choose not to because there are bigger profits to be made elsewhere. Nvidia is exactly the same on the GPU side of things.
@@CaptainKenway It goes both ways, but video game devs often really aren't optimising well. That's because they don't keep staff around long enough to build proper experience, and things are constantly changing.
Inbuilt benchmarking sequences are not a realistic test of CPU usage. Automated camera sweeps of CPU hotspots are interesting though; that's much better.
Those benchmarks are great! I'm glad you're spending more time on the actually interesting parts and not having to tediously run those benchmarks yourself. Thanks for the video, Richard.
We might be getting close to the limits of the silicon manufacturing process. Games will have to be built with wider core counts in mind. A lot of games still only use 4 cores, and if it's DX11, the game won't even look at 12 cores or higher; they sit there idle, or might run at 10%. I have a 5900X and I'm starting to become CPU limited in certain titles. Thing is, my CPU is only being utilized at about 41% in those titles. Look at the per-core usage to see what's going on, and it's hitting cores 1-4 hard at 70%, cores 5-8 at around 40%, and next to nothing after 8 cores. My third PC has an 8600K installed, and that hits 60-80% on the regular with only 6 cores, with most of the work still being done on cores 1-4 with hyperthreading.
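If you want to sanity-check that kind of per-core imbalance yourself, a rough sketch like the one below will print per-core utilization while a game runs. This is just an illustration using Python with the psutil package (my assumption, not anything DF uses):

```python
# Minimal per-core utilization sampler (assumes the psutil package is installed).
# Run it while a game is active to see whether the load is spread across cores
# or piled onto the first few, as described above.
import time
import psutil

SAMPLES = 10     # how many one-second snapshots to take
INTERVAL = 1.0   # seconds per snapshot

for i in range(SAMPLES):
    # percpu=True returns one utilization percentage per logical core
    per_core = psutil.cpu_percent(interval=INTERVAL, percpu=True)
    busiest = max(per_core)
    average = sum(per_core) / len(per_core)
    print(f"sample {i + 1}: avg {average:5.1f}%  busiest core {busiest:5.1f}%")
    print("  " + " ".join(f"{p:4.0f}" for p in per_core))
```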
I love the performance being normalised as a percentage of the product being tested; it's so much clearer to read and compare than ever-changing FPS numbers. Really surprised to see this for the first time only now.
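For anyone unsure what that normalisation means in practice, here's a tiny sketch with made-up numbers (not DF's data), where the product under test defines 100%:

```python
# Hypothetical illustration of normalising results to the product under test.
# The FPS figures below are invented; only the percentage scale is the point.
results_fps = {"285K": 118.0, "14900K": 131.0, "7800X3D": 142.0}
baseline = results_fps["285K"]  # the CPU being reviewed = 100%

for cpu, fps in results_fps.items():
    print(f"{cpu}: {fps / baseline * 100:.0f}% of the 285K's result")
```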
Well, there are two things that could come into play here: a) the tile-based approach is basically chiplet technology, which introduces a massive increase in memory latency; b) removal of HT on the P-cores might additionally fuck up performance. Unlike productivity tools, games won't run well on the E-cores, so you're effectively bound to half the threads, which now have 80ns+ latency according to AIDA64 tests, over 30% higher than before. If I had to guess, that's where my money would go, and nothing is going to fix that apart from a new arch. I think this arch could do well in laptops, as it still performs impressively at very low TDPs, better than any AMD offering there. For desktop and high-end gaming, not so great.
All recent CPU launches seem pretty crummy honestly. Usually I would say if you're building new to head for the newest platform, but if you're on a budget, AM4 with a 5700x3D is dirt cheap and super performant still. By the time you'll really need an upgrade a newer platform will likely be on the horizon or out, and like I alluded to, performance from current launches isn't exactly jumping off the charts.
AMD: Ryzen 9000 has a 5% uplift in gaming. AMD post-Arrow Lake: Ryzen 9000 is 10% ahead of the competition in gaming. At least we expected this, but it still doesn't look good for Intel at all.
For the health of the industry I really hope this is sort of a Zen 1 moment for Intel. Taking a performance hit in the first release but laying the groundwork for better things to come. Otherwise this could be very bad for competition.
UM... has Intel basically been focusing on MOBILE designs and then cranking up the frequency/voltage for desktop which blows out the power requirements? Because it sure doesn't feel like this is optimized for desktops. At this point "e-cores" still make me laugh when the CPU power usage is through the roof. I'm guessing the "APO" optimization thingy is focused around managing POWER/temps more than anything else. Again, a focus on MOBILE?
I'm old enough to remember when Intel used to dominate the CPU market with its 14nm re-re-re-refresh CPUs. How the turns have tabled since the days when you were at the "cutting edge of innovation" with your 14nm CPUs.
Yup, they were top tier. But once AMD got Ryzen's kinks ironed out, Intel has been behind. Maybe we'll see Intel do the same with their new architecture/chips.
I too can remember the faraway times of 2017, when Digital Foundry had to inscribe benchmarking results on clay tablets and have them delivered by pigeon.
@@leong4352 At least 10th gen 14nm had good products. I wanted to see a 10-core CPU with Raptor Lake IPC and no E-cores, then let the end user tune the system. Now there's nothing to do.
Really cool that you guys have an automated system set up, but could you either slow the playback down a bit or make it picture-in-picture or something? I was getting a little motion sick, especially during that Cyberpunk one, and I'm not usually even susceptible.
Really not surprising given Intel's messaging pre-launch. AMD slammed Intel on value and workhorse core counts way back, and Intel's complacency was laid bare a few years ago. Now Intel has walked away from the power/temps dead-end road to try to make their CPUs efficient (remember, corporate OEMs don't want support headaches on high-end builds), and that early gaming dominance is now shaky. Add in how terribly this launch was marketed and this gen is a PR nightmare at best. It's all about price nowadays, really.
These DF videos are the go to for benchmarks and comparisons. I'm glad we finally have some for these Intel CPUs, but can anyone explain why we haven't had videos like these for the recent Ryzen CPU releases?
AMD have got this cracked. There are two market segments for desktop CPUs: gaming and productivity (mobile/laptop CPUs are so good now they can handle boring office mini-PC duty). X3D for gaming and the 9950X for productivity is a perfect combo to cover that space.
@@Clockwork0nions How is it a mistake? From what I can see the new chips are still performing well for productivity without it, and games were never written to benefit from SMT/HT anyway. What am I missing?
Looks like AMD is reclaiming the throne in gaming CPUs. The last time I owned an AMD CPU was during the Athlon's reign. I'm looking forward to upgrading from my Intel to an AMD soon.
Prescott, Rocket Lake, now this. There's no-one quite like Intel when it comes to performance regressions with their latest and greatest architectures. (remembers Phenom I, Bulldozer, and Zen 5) Well, almost no-one.
Intel has acknowledged the findings of PCWorld and other reviewers, confirming that performance decreases when applications are operated in Windows' default Balanced power mode. In which power mode were your tests conducted?
1:29 Are those power draw figures correct? If so, do you realise what you're saying about those results? The 285K at 423 WATTS is "comparable" to the 9950X? That's not "compelling" at all.
Even worse is that an entire mid-range desktop ten years ago only required a 500W power supply, making any claims of sustainability by Intel or NVIDIA null and void.
the power reduction is actually nice but then intel decided to price these as if they were also performance leaders and ruined it. the likelihood they're only gonna support this platform like a year or two and that there's already better / cheaper options from past gens makes these chips pretty worthless for gaming purposes. i'm ready to move on from my old 8700k and i wouldn't even think for a second about swapping to arrow lake. what a lousy launch
Happy you guys can communicate the salient points in under 12 minutes. With other channels you have to watch 30-40 minutes of meandering over minutiae to get to the point.
Probably the thing I like most about this channel too, besides all the obvious. What takes other channels half an hour to spit out these guys can do in 15, usually far less, and way more effectively.
Excellent video, and I really like that you made the point about testing being a snapshot and that it's important to look at a variety of outlets to cover more scenarios.
I wonder if the better efficiency of Arrow Lake would make it more interesting for gaming laptops and handhelds. Not better performance, but I assume better battery life. That's something you will look into once we get such devices, right?
The results in the problem games are way too low and inconsistent for this to be an architecture problem. This is most likely something in Thread Director that isn't playing well with Windows.
I'll be interested to see over the coming weeks/months whether Intel can get these 200-series K chips to at least 14th gen performance. I wonder if it has to do with the higher memory latency (which apparently was around 80-120ns). If that can be solved with a microcode update or something else, then it would be on par with Zen 5 in terms of efficiency improvements with near-identical performance. Right now it seems all over the place, which is strange. Maybe the compute-heavy tasks with better memory access patterns are the ones running well, and the ones with less optimal access patterns, which do better with 3D V-Cache, are the ones that really bring down its performance because of the memory latency? That's really the only thing I can think of. I don't think it has anything to do with the scheduler, because that was sorted out quite a while ago with big.LITTLE architectures a few generations back. If there are synthetic benchmarks that test memory latency (L1, L2, L3, main memory), we could do direct comparisons. I also heard there were driver issues with regards to graphics cards (involving instability), so maybe there's something there.
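For what it's worth, those latency benchmarks boil down to pointer chasing through a working set that's either smaller or larger than each cache level. Here's a very rough Python sketch of the idea only; the interpreter adds a large constant cost per access, so the absolute numbers are meaningless, and you'd want AIDA64 or a C microbenchmark for real comparisons:

```python
# Rough sketch of a pointer-chasing memory latency test.
# Python's interpreter overhead adds a large constant cost per step, so treat the
# absolute numbers as meaningless; only the jump between a cache-sized working
# set and a RAM-sized one is (loosely) visible here.
import random
import time

def chase(num_elements, steps=1_000_000):
    # Build a single random cycle so each access depends on the previous one
    # and the hardware prefetcher can't guess the next address.
    order = list(range(num_elements))
    random.shuffle(order)
    nxt = [0] * num_elements
    for i in range(num_elements):
        nxt[order[i]] = order[(i + 1) % num_elements]

    idx = 0
    start = time.perf_counter()
    for _ in range(steps):
        idx = nxt[idx]  # dependent load: can't start until the previous one finishes
    return (time.perf_counter() - start) / steps * 1e9  # ns per access

for n in (1_000, 100_000, 10_000_000):  # roughly cache-sized vs RAM-sized lists
    print(f"{n:>10} elements: {chase(n):6.1f} ns per access (incl. interpreter overhead)")
```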
It's interesting because I recently purchased an Alienware gaming laptop with a Core Ultra 9 185H in it, and like with everything I get, I began to do a deep dive into the CPU architecture from articles on the internet. The interesting thing is that the 185H has multithreading, but Intel abandoned it because it wasn't as efficient as they hoped it would be. When you compare it to the laptop variants of the Ultra 200 processors, the newer one is better at lower wattage, but the 185H pulls ahead at higher wattage. It makes me wonder if the older architecture would have made a better desktop chip.
Seems like both Intel and AMD released their new CPU architectures before they were ready. And while that just resulted in disappointing gaming performance gains for AMD, it resulted in the same for Intel alongside stability issues. I think Zen 5 and Arrow Lake should both have launched in early 2025. The 9800X3D looks like it'll be interesting.
Zen 5 would at least boot up properly. The Ultra 285K doesn't even boot up! Leo at KitGuru spent 15 minutes at the beginning of his review just showing how to get ARL to boot!
I have finally decided to go with AMD. I will wait for the 9800X3D, 9900X3D and 9950X3D to see which has better performance for productivity and then go with that. Already decided on the motherboard (Asus ROG Strix X870E-E Gaming WiFi) and 128GB of Corsair Vengeance DDR5-6600 CL32 (or Dominator at the same speed if it's available).
What happens when you disable the E-cores? Do things improve and stabilize? I'd imagine it's a combination of a bad scheduler, bad CPU microcode, and maybe an increase in P-to-E cross-communication latency. It would also be interesting to see whether it has something to do with the lack of HT, and maybe OS scheduling? It would be interesting to disable HT on 14th gen processors too, and to run some of these games on Linux; that would point out any obvious OS scheduling issues. The chips seem to do pretty great in productivity. The gaming results are weird because games can behave badly, and they're sorta all over the place, so something has to be going on.
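One cheap way to experiment along those lines without a BIOS trip is to pin the game process to a chosen set of logical cores and see whether the results change. A minimal sketch using Python's psutil package (my assumption; the process name and core indices below are placeholders, and the P-core/E-core mapping varies by chip):

```python
# Minimal sketch: restrict a running process to a chosen set of logical cores.
# Assumes the psutil package is installed. TARGET_NAME and P_CORE_INDICES are
# placeholders - check your own CPU's P-core/E-core layout before using them.
import psutil

TARGET_NAME = "game.exe"         # hypothetical process name
P_CORE_INDICES = list(range(8))  # assumed logical indices of the P-cores

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == TARGET_NAME:
        proc.cpu_affinity(P_CORE_INDICES)  # scheduler now only uses these cores
        print(f"Pinned PID {proc.pid} to cores {P_CORE_INDICES}")
```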
I could use advice from anyone who truly knows what he's talking about. I finally want to build a new gaming PC after 11 yrs (Feb 2014). I've been putting it off for ~4 yrs for various reasons, but now the latest reason --- thinking the next Intel gen would be much better for gaming than "Big/Little" --- went down the tubes. It's _still_ "Big/Little" but now with _no Hyperthreading_ and a _lower_ max frequency. If I don't want to go with AMD because I mostly play older games and don't want to deal with occasional performance/compatibility issues, and I need the best single-core performance possible, what CPU should I go with? I had already seen the writing on the wall, but this situation is anything but joyful.
Intel's fumble being sad and all, but can we get access to these benchmarking mods? They seem super cool, and it would be nice to be able to compare results to DF benchmarks with overclocked components.
Been an Intel guy since my first computer back in 91. Had a z890 motherboard and 285k pre-ordered but once I saw the benchmarks I canceled. First time in my life switching to AMD. I'm not mad, just a little disappointed. Intel is definitely moving in the right direction but there's obviously some maturity needed. This is the walk before you can run step. I'll see how they are in the next gen.
AAA games are disappointing. CPU progress has stalled. GPUs are incredibly expensive. Many games are unoptimized. Game studios are shutting down, leading to significant employee layoffs. We're seeing unfinished games priced at $70. There's widespread frustration with the DRM implemented in many titles. Goodwill towards publishers and studios is at an all-time low. Outlets like IGN and mainstream journalists often give 9 out of 10 ratings to mediocre games. Fans and developers are arguing on Twitter. There's a lack of innovation in high-budget games. Exceptional games are being remastered and re-released, but they're often dumbed down to appeal to a general audience, losing their charm in the process. Publishers are neglecting their older intellectual properties. Corporations are spending billions to acquire other companies instead of investing that money in developing video games. Everyone seems to want to create a Fortnite clone. There are numerous copies of souls-like gameplay that lack true passion and creativity. Genres like racing, horror, beat 'em up, single-player FPS campaigns, and stealth games are being ignored. There's franchise fatigue with series like Assassin's Creed and Call of Duty. Meanwhile, Nintendo continues to create exceptional games on tablet hardware. The gaming community is feeling increasingly alienated, as engagement often feels transactional rather than genuine. The focus on live-service models has shifted attention away from crafting standalone, cohesive narratives. There's a growing demand for more diverse storytelling and representation in games, yet many AAA titles stick to safe formulas. The rise of microtransactions has left players frustrated with paywalls in what should be complete experiences. And every big studio is migrating towards UE5, an engine that has become synonymous with stutter. Can anyone sense the calm before the storm? Four years ago, when the PS5 and Series X launched, nobody could have predicted that by 2024 we would be looking at indie games as our saviors.
Lol, it was way before the PS5 and Xbox Series X. It was more like the PS3 and Xbox 360 era that started the whole "release now, fix it later" approach, when devs discovered that a console generation with widespread internet access meant patches and updates could be delivered by download. As for representation, we do get it; it's just that most of it is badly done. Some of it can be great though. BF5 shows both how much representation you can include and how badly you can do a game. AAA gaming in general has been mostly meh since even before COVID.
You gotta unplug from the internet discourse, bro. We actually have too many bangers to play and too little time to waste arguing on the internet. Like this year alone we got Final Fantasy 7 Rebirth, Astro Bot, Metaphor: ReFantazio, Elden Ring SoTE, the long-awaited Dragon's Dogma 2, the best new installment in the Tekken series since forever, Helldivers II, Black Myth: Wukong and Stellar Blade from the newly emerging, impressive East Asian AAA scene, Senua's Saga which was a true next-gen graphics showcase, even Silent Hill 2 Remake seems to have turned out unexpectedly good, and the upcoming Monster Hunter looks phenomenal too. All perfectly playable and great-looking on a standard PS5, or even a much cheaper Series S for most of them. Not to mention the insane backlog that most gamers probably have... I can't even find the free time to play every game I want, let alone be disappointed by a lack of good games lol.
AMD, I know you're not a charity company, but damn thank you for the AM4 platform and my current 5800X3D, I'm still happy and am completely satisfied with this processor ; )
Are you testing with 24H2, Rich? If not, it would be interesting to see if AMD ekes ahead a smidge more in some of the tested games. Though from what I've seen, other outlets are reporting the Intel Ultra CPUs are apparently slower on 24H2 vs 23H2 and below.
I think Intel should have released Arrow Lake as a Xeon so as not to waste the R&D; it just isn't a consumer-focused CPU for a market where gaming is a big part of the buying decision.
Disappointing, because I was planning on upgrading from my i7 9700k to the 265k or 285k. Perhaps AMD and their upcoming 9000X3D cpu will be a better upgrade.
I don't get it either. Back in 2005, I played Kameo on my Xbox 360 and that game had no problem displaying hundreds of enemy orks on screen at the same time. That game is almost 20 years old and still blows many "next gen" games out of the water with the amount of stuff happening on screen simultaneously. In today's games the game worlds are also much more static and less interactive.
Looks like I'll be keeping my 5900x for another couple of years unless the 9900x3d/9950x3d turn out to be surprisingly impressive. Not holding my breath for this generation though.
Some reviewers are getting better performance in Cyberpunk 2077 than a 14900K with the 285K. With some, it's the worst CPU tested and even worse than 12th gen. I wonder if these differences are caused by Windows versions or the High Performance/Balanced Bug noted by other reviewers.
Intel now being a victim of their own excess is quite poetic. Let me explain... Intel has been struggling to keep up with the improvements AMD has made to its Zen architecture. For a while, they just kept increasing power draw to make up for the performance they needed, but they hit a ceiling with the 13th and 14th gen chips. So they completely redesigned their processors for Core Ultra, moving to an AMD-inspired chiplet design, which has brought some major efficiency gains across the board. So why isn't Core Ultra being received with the same praise as Ryzen was in 2017? Because the last AMD desktop architecture before Ryzen was Piledriver, released in 2012; AMD devoted its resources entirely to developing Ryzen for about 5 years. Intel, by contrast, released the 14900KS earlier this very year and the 14900K only about a year ago, which meant it didn't have time both to increase efficiency and to match its previous performance. This launch was rushed, and Intel's own older CPUs are showing it quite clearly. Intel would have been better off taking the 'L' like AMD did a decade ago and pausing new flagship releases while it worked on the new, more efficient architecture. If we remember back when Zen 1 was released, it still wasn't quite a match for Intel on single-core and IPC metrics, but it was so much closer than their previous architecture, and it seemed so promising that people were hopeful it meant competition in the CPU space again. By the time Zen 3 (Ryzen 5000) was released, AMD had maintained its efficiency lead while catching up to and even overtaking Intel in single-core performance and IPC. I really hope this poor reception acts as a wake-up call for Intel and pushes them to slow down product launches while they work on making their architecture as efficient as AMD's.
Because the first AMD Ryzen processors at least offered way superior multi-thread performance compared to similarly priced Intel options in those days. This offers, well, uhm, nothing compelling really.
They will never learn. As long as they have Pat Gelsinger as CEO they will keep going downhill. When a CEO calls NVidia "lucky" for succeeding in the AI space you know he lacks any strategic vision. Intel is doomed.
I'm def sticking w/ my 14700K + 4090. Might consider Intel Bartlett Lake-S if it's a nice boost over the 14900K and I get to keep my mobo. Fortunately no need to upgrade to an Nvidia 5000 GPU bc the 4090 is still held back by the CPU. We need more CPU performance.