"And after a lot of 14th gen units were sold, proceed to support 3 other games. You want support for all the popular games? Maybe at 15th gen. Or maybe not. lol" - Intel
That legitimately fair argument aside, my expectations have totally been exceeded with my 14700K. I was so concerned about heat and power use that I went out and bought new fans and new thermal paste, and man, it wasn't even necessary. It runs pretty cool, even with my 240mm AIO.
This might either hint at a fundamental hardware flaw in prior generations or needless product segmentation to artificially increase the attractiveness of the 14th gen. Either way you look at it, it ain't pretty.
There's a fundamental difference between "further optimization" and "fixing basic functions", and imo fixing the E-cores leans more toward fixing than optimizing. I understand Intel still needs something to sell 14th gen with, but still: in GPU terms, it's like cutting driver support for the 20 series when the 40 series comes out. Just an underhanded tactic.
You're blowing this out of proportion. This is not the same as cutting support, WTF. This is like NVidia supporting DLSS 3 only on RTX 3000 GPUs. Not adding a new feature to an older gen is not the same as "cutting support" or "cutting driver support". It's less support.
@@Winnetou17 That would be true if there weren't hardware changes between the 30 series and 40 series. The 13900K and 14900K are the same CPU, just clocked differently.
@@Winnetou17 DLSS 3 only works on the 40 series, not the 30. And that's because it has specific hardware that lets the AI do its thing and generate frames with minimal artifacts and delay. If this new feature can be added to older gen hardware with no issues, then they should do it.
12600K owner. I'm so frustrated. big.LITTLE has never delivered the behavior they promised, and now I'm being locked out of the fix. Forcing me over to Windows 11 was not a fix, it was just aggravation. I early-adopted the new arch because I really wanted to use an Optane accelerator; Intel quietly software-locked 12th gen out of Optane support, so when I built my system I spent an hour poring through the BIOS trying to figure out how to get it running and wondering why Intel's web instructions weren't working for me. Overall it's been a pretty bad experience, and one Intel curated for me. Based on my 12600K experience, I'll be very reluctant to adopt Intel proprietary technologies in the future.
Going to give a partial answer to your question. Metro was selected because they wanted a "test bed" for single-player games, and another for fast multiplayer games. The premise was that if it worked with Metro, it would work on others. One of our own games was considered briefly and may end up there in the future. That said, I am not sure it will work the same way on 12th gen, as those chips have fewer E-cores. I think the reason it has been limited to the 14900 and 14700 is that both have 10+ E-cores, which allows the application to put 4 or 6 of those on gaming work while the remaining ones just handle very light background tasks. As such, I also do not know how much of an increase it would provide on the 14600 or even most 13th gen parts. In regard to just the way it works, though, it is quite impressive, and I can see it being applied to areas other than games. Realistically, it will be extremely difficult for Intel to cover hundreds or thousands of games. I would love that to be the case: as someone working in the industry, I can see an application of this nature as a very handy tool that could ease optimization work in games.
It's still funny to me how one of the few use cases for E-cores in gaming near launch was emulating optimized PS3 games (as in, games that utilized the Cell processor's small cores). So not even an official release, but a community emulation project.
Is there a follow-up video to this? 12th and 13th gen are now supported. Also, could you elaborate on what this is actually doing, and whether it will potentially help games not listed if you only enable it in the BIOS and install the motherboard DTT driver? I mention this because DCS (Digital Combat Simulator) was apparently affected quite a bit by the E-core issue. The game would regularly show signs by loading extremely slowly and becoming non-responsive. After enabling it in the BIOS and installing DTT (not using APO), these issues stopped immediately. Whereas I used to have this happen multiple times per day, it has not happened at all since making the change.
It’s now compatible with 12th & 13th gen, depending on the board and available driver. It’s just an optimised scheduler, as Windows' own scheduler isn’t great for the big.LITTLE architecture of new Intel chips.
@@MrAckers75 I think they may be disabling SMT in some instances as well, judging from comments from Intel employees around Lunar Lake. I would love to see a deeper dive on this once Arrow Lake drops later this year.
Definitely shitty that Intel is withholding APO from 12th/13th gen. It seems like such a great and easy way to garner goodwill from consumers. However, I don't really feel like I'm missing out on anything because I don't play Metro EE or R6 anymore. Hopefully Intel can perfect APO and it will include a long list of new games when 15th gen launches. Could be a really cool feature someday.
@@skydrake1833 Well they should, because brand loyalty is a real thing. And AMD is a legit competitor. I know people who have "switched sides" in recent years
@@MrJohannson They do need it. Intel is facing financial uncertainty. Though I don't think 14th gen is as bad as everyone else says. I mean, now you get 13900K performance for i7 prices... good for everyone on LGA 1700 who didn't already have a 13900K, imo.
I see a lot of hating on Intel, because if AMD were doing this they would be getting praised for it, just like they were with Mantle, which did not deliver the performance in games that it was hyped up to be. The double standards that exist in the PC ecosystem completely blow my mind, and it needs to stop. Both companies have F'd up and screwed consumers and gamers over with shady business practices, and when they get exposed they should both be called out, not one getting bashed while the other gets a free pass.
So, I type this on a 13900K; I've had this CPU since October of last year. First things first: it is a wonderful CPU for workstation use and gaming. It gets high fps. Here are some problems that most don't talk about:

1. Random heat spikes to 70-75°C from 48°C, even when doing borderline nothing, such as watching a YouTube video or sitting at the desktop with no apps open. There is no fix. This is my third motherboard with the same issue. The only thing that carried over from previous builds is my 13900K; the rest was either altered or tested.

2. Random latency spikes in gaming, which result in pretty big stutter every 2 minutes or so. Zero explanation for why it happens, but I speculate the scheduler cannot handle the P-core to E-core variance adequately. This is at 5.5 GHz P-cores and 4.3 GHz E-cores, with SpeedShift, SpeedStep, and every other C-state disabled, along with voltage limitations.

To put it bluntly, I love this CPU when it works, because it works phenomenally. When it doesn't, it sounds like a quick dubstep remix, and you could be doing barely anything in the game. You will see a huge spike in ms on the MSI Afterburner frame-to-frame graph, and the FPS dip happens for a split second. This happens with both an RTX 4080 and an RTX 4090. I thought it was just my CPU, until I watched benchmarks from different channels and saw that same lone huge bump in latency. This is not addressed. I really think running cores at different frequencies is a bad idea; it causes stutter, just like SpeedShift and SpeedStep do, which is why I have them off. APO = market segmentation.
Oh yes. Anti-consumer practices. Got to love them. This just motivates me to keep running my old systems without upgrading as long as I can. Edit: so I don't need to buy from those greedy companies...
I’m on a 12th gen and thought about upgrading to a 14th gen CPU, but if Intel are going to be this shitty, then it’s a much easier decision to just sit on this for another few months before moving back to AMD. Fuck ‘em.
So now the CPU needs a game driver. Wild. And the FU to customers is just rotten. Really hard to justify a future purchase based on this initial rollout.
One additional test that I think would be interesting: running the games with APO disabled while setting the CPU affinity on the game process to only allow running on P-cores, either through Process Lasso or manually via Task Manager. In theory that approach could provide a working optimization for the 12th and 13th gen owners Intel is leaving out to dry with APO. I suspect it may still not be quite as good, as CPU affinity only affects the running game process while the GPU driver code may still run on E-cores, but even so it may be an improvement over nothing at all.
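For anyone wanting to script that test rather than click through Task Manager, here's a minimal Linux sketch of the same idea (on Windows you'd use Task Manager, Process Lasso, or a library like psutil instead). The assumption that logical CPU 0 is a P-core thread matches typical hybrid layouts, but you should verify your own topology (e.g. with `lscpu`) before pinning a real game:

```python
import os

def pin_to_cpus(pid, cpu_ids):
    """Restrict a process to the given logical CPUs (Linux only).

    pid 0 means the calling process. Passing something like range(16)
    would keep an 8P+HT game off the E-cores entirely; the exact
    indices are an assumption, so check your layout first.
    """
    os.sched_setaffinity(pid, cpu_ids)          # kernel schedules only on these
    return sorted(os.sched_getaffinity(pid))    # read back the effective mask

# demo: confine this process to logical CPU 0 only
print(pin_to_cpus(0, {0}))  # [0]
```

As the comment above notes, this only constrains the game process itself; driver worker threads belonging to other processes can still land on E-cores.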
... it's basically overclocking four E-cores. And they locked that down to 14th gen? They made a similar mistake with Optane, but this is even worse. I wonder if we could just rename the processes in other games with those of Metro and R6 and see if APO «triggers» somehow.
Intel is pulling an Nvidia here, trying to bank on software features to sell future products, just as Nvidia does with DLSS. So I do hope that reviewers take the printed "review guidelines" offered by manufacturers, set fire to them, and then proceed to test the hardware in its raw state without DLSS or software fixes, preventing vendors from using such features as a crutch to ship lower-performing hardware that relies on software to work.
I don't have those games on Linux, but I'm sure you can get the same behavior by locking the game to the P-cores in WINE using an environment variable. It does give a better boost, and more consistent frame times, than just disabling the E-cores on Linux. I think the same solution could be applied to the X3D chips that have the 3D V-Cache on only one CCD.
@@JonathanSias You just add WINE_CPU_TOPOLOGY=4:0,1,2,3 %command% to the launch options of the game you want to play. The 4 is the number of cores you want to use, and the numbers after it are the core indices, starting from 0 for core 1. So if you want, you can also avoid hyperthreading by either skipping every other number, e.g. 0,2,4,6, or taking only one half, e.g. 0,1,2,3. E-cores are usually the higher-numbered cores, and on AMD X3D chips the V-Cache cores should be either the first half or the latter half. EDIT: I forgot to mention that if you want it always applied to Proton/Steam games on Linux, you just need to add WINE_CPU_TOPOLOGY=4:0,1,2,3 as a new line in /etc/environment as sudo/root.
Imagine buying intel for gaming when the x3d chips from AMD exist. 5800x3d supported on 1st gen ryzen boards vs. can't provide APO support to 13th gen intel chips for no reason other than convince you to upgrade. Says it all really.
I was considering upgrading my 12th gen rig to one of the higher-class 13th gen CPUs with more cores and performance, but after seeing this, I am not. I suppose my options are to go to 14th gen or to switch to AMD and never look back.
If it's as simple as freeing up cache for the P-cores, couldn't or shouldn't this theoretically be done at the hardware level? Most background processes really don't require a lot of cache and rarely benefit from faster cache; and isn't the entire point of E-cores to have something that uses less energy, specifically to free up resources for the P-cores?

Think of it like a turbocharged engine: you get more power when flooring it than a naturally aspirated engine, but fuel consumption in relaxed driving stays roughly the same, because you don't need anywhere near that much power for cruising at highway speeds.

Honestly, I don't get why Intel would even release E-cores before figuring out a way to make sufficient use of them for their intended purpose. APO only achieves that in a very limited number of applications, with a kernel driver that likely only works on garbage old Windows. Before you have any of that figured out, what's the point of having them? Other than the obvious Cinebench-accelerator jokes.
Intel should make this feature subscription-based with different tiers. For example: 5% performance increase for $9.99/month, 10% increase for $19.99/month. The moment your subscription expires, your E-cores stop working.
I only just noticed the Hardware Unboxed fractals remind me of the Metro Trains Melbourne ones. I'm not in the Discord so I wasn't sure where else to say this. Either way, I very much foresee people hacking Intel's tools to support those earlier processors too.
Would like to point out that I've seen a few games utilize the 7950X3D in ways similar to APO (using all X3D cores plus a couple of non-X3D cores) and legitimately provide a perf benefit, better than any combination of cache-preferred mode, Process Lasso, Game Bar, disabling a CCD, etc. A little unfortunate (but also cool?) that the best performance with these heterogeneous chips may be somewhat complicated, in that it uses all P/X3D cores and then a few E/non-X3D cores for smaller tasks.
AMD at least had a technical reason (the BIOS storage limitation of the old boards). But they too could have anticipated the backlash, come up beforehand with the solution they eventually went with, and avoided the bad PR.
APO is the most interesting thing about "14th" gen, but it is very much an Intel thing to not support it on the functionally identical 13th gen... at the very least. I imagine the reason they aren't is because this is about the only party trick that makes the 14th gen worth existing, on a very small level. If they supported it on 12th and 13th gen, it would make 14th gen look even more pointless.
That's why I stopped using Intel! I have been building PCs since the 2000s, and I had to buy a new mobo every time I wanted to move up! Since the AM4 platform has existed, I was able to upgrade my 1700X to a 3700X to a 5800X, and I'm so happy with that.
As someone that has had the small budget window to upgrade my rig earlier in the year for my birthday, I'm now kicking myself for celebrating in July. COME ON.......13th Gen.
I noticed that by using Process Lasso and setting Efficiency mode OFF, some games achieved a 5-6% improvement. It's not much, but I think it's the same thing Intel does in a more refined way with APO.
I wonder... if you could somehow fool the program, i.e. spoof a 13th gen chip to appear as 14th gen, would it be able to run APO and get the benefits? Reminds me of when some Nvidia features were locked to specific Nvidia hardware via chip ID.
Intel APO is a driver-like optimization feature exclusive to 14th Gen K-series (and KF-series) processors that enables specific optimization to the E-cores to boost framerates. ------LOL
i was told they were flawless, godsent masterpieces of silicon when ADL launched... weird that we need some sketchy software to get two games to properly work.
One would assume this is indeed how E-cores were meant to work. But they’ve been a PITA for gamers. And now we’re in an era where we have to download “game-ready” drivers for friggin CPUs. I hope AMD don’t go hybrid. Edit: I shouldn’t have written that last sentence. Ryzen 7950X3D does require drivers to disable the non-3D cores for recognised game apps.
@@Noe2iq I don't even get how "game-ready" drivers are supposed to be different. Is that just if AAA_title_1: optimize_for_AAA_title_1() elif AAA_title_2: ... ?!
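Honestly, that joke isn't far off how per-application profiles generally work: a lookup table keyed by executable name, consulted when the process is detected. A toy sketch, where the game names and settings are invented for illustration and are not any vendor's real profile format:

```python
# Invented example profiles; real drivers ship these as opaque databases.
PROFILES = {
    "metroexodus.exe": {"gaming_ecores": 4, "park_remaining": True},
    "rainbowsix.exe":  {"gaming_ecores": 6, "park_remaining": True},
}

def profile_for(exe_name):
    """Return the tuning profile for a detected process, or None."""
    return PROFILES.get(exe_name.lower())

print(profile_for("MetroExodus.exe"))  # matches case-insensitively
print(profile_for("somegame.exe"))     # None: no profile, default behavior
```

Which would also explain why support arrives game by game: someone has to hand-author each entry.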
They’ve got to shift this twice-refreshed Intel 12th gen somehow, but I’d guess they’d hopefully add it to 12th and 13th gen eventually, in 6-13 months maybe. Edit: I just finished the video. No, OK, f Intel. Once my 12600KF runs its course or can't keep up anymore, I’m going AMD, assuming they’re still competent by then.
Intel just wants to be #1 in the benchmark graphs. Developing APO is likely labor-intensive as it is. Intel already has enough mindshare, I guess, so they can screw their own customer base over a little.
Unless there is a source code leak or some hefty decompiling/reverse engineering by the community, I don't see this happening. It's quite easy to block off APO based on CPUID in the driver and not within application itself. Of course this can be circumvented as well, but the means and effort required would scare off like 97% of the potential users.
@@PK-lk5gs Could probably spoof CPUID requests in that case, which could be as simple as a program loaded at start-up. But I think you'd need a CPU with matching P-cores/E-cores, which... could be problematic. I have a 12650H in my laptop, and there just isn't a matching 14th gen processor, so trying to spoof would likely lead to bugs or a crash or something. All speculation at this point, because I have no clue how APO works aside from 'make E-cores work better'.
@@thisisashan It is indeed all speculation at this point, but I meant that if APO operates at driver level, then it has to be signed and integrity-checked at OS start-up. And querying CPUID is at least a system call, which you wouldn't want to mess with, or probably (since this is driver level anyway) it queries CPUID with an assembly instruction executed directly on the CPU. On the other hand, some clever disassembly and perhaps some self-signed certificates used for bypassing integrity checks could resolve the issue 🙂 That is exactly the part that might scare people away.
@PK-lk5gs If it's driver level then it would be the mobo chipset drivers, which wouldn't make sense since the 14th series is on the same chipset as the 13th. Also, CPUs do not have drivers. So it won't be driver level. Even if it queries the CPU itself, you can still at the very least replace the call in a hex editor (or with a patch from someone like me who knows what he's doing) with a simple variable = whatuwant. So no need to reverse engineer, and nothing to do with drivers. Intel, however, would be upset with you, so if it "leaked" it had better come out anonymously via some VPN in the Netherlands that routes through a VPN in China, or some weird mess they can't untangle.
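To make the speculation in this thread concrete, here is a purely hypothetical sketch of the kind of model-ID gate being discussed and why a trivial patch defeats it. The (family, model) values are invented placeholders, not real Intel identifiers, and nobody here actually knows how APO performs its check:

```python
ALLOWED = {(6, 0xAA)}  # placeholder id standing in for "Raptor Lake Refresh"

def apo_allowed(family, model):
    """The hypothetical stock check: refuse anything not on the list."""
    return (family, model) in ALLOWED

def apo_allowed_patched(family, model):
    """What 'replace the call with variable = whatuwant' amounts to."""
    return True

print(apo_allowed(6, 0xBB))          # False: the "13th gen" id is rejected
print(apo_allowed_patched(6, 0xBB))  # True: same id passes after the patch
```

The whole gate is one boolean, which is why a hex-editor patch (or CPUID spoofing) is plausible in principle; driver signing and integrity checks are what make it impractical for most users.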
This really makes me want to return my new 13900k that I just picked up, and go to AMD and give them a shot. If 14th "gen" is just a refresh, then there is no reason it shouldn't be supported by 13th gen at the very least. Even if it is only on two games right now, it's just pathetic. Intel, you need to be better.
Looks like 1 E-core of each cluster of 4 is active, to maximise use of all available L2 cache (which is shared between the 4 cores of a cluster). It should be possible to replicate this using affinity pinning.
Indeed. Also an implicit admission that they are pretty much useless for gaming. They probably would have been better off swapping every 4 E-cores for one real P-core instead.
Turning the software lock off for 12th and 13th Gen would have made them much more competitive against Zen4, which is in a very good place right now in terms of price/performance, and it would have been a much better look in terms of customer support. But ofc, greed comes first...
@@PAcifisti Also, we should be grateful they don't strip features after the purchase. lol This is going to push so many people to AMD, and Intel still needs to sell all those old CPUs that haven't been selling for years.
So the only way they can push 14th gen is to give it exclusive software that actually makes it work as it was intended to in the beginning... Classic Intel.
14nm+++ Sorry, are you implying Intel (or AMD) would limit performance-enhancing features to new hardware for profit? What's hilarious is that tons of Intel users are now combing through the BIOS to see what it's doing and how to apply it to more games and to 12th/13th gen.
Makes me wonder if some clever people will figure out how to get APO working on 12th and 13th gen models on motherboards with APO-supporting BIOSes. If it can be shown to work the same on older hardware, they would really have no leg to stand on.
Lower-end big.LITTLE parts like the i7-12700K don't have these issues. Intel made the big mistake of reducing the proportion of P-cores on high-end chips: the i9-13900K and 14900K are 33% P-cores (8P+16E), vs 67% for the 12700K (8P+4E).
They sort of rub salt into it with their responses to Steve, completely boilerplate auto-responses that don't even address the questions he was asking. Reeks of "begone with ye, peasant!" But I think most communication with big corpos goes like that. You are important to us, etc.
@@Ghastly10 There’s a large difference between early-adopter first-gen TR workstation/HEDT processors and Intel screwing regular consumers with mainstream desktop processors. Not even in the same ballpark; Intel is worse.
If Intel doesn't provide this feature for previous gen, especially 13th gen since it's literally identical, I'm hard switching to an alternative when I do my next build. It's a big "up yours" to those of us who bought both 12th and 13th gen. Also, it seems counterproductive because it devalues their old stock that's still waiting to be sold.
For this reason we might only see more APO-supported games once Intel has finished selling through their old stock. They want to milk that old stock for the highest price they can during this holiday season.
I don't see why they would care about old stock losing value though. I assume that most 12th and 13th gen parts have been sold to retailers, so now their value is no longer Intel's problem. They already sold the stock.
Don't so quickly forget AMD pulled this exact same shit with driver level FG and RDNA3. Intel is hardly alone in this fuckery. Although admittedly AMD's FG is garbage that no one wants, and this is something the multitude of 12th and 13th gen owners _will_ want. Where's the outrage from the AMD fanboys?
@@zodwraith5745 Uh, considering AMD brought FMF to RDNA2 before it was even out of beta (it still isn't, btw) and that AMD's initial statement about FMF carefully included a "for now" it does look like AMD learned the lesson from the AM4 shenanigans. AMD is still at a size where the community can muscle them, but in the case of FMF there wasn't really any uproar because mostly everyone expected the technology to compare like FSR1 to RSR.
As someone who has already purchased 12th gen, it sucks that this won't be available for us early adopters. I am not sure any sort of pressure would work here, as it appears Intel is planning to use this as a selling point for 14th gen.
IDK guys. For the record, I have a 7800X3D as I play VR racing sims, so I don't really have a dog in this fight. But you got the performance you wanted/expected when you bought your 12th/13th gen. If the company is going to continue development on their products, they need to be paid for the work. If the point of 14th gen is that now their E-cores actually do something, then fair enough. They all do this. I have a feeling frame gen could be made to work on 20- and 30-series RTX cards, and AMD is apparently going to have it with no tensor cores at all, or whatever the jazz is.
@@hansolo631 Wouldn't really matter to me, tbh. I have a 3080 now, and I'm not impressed by DLSS & RT at all. But I think your CPU, the 7800X3D, is by far the best for €$400. Intel won't come close soon, I think, but we'll see.
This is why I'm stoked that AMD's new hybrid architecture uses the same instruction set on both big and little cores. Surely that'll be easier to manage...
Intel's P-cores and E-cores have the same instruction sets as well minus AVX512 which is kinda pointless. I used AI stuff that has AVX512 acceleration on my 2021 i7-12700K batch, the problem: it performs way worse than keeping all 12 cores enabled without AVX512.
The small Zen Xc cores are still slower than the big ones, and the CPUs don't even have a hardware scheduler. They've already had issues with multiple CCDs, which are managed using their software. Hybrid architectures are just the worst thing for desktops. Completely pointless.
@@THU31 "Hybrid architectures are the worst thing for desktops"? No, they are not. Steve from Gamers Nexus showed performance-per-watt graphs for 10 years of i7s, and the i7-12700K is by far the best.
@@THU31 The Ryzen 7 7xxx lineup, with fewer cores and lower IPC than the i7-12700K, runs 20 degrees hotter. Without big.LITTLE, more people would move to Apple Silicon, because the power draw and cooling for high-end x86 are not practical.
@@saricubra2867 Ryzen 7000 CPUs are hotter because they have smaller chips and a thicker IHS for cooler compatibility. They use a lot less power, especially in the high end. As for the other comment, I have no idea what you're saying. That a 10 nm CPU has better performance per watt than all the CPUs before it? Wow! By the way, I consider AMD CPUs with two chiplets as a hybrid architecture, because you have to manage thread allocation for optimal performance, and that's bad.
Bought a 13th gen this May. I think this sealed it for me: I am saying goodbye to Intel. I am still having issues with the E-cores in selected titles and also with some other programs. Seeing now that they worked on a fix but won't make it available for 12th and 13th gen has given me a reason to say GG.
This is why I went with AM5 for my gaming rig. I didn't want to run into any compatibility issues. Especially with older games, emulation and such. I just got a 7700 but I'm looking forward to the 8800X3d as I hear that generation will kick bottom.
Something that i noticed too is that they use only 1 E-Core per E-Core cluster, so that E-Core has access to the full L2 of that cluster and doesn’t produce contention on the cluster L3 interface. Very smart
I feel like while this strategy does certainly improve performance, the method seems to substantially undercut the value of having *sixteen separate E-Cores*. When you have to disable 75% of the E-Cores to pile E-cluster resources on the remaining ones to achieve the expected performance, isn't that a bit like... just having 4 more P-Cores executing from a big L2 at reduced clocks? Seems to me like they could ignore the 8P+16E concept altogether. Just start with a 12P chip, and add a special P-state for every 2nd core called 'lock to
@@asm_nop E-cores help in tasks that multithread well; games just don't have anything to schedule onto 16 small cores. That's normal; games aren't the end-all target load.
I still have a core 2 quad here, and for some cases I do pretty much the same and run different heavy tasks on the two dies. Avoid them fighting over the cache.
Wow, now you need an app to make the CPU work properly. So only Windows users running 14th gen can benefit from this incredible feature. I hope this will be a paid feature next gen... maybe we can have an Intel app store where you unlock 10% more perf for $69.99, 20% for $109.99, and 30% for $199.99.
It doesn't make any sense to lock it to 14th gen. No one buying a NEW platform is going to buy older hardware, and anyone already on, say, a 13900K isn't going to upgrade. So they don't lose a sale on the first kind of customer, and they don't lose a sale on the second either. In fact, this might sour people's experience with the brand and place them more favorably toward the competition. It's amazing to me how these tech PR types get handed a W by their amazing engineering team and turn it into an L through sheer greed, without fail.
Technically on the first customer, they could lose it, if the prices differ enough. I mean, without APO, 14th gen is basically 13th gen, especially for 14900K. If 13900K is $50 less, why would you get 14900K ?
It makes perfect sense. It's Intel. Where have you been the past few decades? Intel has been anti-consumer as long as possible. Do you not remember being forced onto a new socket nearly every time you wanted to upgrade? Miserable.
@@thealien_ali3382 Yes, because NVIDIA decided to do that. The question is, could they have avoided that if they wanted to, or did they just have maximum profits and disregard for consumers in mind?
@@Koozwad Not defending Nvidia, but if you don't know all the specifics of the faster optical flow accelerator in Ada Lovelace, you can't really say older cards could handle FG well.
Buy 14th gen, find out??? The IMC is WORSE than 13th? Hahaha. E-cores a joke, efficiency a joke, price a joke, ILM a joke. They need to feature 64 E-cores next gen with refreshed P-core silicon. The ultimate joke.
If Intel fixes the E-cores but doesn't provide the fix for 12th and 13th gen, that defeats the purpose of LGA 1700 completely, imo. WTF, Intel. I guess I'll be moving to Zen 5 when it comes out.
A comparison vs setting affinity would be nice, as that would set the game to use the P-cores, while the E-cores would be used for all other tasks on the system. Either manually or auto with Process Lasso or similar.
Also manually clocking the E-cores to a fixed higher frequency. Naturally this will increase the power draw, but for testing purposes it would be interesting to see if an E-core OC plus affinity could achieve similar results.
I've seen people do basic testing of manually controlling affinity and comparing it to APO, and it doesn't seem like it comes close to APO. Steve actually says something similar himself right at the end of the video, that disabling the e-cores entirely doesn't seem to work as well.
Intel probably does what you can't easily replicate: assigns specific game threads to specific cores, not just the whole process. That's why it works only with a limited set of games; manual tuning of which threads need to go where is required.
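Thread-level pinning is at least possible outside the process, on Linux anyway, where the affinity syscall accepts a thread ID rather than just a process ID. A minimal sketch of pinning one thread independently of the rest of its process; to be clear, this only shows the OS primitive exists, not how APO actually does it:

```python
import os
import threading

def pin_current_thread(cpu_ids):
    # On Linux, a thread id works wherever a pid does for affinity calls,
    # so each game thread could in principle get its own mask.
    tid = threading.get_native_id()
    os.sched_setaffinity(tid, cpu_ids)
    return sorted(os.sched_getaffinity(tid))

result = {}

def worker():
    result["affinity"] = pin_current_thread({0})

t = threading.Thread(target=worker)
t.start()
t.join()
print(result["affinity"])  # only the worker thread is now confined to CPU 0
```

The hard part, as the comment says, isn't the mechanism; it's knowing which of a game's threads deserve which cores, and that takes per-game tuning.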
@@labombaromba disabling the E-cores would prevent the system from using them for other background tasks, so yeah. Setting affinity would allow them to be used for other things. Disabling them vs preventing the game from using them even though they are active isn't the same thing.
@@niter43 Sure, but testing something like Process Lasso's Efficiency Mode function would be interesting. I would test it myself, but I don't have a Windows PC anymore, or a 12th-14th gen Intel. :/
Intel really failed with their bet on E-cores. Intel Atom cores are just trash, no ifs, no buts. AMD has the right idea: their C-cores have the same instructions and latency as the big cores, just with less cache and lower clock speeds, while using 50% of the area (as opposed to the 1/4 of the area E-cores use). The upcoming 24-core Ryzen 9, with 8 big V-Cache cores and 16 smaller C-cores, will be the no-compromise CPU of the future.
They quietly added APO support for 12th and 13th gen (Alder Lake/Raptor Lake), but it's semi-unsupported if I'm reading their updated materials correctly. At least they let you try it, I think? Oh, and they're calling it "Intel® Application Optimization" now, and they say "Intel Application Optimization may appear in some third-party documentation as APO." Who knows why they renamed it.
@@RiceNoodlestw I think they're caught between three things: wanting to expand it (due to criticism of it being artificially locked to Raptor Lake Refresh for no good reason); wanting to under-promise, so people don't construe it as something they thought they were paying for when they bought a CPU unless Intel can heavily validate it first (fear of legal liability? Or simply of bad brand image if it sounds supported but doesn't end up reliably working for everyone on all CPUs?); and wanting engineering excellence by hand-tuning it for each processor, vs. "hey, F it, we can enable it and call it unsupported, our work here is done."

It's a weird spot they're in, because of how they're communicating about it as much as anything, and I think they see this as a neat side thing some engineers can do to wow people without doing any harm. But since it takes some degree of hand-tuning and working with game devs, it's not really scalable, and it can't be a core part of the promise or sales pitch for the products. Something like that -- keep in mind I'm just speculating, reading between the lines and the context.

I feel like the best outcome for Intel here would be to figure out the essence of what makes a good "Intel Application Optimization" implementation work well in a game, write up some docs as "best practices" for game devs or other app authors, and/or get Microsoft to add to Windows (or DirectX?) whatever APIs app authors might need to schedule their work properly -- such as putting bulk processing or lightweight async work on the E-cores to unblock main-thread stuff on the P-cores, or whatever it is APO is doing right now to get those FPS boosts.
When I built my new PC, I switched from Intel to AMD specifically because AMD has shown more willingness to support their platform/socket for the long term. I had an LGA 1151 board, but CPU compatibility was artificially limited by Intel. It was shown that older boards could be modified to work with the late-model CPUs -- even though Intel said it couldn't be done, lol. Intel could get away with this back in the day because AMD couldn't offer meaningful competition with their AM3 platform; but everything changed when the fire nation attacked... um, when AM4 came out... Now both companies compete on performance and longevity, though only one company takes longevity seriously.
The OS and the game should already optimize for P- and E-cores. Is APO technically possible on LGA 1700 and previous-generation mobos, or does it need a hardware implementation to work?
Seems likely that it's software-only, as APO was included in a BIOS update, and there aren't significant changes between 13th and 14th gen chips. On the other hand, hacking together something that requires a custom BIOS is probably pretty difficult. Not my area.
Yeah, I think it's basically forcing all background interrupts and small requests onto the E-cores, where the default is core 0 for so many tiny Windows tasks. There was a video I watched a while ago that walked through a long manual process for this: something like removing the P-cores from the list of cores Windows can use, then using Process Lasso to manually assign your game to the now completely idle P-cores. I think this would be a really interesting thing for tech tubers to dive into and reverse engineer, since Intel doesn't care about us.
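The Process Lasso recipe described above is essentially one affinity call per process. Here's a rough sketch of the idea (Linux API rather than Windows, and the core numbering -- 0-7 as P-cores, 8-15 as E-cores -- is an assumption for illustration; real hybrid layouts vary by CPU and OS):

```python
# Rough sketch of the "reverse APO" recipe: confine a process to one
# group of cores. Core numbering below is assumed for illustration;
# it is NOT guaranteed to match any real hybrid CPU's topology.
import os

P_CORES = set(range(0, 8))    # assumption: first 8 logical CPUs = P-cores
E_CORES = set(range(8, 16))   # assumption: next 8 logical CPUs = E-cores

def pin_process(pid, cores):
    """Restrict a whole process (pid 0 = current process) to `cores`,
    keeping only cores that actually exist on this machine."""
    usable = cores & os.sched_getaffinity(pid)
    if usable:
        os.sched_setaffinity(pid, usable)
    return os.sched_getaffinity(pid)

# Step 1 (conceptually): pin background processes to the E-cores.
# Step 2: pin the game (here: ourselves, pid 0) to the now-idle P-cores.
game_affinity = pin_process(0, P_CORES)
print(sorted(game_affinity))
```

On Windows the equivalent calls would be SetProcessAffinityMask or a tool like Process Lasso; the sketch just shows that the whole trick is two bulk affinity assignments, which is much coarser than APO's apparent per-thread placement.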
Plausible idea. I wonder why all the P-cores stay at max frequency all the time while the E-cores' frequencies fluctuate significantly? I don't really expect either of the games to fully load all 8 P-cores. Why do you think that is?
Very true. But I expect AMD to do the same now that they outclass Intel so heavily. They are just companies; they are only customer-friendly while there is competition.
@@AquilaeYT Check your mobo manufacturer's website for a BIOS update that adds APO support for 12th and 13th gen now. I tested it in RDR2 and Metro Exodus and it performs worse anyway...
This is going to be one of those features that works better the less you need it. 1080p gaming on a top of the line system is a competitive benchmarking scenario, not an actual gaming scenario.
I've been saying this for years. I understand them wanting to isolate one component for testing, but in reality very few 4090 owners are playing at 1080p, and no one shopping for a $200 GPU is pairing it with a 7800X3D/13900K. This falsely convinces consumers that they need far more expensive hardware than they really do. Rule number one in building PCs is to build _balanced_ systems so you're getting all the performance you paid for. Far too often I see one product praised over another, even though it's much more expensive, because it performs well in unrealistic scenarios.
If support is added to more games, it's going to make a difference in the long term. Think 6+ years from now -- but by then, the people buying these E-core CPUs will likely have already moved on to newer generations (assuming Intel makes even modest improvements between CPU generations).
@@zodwraith5745 One argument for pairing a $200 GPU with a 7800X3D/13900K is a CPU-bound, productivity-focused build. Sure, the $200 GPU might not be able to take full advantage of that CPU, but if I'm working and playing on the same PC I'd much rather not spend another ~$300 (not to mention extra accessories) on top of what I paid for the initial 7800X3D/13900K. Granted, such scenarios are a niche, but a system lopsided toward the CPU can make a lot of sense, unlike a graphically lopsided one.
@@pyro226 Show me gains at 1440p and 4K, where averages and 1% lows are good but not great. Those are useful gains for gamers. +30% at 1080p when you already get 400 FPS is just for benchmark scores.
That will change when you slap a 5090 or 6090 into the system, though, putting the onus back on the CPU to generate scene data fast enough. Today's 1080p is yesteryear's 640x480, and tomorrow's 4K. So I'd say it still matters.
So will Intel also cut Application Optimization support for 14th gen when 15th gen releases? Also, the feature should be implemented at the framework/API/compiler level so that game developers can add support themselves.
I don't need it for my i7-12700K: 8 performance cores and 4 efficiency cores. It's more of a streaming beast than a gaming need, because for vanilla gaming it's overkill (very stable frametimes, especially in the most CPU-intensive console emulators).
APO whiffs a little of their "Intel on Demand" idea. Like Unity, there may be one or more board members somewhere in Intel who have been clinging to this notion for years (decades?), waiting to re-release it with a different spin.
Intel has always segmented their CPUs, with special paid-for features on a few models. In a year they'll bring out a new architecture, and people can forget about APO support for 14th gen.