InfinitePCGaming that's true. Guess what, no one will play at 1080p on low settings with a good graphics card and an expensive CPU. The tests should be done at higher settings and/or higher resolutions, and we should look at the CPU's % usage. That's a good indicator.
They don't even have to test it differently; all they have to do is point out the facts: Ryzen will only get better, while the 7700K will get passed by Intel/AMD 6+ core platforms pretty fast.
Dude, you are a fucking wizard when it comes to piecing together huge amounts of information over years and years. Excellent work. It's no wonder that other reviewers can't keep up with you - you are using a lot of strategic thinking to make your predictions that others struggle with. You have to be on the spectrum.
Exactly, he sticks to practical knowledge while most reviewers stick to their bank account and benefit packages. I especially loved his messages when dealing with AMD tardiness, proper Scotsman!
I don't know why, but I'm feeling very happy after watching this video even though I've never used a single AMD product. I still want AMD to win this time and will support them in future... PS: currently I don't have enough to build a new system, but I will definitely build a Ryzen+Vega system in the future... cheers👍
Ok, wow. Well this DEFINITELY explains why my FX-8350 still games perfectly well. You know, there's another funny thing that most of the younger people here won't know, so believe it or not, this has happened before. Back in 2009, the last time that AMD released something competitive with Intel, the press (for some reason) did something I considered downright criminal at the time. The CPUs released were the Phenom II X4 940, which was meant to compete with the Intel Core2Quad Q9400, and the Phenom II X4 920, which was supposed to compete with the Core2Quad Q8400. However, the stupid press decided that instead of comparing them to their natural Intel counterparts, they were going to compare them to the brand-new Nehalem Core i7-920 and i7-940 CPUs. This was despite the fact that the i7 was literally TRIPLE the price. The reason? I'm guessing it was Intel's doing, because the PII 940 compared very well to the Q9400 despite costing almost $200CAD less. IIRC, the Q9400 was $500CAD and the PII 940 was $325CAD!

Well, four years later the Core2Quads were completely obsolete but the Phenom II series was still considered a great value as a gaming CPU, because they were fast enough to "keep out of a fast video card's way", with the Deneb (Stars) microarchitecture getting refreshes and die shrinks that showed just how superior the Phenom II X4 was to the Core2Quad. The Phenom II line ended with the X4 980 and X6 1100T (Thuban) after having existed in those states for four years. I still have my original Phenom II X4 940 running in my living room computer with 8GB of OCZ Reaper DDR2. I'm actually typing on my craptop, which uses the original AMD APU, Llano, a quad-core "Stars" design with a Radeon HD 6620G on-die. So, for some reason, that AMD microarchitecture aged well, and coincidentally it was the last competitive uArch AMD had before Ryzen.

AMD looks long down the road when it designs something. Sure, it looked a bit "too far" down the road when it came up with Bulldozer, but as I said, my FX-8350 still runs Windows and all my applications at what feels like lightning speed and I haven't even felt the need to overclock it yet! I only use overclocking as a way to extend a CPU's life; if I want a faster CPU, I buy a faster CPU in the first place. I'm ready with liquid cooling installed, and since I know that the FX-8000 series can overclock to the moon, I just might be able to coax one or two more years out of the old girl! Here's to hoping!

This is the best investigative tech reporting I've seen since Charlie Demerjian exposed nVidia's practices all those years ago. You have hit upon something that EVERYBODY ELSE in the tech world completely missed! I'm a subscriber for life, brother!
I could never consider an AMD FX chip, as it runs only slightly faster than my i7-920 (it'd be a pointless upgrade to something only a little faster than what I own), but Ryzen has got its IPC to within 10-20% of a 7th-gen i7 (7700K). I do have single-threaded games that would benefit from a 7700K over a 1700 @ 4GHz, as they get very CPU-heavy towards the endgame, but going from an i7-920 (just over 1000 in CPU-Z) to a Ryzen 1700, it's effectively twice as fast at single-threaded stuff (just over 2000 in CPU-Z, and over 20k multi-threaded), since it has 8 real cores this time plus SMT. I also use my system for other stuff where 4 cores is really not enough (though it's probably OK on a 6700K or 7700K). 8 cores within 20% of an i7's single-core performance seems well worth it (once BIOS updates have come to fix the issues), so I'm likely considering the 1700 in the next months.
I have a similar situation. In some games my Ivy Bridge i5 is limiting, and at first I thought about an i7-7700K. At 5 GHz it would give me around 40% better performance in said situations, even when no more threads are used. Now I'm not so sure anymore. Most likely my best option is to keep waiting a bit until either Intel or AMD puts out a chip with at least 6 cores, each with the performance of Kaby @ ~5 GHz.
I know, it's great eh? When I bought my Phenom II X4 940 it was essentially a Core2Quad Q9400 with a $150 discount on the CPU and a $100 discount on the motherboard. I couldn't have been happier.
I hate you, Adored. 15 min ago there was still hope for my hard-earned money to stay safe in my bank account. I could have used it for a nice vacation with my kid, or something for my wife... Now there is no more hope... Just clicked purchase on a Ryzen 1700, Asus Prime X370-Pro and 2x8GB of Corsair Vengeance ;)
I have a Strix 970; it'd be interesting if you would run some tests on it with the R7 1700, as that is pretty much the combo I am looking at, except probably an ASRock X370 in place of the Asus, as I would like a USB-C connector for my phone! Hey @ ADOREDTV... any chance you can run R7 1700/GTX 970 tests when you get your parts?
Steve @ Gamers Nexus and the rest that followed him have been shown to be WRONG. Funny how all those major reviewers jumped on the same message, all focused on 720/1080p benches just to prove that the 7600K is faster.
We need the much, much more relevant minimum FPS and frame time numbers instead of just average FPS. I got personally fucked over this way in the past with the G3258, where it looked fine and FRAPS would show a nice, solid 60FPS. Until you tried to move, and everything became a jarring, stuttery mess.
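To make the point concrete, here's a minimal sketch (in C++, with made-up frame times) of why an average hides exactly this kind of stutter. Note that "1% low" definitions vary between outlets; this one averages the slowest 1% of frames, which is one common convention, not the only one.

```cpp
// Minimal sketch: why average FPS hides stutter. Frame times are in
// milliseconds; the numbers here are made up for illustration.
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    // Mostly smooth 16.7 ms frames with an occasional 100 ms hitch.
    std::vector<double> frame_ms(1000, 16.7);
    for (size_t i = 0; i < frame_ms.size(); i += 97) frame_ms[i] = 100.0;

    double total = 0;
    for (double t : frame_ms) total += t;
    double avg_fps = 1000.0 * frame_ms.size() / total;

    // "1% low": average FPS over the slowest 1% of frames.
    std::vector<double> sorted = frame_ms;
    std::sort(sorted.begin(), sorted.end(),
              [](double a, double b) { return a > b; });  // slowest first
    size_t n = sorted.size() / 100;
    double worst = 0;
    for (size_t i = 0; i < n; ++i) worst += sorted[i];
    double low_fps = 1000.0 * n / worst;

    std::printf("average: %.1f FPS, 1%% low: %.1f FPS\n", avg_fps, low_fps);
}
```

With these made-up numbers the average reads a healthy ~57 FPS while the 1% low sits at 10 FPS - which is the jarring mess described above.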
Exactly. People are saying the R7 1700 is so cheap, but it's still $329, not including a cooler or motherboard, and the 1800X is $500. Don't be dumb and buy the R7 1800X unless you are a heavy user, which seems to be everyone today.
+6utS Surprise surprise, that Pentium has absolutely no headroom for anything. Neither does the 4-core 7700K, while the 8-core Ryzen can just do whatever you want, such as streaming or encoding while playing. The majority still fail to see the value, just like Gamers Nexus, since encoding cannot always be done on the GPU, only in some instances.
Why doesn't someone ask Steve about the 1% lows? I mean, when he tests NVIDIA vs AMD and any time it's close or AMD is even better, he claims victory over AMD in favor of NVIDIA based on 1% lows. Well, in Steve's own Ryzen videos you can see Ryzen blowing away i5's and even i7's in the 1% lows lots of times, so why did -Adonis- Steve not mention or discuss this whatsoever? And more importantly, if the CPU is doing so well in that 1% area but not getting higher averages, what do you think that indicates? I would ask him, but every time I do he says I'm insinuating he's biased, so anything I ask or point out to him using his own data is ignored. And while you're at it, ask him for his definition of a death threat, because someone e-mailing him saying they hope he gets cancer may not equate to a death threat in many people's opinions. And when he reported said alleged death threat, did the authorities advise him to report the occurrence on Twitter, or did he make that decision on his own? If you can see a pattern in the decisions Steve makes, you can easily see how that might be a factor in his supposed area of expertise.
Wendell from Level1Techs said on Tech City's "The Tech Lounge" that AMD's SMT is far superior to Intel's HT on Linux after they updated their kernel to properly do cache/core management. Something we'll see on Windows soon too. Add this on top of software optimizations and microcode optimizations from motherboard vendors, and there's only one way for Ryzen to go.
Yeah, AMD's SMT scales better than Intel's HT, in cases where there is an actual benefit from more threads. In other situations, like most games, it is best to disable either of them. Seems to me like back in 2009 :D Of course people will complain: "Why get an 8-core, 16-thread chip when you disable half of the threads?" The answer is easy: profiles! There isn't much trouble in making two profiles, one with SMT enabled, the other with it disabled. Unless someone is so jumpy that they can't stick with one of the use cases (gaming, content creation) for at least a while, there should be no issue restarting the machine between tasks. In the end, it's already suggested to take some breaks during long sessions anyway.
eXawN great choice👍 Although the 1700 might be slightly slower in games when compared against the 7700K, it's 4 times faster than the 7700K in productivity at $20 less... plus you get that sweet Wraith cooler for free as well :-)
He says the 8350 became better than the 2500K in 4.5 years, and that now you should buy Ryzen because it's gonna be better than the 7700K in 4 years. Why not just buy Ryzen in 2020-2021, when it's gonna be better than the current 7700K?
Best video I've seen in a long time, if not ever. PLEASE UNDERSTAND THAT THIS INDUSTRY NEEDS MORE PEOPLE LIKE YOU!!!!!!! Continue with your hard work and the absolute detail in your videos. The more detail you put in, the better they are!! No one else in the tech world comes even close to you. It seems you have a talent/gift for this channel and thus you will have my full support. DON'T STOP.... EVER!!!! Cheers, go out and spoil yourself a bit, you deserved it.
Agreed. I saw a guy do low-res gaming and the i7 was at 90% load while the 1700 was doing 25%, and I thought, hmm, that Intel is really maxing out, and realized this. Though I don't have the audience on YouTube to make a video like this that would get good views.
I've been all over YouTube saying this the past couple of days. I knew what was coming when I read your comment on the Hardware Unboxed video, that Steve wasn't gonna like it. Thank you for this video.
Well I think Steve at HWU is a cool bloke and does great benches, I just think him and the rest have been getting this particular part wrong for years. It's understandable as it appears sensible, but nobody took into consideration AMD's attempts at changing the industry to multi-cores. ;)
People that defend 4 cores think that games will use a maximum of 4 cores for another 5+ years. I think that's where they are wrong. Now that AMD has a competitive chip where 4c/4t is the lowest one, 6/12 will cost as much as an i5 and 8/16 as much as a 4/8 i7; the market will change fast, and with it so will game engines. Consoles also help a lot in this case, and DX12/Vulkan too. The only reason 2/4 and 4/4 was enough for gaming these past few years is Intel's domination. But we can all see how fast things have moved in the past 1-2 years. Even now you can see the 7700K stuttering in some games at 100% CPU usage.
They are all good people who work hard and do great work no question. But their final recommendations to gamers and conclusions are very shortsighted. Very happy about this video.
Steve's review is fair, unlike GN's, which is... well... The way he disregards Ryzen's productivity is laughable. GPU acceleration? Come on, why the hell would Intel HEDT exist if that were the case? And even with all these bugs and optimization issues, it could still perform around, if not better than, an i5 in gaming. And the platform cost is ridiculously lower compared to Intel HEDT; if he still complains about the price, just wait for Ryzen 5 & 3, which he didn't even mention. And 4 cores for high-end gaming in this era? **** ****. Steve from HWU and also Jayz2Cents confirmed that Ryzen performs silky smooth with no stutter in gaming, where the i7 7700K stutters sometimes. AFAIK, stutter simply means "MORE CORES!"; the quad core has gone the way of the dual core.
Ok, so besides the good old "it'll pay off in the future" stuff, let's talk specifics: did anyone run tests to show the difference in performance with the power plan set to max performance instead of balanced? The general advice in building a PC is "build what you need now", not "build what you might need in a few years". In that sense, Ryzen being slower is simply that: right now, it's slower. You can talk about multithreading game engines all you want, but it's still more easily said than done. IMO Ryzen is a great CPU, both in terms of performance for youtubers/streamers/multitaskers/etc. and in bringing back competitive balance, and while I'm willing to look ahead and give AMD the chance to bring up ACTUAL gaming performance, the fact remains that right now it's not faster. Addition: how many threads does Battlefield scale to? You're 100% right in saying that the 7700K was at its limit, but I'm skeptical about whether or not the Ryzen would continue to scale up with more GPU power. :/
It depends where you are coming from. The Ryzen 1800X is much faster than my old Xeon 2697 V2 12-core/24-thread CPU, which is four years old, and will most likely repay my entire AM4 platform. I don't even know why a "gamer" ever thought he was the target of a 16-thread CPU. It is ALSO good at gaming, but of course that is not its primary purpose. The products for 720/1080p hardcore gamers are still yet to come. And just speaking for myself: I haven't discovered a single workload yet that's handled slower than before, nor run into any stability issues (aside from my OC experiments). In games the min FPS are drastically better. And to be honest, I didn't expect such a flawless experience from day one on a new platform. There are no crashes or stability issues today (at least for me - guess I'm just lucky for once), so even if nothing "improves" it will still be better than what I had before.
The way I basically see it, all these CPUs will be the same at 1440p and 4K - it's just teething issues holding Ryzen back. I expect it to be faster at 1080p in multithreaded games of course. The worry for me in buying a 7700K now would be that with AMD driving the industry to use 16 threads, it could get left in the dust. Overall I think the 1700 is a better choice, though I'd strongly suggest waiting on the 6-core 1600X. I'm unsure what has been enabled/disabled etc from other reviews, but it's a bit of a mess so far.
The "Build what you need now" mentality works well with GPUs, but not as well for CPUs, since people upgrade the CPU now FAR less frequently. Ex: Everyone still running 2500/2600Ks or 8350s. Have you ever found it odd that they still show those numbers in performance comparisons? I personally would find it a unique experience to see that acual processor models are popular by the steam hardware survey, not just to compare the actual popularity of the intel K-skew CPUs to more budget friendly parts, but also to see the average age of processors used. I know of people who are still using phenom X6 chips. And, to be honest, if someone is looking to upgrade, PURELY FROM A FEATURE STANDPOINT, the Ryzen lineup makes more sense. If someone has a 2600K or even a 3770K, and they look at the current intel offerings, they're looking at something that's maybe 20% faster than their current CPU (discounting memory advancements with DDR4 and NVMe), but has the same number of cores and threads, a few new instruction sets that they might or might not use, and that's it. Whereas Ryzen has DOUBLE the number of physical cores and threads, along with a similar per core instruction per second gain to the intel chips. The features Ryzen provides, while not as important for purely gaming, are very compelling for anyone who just wants to upgrade to a new processor.
I was going to switch from my i7 4770K to an R7 1800X, but the first reviews changed my mind and I was looking at the i7 7700K. After this chap's common-sense review, though, I'm going with the 1800X. Thanks :)
Go with the 1700; it makes no difference if you overclock. The 1700 can go to 4 GHz the same as the 1800X. Just try to get it without the cooler to save even more money, then get a decent watercooler and OC that chip.
I have not been here since the beginning. I've been here since about 3PM local time, or 45 minutes ago. You mentioned a Techpowerup article that almost brought a tear to your eye. This is the first of your videos I've seen and you sir, actually did bring a tear to my eye with this, Scotsman. This analysis of history, the press and the CPU itself is so well done I need a box of damn tissues. You have cut through much irrelevant rubbish and made your points in an organized and logical fashion. In all the years I've never seen anything like it. It's bulletproof. I expect you go around on horseback with a lance and a suit of armour. First video, but far from the last.
Agree. I did as well. I still really like GN overall, but the Ryzen coverage was either slanted or poorly scripted. Like too many outlets, they overweighted Ryzen's gaming performance and let that be the singular defining metric (raw and price-performance ratio), largely disregarding its actual strengths.
Wendell reckons that the Windows 10 scheduler is fucking up with the CCXes because they each have separate caches, so it's shifting threads back and forth between the two core complexes and generating fuck tons of cache misses. A thing to keep in mind is the new 240Hz monitors; I'll be picking one up for CSGO and other shooters, so low settings are gonna be pretty much across the board on my 1070 to keep up with that monitor. So personally, these low-resolution benchmarks are very useful.
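For anyone who wants to test the cross-CCX theory themselves while waiting on scheduler fixes, one rough workaround is to pin a program's threads to a single CCX so the working set stays in one L3. Here's a minimal Win32 sketch; the mapping of logical processors 0-7 to CCX0 is an assumption (the typical enumeration on an 8C/16T Ryzen with SMT on, but not guaranteed).

```cpp
// Minimal Windows sketch: pin the current thread to one CCX so its working
// set stays in that CCX's L3. Assumes logical processors 0-7 map to CCX0,
// which is typical on an 8C/16T Ryzen with SMT on but not guaranteed.
#include <windows.h>
#include <cstdio>

int main() {
    const DWORD_PTR ccx0_mask = 0xFF;  // logical processors 0..7
    DWORD_PTR old_mask = SetThreadAffinityMask(GetCurrentThread(), ccx0_mask);
    if (old_mask == 0) {
        std::printf("SetThreadAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }
    std::printf("pinned to CCX0; previous mask was 0x%llx\n",
                (unsigned long long)old_mask);
    // ...do the cache-sensitive work here...
    return 0;
}
```

The same idea can be applied per-process via Task Manager's "Set affinity" dialog, without writing any code.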
Absolutely bro. For CSGO the higher the fps the lower the input latency, so do what you gotta do. Ryzen 7 was never aimed at gamers. R5 should be their gaming lineup.
Wendell also reckoned that they might be able to crank the frequency up for the R5 CPUs, and that if they keep both CCXes on the chip but just disable 2 cores per complex, then they will have fuck tons of cache to work with, which will negate any remaining IPC deficit and, along with the higher frequency, let them match the 7700Ks. Of course it's entirely speculation on his part - he was talking about this on Tech City's podcast last night - but if they do go this route it will be very interesting. Cache size on Intel's consumer chips is fairly low at the moment.
I knew AdoredTV would bring some sanity to all this Ryzen review mess; awesome investigative work there. This is why you're one of my favorite tech journalists.
Awayatt is just mad that he didn't hold out for Ryzen and is trying to justify not getting it. The reality is that after a few BIOS and Windows tweaks that actually accommodate Ryzen's new arch, Ryzen is gonna stomp.
Not too long ago... "You idiot! You don't need a powerful processor to game!" Today... "Waaaaaaaaaah the Ryzen's not a powerful gaming processor!!!" :-P Go figure. :-P LOL. :-P Maybe when folks get over their hypocrisy they'll give Ryzen the fighting chance it deserves. :o)
Now that is actually not as hypocritical as it sounds at first. With the release of the 8-core consoles, games became dramatically more CPU-demanding within a short time, and especially more thread-optimized; games using up to 6 cores seem not unusual for Ubisoft, for example, and 2-core systems became obsolete rather quickly. So we did see a sudden, massive increase in the CPU resources needed for gaming.
I'm looking to test supersampling vs. 1080p max AA on my setup for CPU utilization and power consumption, to do some more digging on this. Funny though that my original comment was about PC and the contrarian responses swing over to consoles. :o)
Consoles are a PC in a way that one could desperately stretch to argue with a PC guy. :-D Let's revisit this issue when Phil Spencer finally comes up with an XBox running Windows 10. :-D Actually with a 1060 I'm finding the resource usage to be the same between 1080p Ultra and 15-something-p with the antialiasing turned off in terms of maintaining 60 fps in The Crew. I'm using Nvidia's softening filter during the downscaling for "antialiasing." Might turn it down to make things less fuzzy. :-)
Another consideration would be that reviewer's conclusions are often paraphrased and taken out of context. I think plenty of reviews showed Ryzen's exceptional performance with workstation tasks, encoding, and synthetics, but the CPU-bound gaming performance was anomalous, so it stood out more. I don't think I or many others said "Ryzen sucks for gaming", just that it demonstrably falls behind Kaby Lake, and would hold back some performance if paired with a high end GPU like a GTX 1080. If I'm trying to provide sound advice to someone who wants to build a PC solely for gaming, I simply wouldn't recommend a Ryzen 7 right now. That said, I'd happily recommend it for gaming+streaming or a whole range of more intense workstation tasks, and I do sincerely hope that improvements to the AM4 EFIs and Windows 10 optimizations etc. bring the gaming performance more in line with everything else.
Paul's Hardware Absolutely this. To suggest that 'the entire tech press is getting it badly badly wrong' (about the reason for benchmarking at certain resolutions) whilst misrepresenting the reasons for doing so is a bit harsh. Running at certain resolutions to take the GPU bottleneck out of the equation (standard practice for benchmarking CPUs) is not the same as trying to hide its scaling potential for the future, or using it to see how well it will scale in the future with faster graphics cards (the reason you seem to suggest it is used for). There are a lot of ifs and future buts to take into consideration there, whereas you guys have to review it on how it performs in the here and now, for people wishing to make a purchase whilst your reviews are still within a relevant timescale. I'm assuming most will revisit some testing when there has been some optimisation done.
Except most people with 1080s aren't on 1080p, making the entire thing a moot point. The actual real-world gaming experience is identical (in 99% of cases) to a 7700K, and combined with Ryzen's stellar productivity results, it equals a 6900K overall for half the money. Frankly, unless someone is a competitive 144Hz gamer who uses extremely high-end GPUs and low resolution/low details on purpose, I would find it hard NOT to recommend the Ryzen. A 1700 @ 4GHz @ $325 is a great buy.
@Paul's Hardware - I have just one question: who buys a $500 CPU and pairs it with a $700-800 GPU to play games in 1080p? I mean, you buy a beastly CPU and a beastly GPU that's made for 4K gaming, and you play in 1080p? I hope you understand the stupidity of these tests, because they were performed with components made for at least 1440p gaming in mind. People say that synthetic benchmarks are not valid since they don't reflect real-life usage, and I ask you: do you play games in 1080p with a GTX 1080 or Titan XP? I understand the point of eliminating the GPU bottleneck, but it's crap since it doesn't reflect real-life usage. I wonder why no one tested the games with a 1080p-class card like the RX 480 or GTX 1060?
Well those screens only came on the market this year but they'll get a lot of adoption in the competitive scenes for Overwatch and CSGO so I wouldn't write them off as totally niche yet. A lot of people playing those games at the moment!
Yup, and for that niche that wants to game on 240Hz monitors today, with frame rates nearing that 240Hz limit, surely the 7700K is the better choice right now. But quite a few of the games themselves can't get there anyway, as their engines cap the frame rate at 200 per second.
If you are purchasing that sort of monitor you are in a very VERY small minority and this is not something that should be widely reported or in the title of videos or articles. Also if you purchased said monitor you need a very specific and specialized setup and you should be well aware of this. There are far FAR more people just using their CPU for the workstation tasks Ryzen is amazing at.
I honestly think that you should get your own review site, like wccftech, pcworld, etc. a lot of people who are thinking about building a pc use things more along the lines of articles and such. Something like what gamersnexus does, where basically all of their videos and reviews are also in article form. The reason I suggest it, is because you give such a better, more calculated, and more reasonable view to the chaos that is the PC market. As always, great video. Really helped cement me into getting a ryzen desktop, since my pc will be used for gaming, CAD design, and multimedia, and as you showed me in this video, only ryzen does all three of those tasks amazingly, even without software optimization.
Guys, compare 7700K with 6900K, you'll get similar single thread performance difference. I don't understand what's all this fuss about lol AdoredTV earned my respect, even with only 39000 subscribers, you remain in my top 10 bookmarks. Keep it up Bro!
Saying that 1080p is "low resolution gaming" is a little ridiculous when the Steam hardware survey clearly shows that 1080p is BY FAR the most common resolution, and 1440p and 4K are niche by comparison. And the idea that future performance matters more than how it performs today is also insane; if you want to spend your money on something that will probably be good in a few years, go buy AMD stock.
I have a 720p and a 900p monitor in my house. I'm just telling you the facts, boy, relax. Do some proper research, look at some statistics, and you will find your answer; don't just base it on your own experience.
You should take into account that 720p and 480p benchmarks were made, as well as 1080p at low settings. It's like having one inch of distance between X and Y and zooming in until it appears that X and Y are in different galaxies.
Jared, you can also see that 970, 960 and 750 ti are the most used gpus according to steam hardware survey. Most of these benchmarks were done using GTX 1080.
JESUS FUCKING CHRIST.... How could all the other tech tubers just completely overlook this? Totally pisses me off, because they damaged Ryzen's reputation badly....
Nah... real gamers simply told the truth. Ryzen is like religion... Sure, there was a big ball in the sky that looked like god... but it turned out to be North Korea's missile... maybe the next one will be real.
Because a lot of tech reviewers don't actually know what they're talking about. They read stuff off a spec sheet and know a bit about the hardware. They were more interested in getting views and clicks by getting their reviews out the fastest instead of really looking at the numbers. And when you call them out because their testing wasn't thorough or they didn't see something, they get pissy. Most are childish. It doesn't help that so many are comparing it to the 7700K as its main competitor and only mentioning the 6900K in passing a lot of the time, despite the R7 mainly competing with Broadwell-E. But that's also because they don't know much about workloads that use many cores. It's always about gaming for these people, with the occasional "content creation" or video editing, as if that many of them do that. No talk about other workloads like virtualization or scientific computing.
They didn't damage much... if Ryzen gets better soon then they will lose face, but if not, they were "right". So if anything, they are risking some credibility.
You're telling me that comparing the R7 with an i5 and not explaining why the software needs to be optimised, isn't damaging? It's not like those bastards are just gonna remove their videos...
I can understand why they are showing the benches they are. Inducing a CPU bottleneck is a good thing to show the viewers. But you're right, these aren't CPU bottlenecks being shown - the CPUs are not running anywhere near 100%.

I also agree that those in the tech media have an obligation to try and educate their viewers, rather than just engaging in the sort of journalism that looks for the negative or controversial angle and makes that the focus of the article to try and get attention. It's true, I guess, that many of these sites and channels are just playing to their audience, who are mainly gamers only interested in gaming. But I believe going beyond that and seeking to educate viewers and broaden their horizons is the way to create long-term goodwill and a loyal audience over time.

Overall, the reviews on a lot of my favourite channels have been disappointingly shallow. Maybe I expected too much. The upside of all this is that, feeling unsatisfied, I've gone looking for other voices and I've found some really good reviews on channels I'd never looked at before (OC3D has a great review, if you can invest the time). So, by forcing me to look elsewhere, they have, ironically, educated me and broadened my horizons. Just not in a way that benefits them.
Gamers Nexus did a very good job on the testing and his numbers are quite accurate. HOWEVER, what went wrong with his review was the delivery. He continually went out of his way to discredit the gaming performance, comparing it to i5 CPUs, when it was not the CPU's fault at all. It was the platform, the OS, and the games that were at fault, and it will get better as time goes on. Also, he never acknowledged the impressive compute performance of Ryzen given its value and never gave it the credit it deserved. This should have been the proper highlight or conclusion of the review. Again, Adored, thank you so much for shedding some light on this shitstorm of a situation. Ryzen is an absolute beast and I really have to hand it to AMD for pulling this off with such limited R&D and manpower compared to Intel.
Well, I'm not trying to argue, but if he is "Gamers" Nexus then he should not review such a CPU; he should wait for the ones Intel will release for "gamers", the quad cores and dual cores. And trying to negate the success the Ryzen CPU has achieved - giving consumers better performance than a $1000 CPU, which by the way is its main competitor - is not fair at all. AMD has created a new bracket for consumers that did not exist before. Thanks to Intel it was: multithreaded workstation, $1000; gaming, $300. Now you have the option to get that $1000 tier for $500. It doesn't take a genius to figure it out.
+Neezy Ko Sorry, but GamersNexus themselves said low-res benches with the GPU bottleneck removed are for gauging the future-proofing capability of CPUs. So by their own words it is not about now, it is about the future. As for GAMERS Nexus... as far as I can see, they didn't stop the testing at games [6 games, 3x30sec - very gamerish for a gaming channel]. They continued to productivity, concluding it's an i7, and then suddenly remembered, specially for the Ryzen review, that you don't need cores/threads that much because now you have GPUs to do the work instead. That ignores the fact that many, really many, productivity tasks depend exclusively on the CPU, because GPUs are still not fit to do those things with the needed precision. I've simplified, but check some articles about it; there is a reason CPUs are still irreplaceable. So nothing stopped GN from "advising" people beyond the gaming part. Gaming which, by the way, is not bad. Quite the opposite.
+beardninja Gamers Nexus was very thorough, I dunno what you're talking about. The name of the first article is "...1800X: An i5 in Gaming, i7 in Production." He did multiple production benchmarks showing 1800X topping the charts. He acknowledged future optimizations, but gave a counter-argument: _"AMD defends its position by indicating the ISVs need to begin supporting their product, and has provided us statements from StarDock and Bethesda relating to this. To these statements, we’d remind folks that games take a long time to develop. Buying a CPU now in the hopes that games will better leverage 16T CPUs in a few years is a risky move...Regardless, we’ll provide the quotes that AMD passed along."_ And from the conclusion of his 1700 review: _"If you’re doing zero production, you’re not doing any content creation, then you’re still generally getting a better deal with an i5 or i7 CPU. For folks who are combining content creation (similar to what we do) with gaming, or may be considering streaming, the R7 1700 is actually a viable chip - and far more so than the 1800X...Rendering workloads are far boosted over equivalently priced Intel CPUs. We can stand behind the R7 1700 under the right usage conditions - just figure out if you fit into those conditions."_
But seriously, is there anybody who will play games at 720p low detail after buying a Ryzen R7 1800X paired with a GTX 1080? Of course not. People who buy that kind of PC play at 1440p or even 4K on ultra detail, and that's where Ryzen really shines: the 7700K uses up to 95% of all its resources in 4K gaming, meanwhile the Ryzen 1800X is just relaxing, even at 4K.
Increased resolution scaling on low-resolution monitors actually makes games look much better, although not as good as a proper resolution would. Still surprisingly nice.
Of course the CPUs are relaxing; high resolutions are GPU-bound. BUT what happens when GPU power catches up with those resolutions (SLI 1080 Ti)? Then you are screwed by your choice of garbage CPU.
That's not the point of a low resolution test. Low resolution shows you how well ryzen will scale up in performance in a specific game with a more powerful GPU.
+dogen But it doesn't show how Ryzen or the 7700K will scale up with more cores and threads. Things change. Sometimes change is so revolutionary that you can't compare one age to another. There was a time when people tried to predict how single-core and multi-core CPUs would age by benching single-core-optimized software. Do you think those benches mattered once software started getting optimized for 2 and more cores?
God damn you, AdoredTV! (Sounds weird not knowing your name in that sentence.) I just thought that buying a 7700K now would be better for me than a 1800X (or 1700X) because of the huge difference in FPS and raw GHz (you play WoW, so you know that WoW loves raw power; coming from a 4790K at 4GHz, a 5GHz chip would make Suramar much smoother), but now you tell me, and show me, the ONLY ONE with facts, that buying a 7700K now is not a good idea and I would regret it over time... now I am in doubt, again! Haha.
Jin Shepard keep an eye on Ryzen for the next 6-8 weeks. There will be lots of updates to BIOSes, Windows, etc. These could affect Ryzen's gaming performance.
More cores are the future; the industry is at a wall. Intel would never have let AMD get this close otherwise. Even Intel will need more cores and better software to keep people BUYING new chips!
Adored TV... why do I feel like you are the only sane voice among the YouTuber tech press? You deserve 10x the number of subs you have... I wish Gamers Nexus would take this information in. Top guy!
It seems to me - not at all surprisingly - that there's a lot of extreme polarisation here. To say that AMD Ryzen is bad seems somewhat far-fetched; they are clearly good CPUs. But then so does describing it as a MONSTER that will DESTROY the competition, literally words I've seen used in multiple videos. Seems to me there's childish fanboyism going on on both sides, crucially from people who clearly have the tech knowledge. One side exaggerates how good it is, and one exaggerates how good the competition is. You may ask how I know this, not being so tech-minded. Well, the use of words, and the fact that these knowledgeable people are giving us contradictory evidence. Another thing tech people do is become very anal about minor 1-3fps differences and that kind of thing. The takeaway is that tech-savvy people aren't immune to fanboyism, and oh how obvious that is watching so many channels now.
I don't think they exaggerate Ryzen's performance. If you have a look at this website, which tested Ryzen under Linux, you can see it outperforming even dual Xeon setups. www.servethehome.com/amd-ryzen-7-1700-linux-benchmarks-zen-buy/
Is this the Wendell Linux Ryzen setup review? I heard him saying that Windows needs to include new microcode and instructions for Ryzen to improve, since the task scheduler is a mess right now, and that games with updates for the Ryzen CPU will get much better.
Haven't heard about a setup like this, as I only read the article today. But yes, I've also read that the current state of Windows needs to be expanded code-wise for Ryzen.
The Ryzen CPUs 100% are a monster that destroys the competition: in productivity they win on price, performance and power consumption. The thing is that the gaming and the productivity benchmarks aren't aligned. The game benchmarks aren't aligned because game and benchmark programming are different. A benchmark always has the same type and amount of work, so AMD can make sure their CPU works best with it. A game, however, has to be optimized for the platform rather than the other way around, because the workload is far more dynamic.
Disclosure - I am a software engineer (I also got a computer engineering degree in college, but I wear the software engineering hat most of the time). TL;DR: gonna tear on Ryzen a bit, then make some concluding comments on possible optimizations to cover for this.

A couple of comments. First, those Handbrake benchmark results where the Ryzen CPU was faster than the 6950X are a bit misleading. One of my coworkers has a 20-core Xeon in his home desktop, and according to him, libx264 and libx265 (what Handbrake uses for h.264 and h.265 video encoding/decoding respectively) don't scale well beyond 8 cores or so. This is mainly due to the fact that you can only cut a video frame into so many chunks and still maintain a good compression ratio, and for various reasons there isn't the option to cut files into multiple sections (essentially assigning video on a per-keyframe basis instead of subdividing each frame). This is available in the proprietary version of libx265 but not the open-source version, which Handbrake uses (as it's an open-source application).

Second comment is actually about a follow-up to the TechPowerUp article you have at 12:50 (www.techpowerup.com/231268/amds-ryzen-cache-analyzed-improvements-improveable-ccx-compromises - Go/s is gigaoperations/s). There is one (major?) deficiency of the Ryzen memory subsystem that might haunt this year's Ryzen CPUs but could be fixed in the next silicon iteration. Apparently the CPU caches have a fairly high latency compared to their Intel counterparts, and a much lower throughput on the L1 cache. This will hurt applications with either of these two memory access patterns: applications operating on small datasets which reside in cache for a significant amount of time won't be able to feed the cores as fast, and applications whose memory accesses don't have good temporal or spatial locality will pay the higher latency. This could easily be what's hurting Ryzen's results in those low-resolution benchmarks, as there are some operations in computer graphics which can't be performed with either spatial or temporal locality.

The latency on the L3 cache is specifically another win for Intel. At least on Intel's architecture, the on-die PCIe controllers can read data out of the L3 cache directly (www.intel.com/content/dam/www/public/us/en/documents/white-papers/cache-allocation-technology-white-paper.pdf). This is easily (probably) also the case on Ryzen CPUs, but as the L3 cache latency is 2.69x, it takes longer to initiate an operation. If the operation size is small, this is going to incur some nasty penalties, whereas if the operation size is larger, the latency is amortized - but real-world loads are usually a mix of the two.

So, some concluding comments. Why is the Ryzen CPU faster than the Intel CPU on the video encoding benchmarks with identical core/thread count if the cache "is so bad"? Well, that's because a workload like video encoding is very cache-friendly (as streaming workloads are)! It's trivial to prefetch the next cache line (the spatial locality case) while processing the current line. This is actually the best-case workload scenario, as the working size is also enormous, so the latency is entirely amortized. So, how does one cover for higher latencies? Do other work while waiting on the memory system, or have a lot of work to do on datasets which fit in the cache. The newer graphics APIs (DX12, Vulkan) can actually help with this, as you have much higher control over the memory allocation (thus organization) and transfer patterns.
Attempt to group transfers by having data which is commonly updated together be located together. I could go on for pages and pages. There are things that can be done to reduce the effect of higher-latency memory, but you can't compensate for everything. Intel's implementation reduces the cost of memory operations which aren't done optimally or can't be done optimally. This is a fact. That doesn't reduce the need to search for more optimal solutions to these problems, but it does mean Intel will perform better when they can't be found. Feel free to respond constructively...
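To illustrate the spatial-locality point with something runnable, here's a minimal C++ sketch: the same summation done once in a prefetch-friendly order and once with a stride that defeats locality. The matrix size is arbitrary; only the relative gap matters, and that gap is exactly where a higher-latency cache hierarchy hurts more.

```cpp
// Minimal sketch of the spatial-locality point: the same work, done in an
// order the prefetcher likes vs. one it doesn't.
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    const int N = 4096;                       // 4096x4096 ints = 64 MB
    std::vector<int> m(static_cast<size_t>(N) * N, 1);
    long long sum = 0;

    auto t0 = std::chrono::steady_clock::now();
    for (int r = 0; r < N; ++r)               // row-major: walks memory
        for (int c = 0; c < N; ++c)           // sequentially, prefetch-friendly
            sum += m[static_cast<size_t>(r) * N + c];
    auto t1 = std::chrono::steady_clock::now();
    for (int c = 0; c < N; ++c)               // column-major: strides 16 KB
        for (int r = 0; r < N; ++r)           // per access, defeats locality
            sum += m[static_cast<size_t>(r) * N + c];
    auto t2 = std::chrono::steady_clock::now();

    using ms = std::chrono::duration<double, std::milli>;
    std::printf("row-major: %.1f ms, column-major: %.1f ms (sum=%lld)\n",
                ms(t1 - t0).count(), ms(t2 - t1).count(), sum);
}
```

On typical hardware the column-major pass is several times slower purely from cache behavior; the arithmetic is identical.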
On the cache thing and SMT: from what I understand, the problem right now is that it sees cores as cores and threads as cores too, so in essence it thinks everything it sees has its own cache, so when it jumps work between what it thinks are cores it's actually stalling the work. But also on the cache: isn't it a fact that Ryzen isn't using the L3 cache like a general L3 cache, but as a slave cache? So the latency issue shouldn't be a problem, as the do-das (yes, very technical, I know) mean it learns what you're doing and refines the flow of work through it, which is why even now in Cinebench, if you repeat the test over and over, you find it actually gets faster as the CPU learns what you're doing. Something Wendell also pointed out is that the voltage controller isn't actually working right on Ryzen, so it's just running at defaults; once that's actually sorted it should also have an effect on Ryzen. Not sure how, but you seem to know what you're on about, so could you maybe touch on that?
The perceptron-based prefetcher is actually really cool; it comes from the work they did on the Jaguar and Bobcat CPUs you find in the current generation of game consoles. I wonder how well it works in practice - i.e. does rapidly switching memory access patterns cause worse-than-normal prefetcher behavior? Can it store different weights for different threads? No idea at the moment...

The way the L3 cache functions is actually a bit of a hindrance under certain workloads. It's known as a "victim cache." If a line is to be removed from either L1 or L2, it is written to L3 (a victim of cache eviction), and if the data replacing the evicted cache line existed in the victim cache, it is replaced with the data you are currently evicting. A regular non-exclusive cache design (Intel, non-Crystalwell) would just discard the data if it were unmodified, or follow some defined eviction strategy. u/tuxubuntu actually did a good writeup here - www.reddit.com/r/Amd/comments/5x7oaq/ryzens_memory_latency_problem_a_discussion_of/ . Essentially what it boils down to is this: while Ryzen is an 8-core CPU, it's organized as 2 groups of 4 cores. Each group has an 8 MB L3 cache for a total of 16 MB on the CPU. Cores in one group are allowed to read/write the L3 cache in the other group. This creates the latency problem with L3: while accessing the local group's L3 is fast, accessing the other cluster's L3 cache is slow, as the request has to cross an interconnect that operates at only 22 GB/s (way slower than the cache itself, even slower than the DDR4 RAM the CPU is connected to). You have to wait for the other cache to service your request before continuing, and this appears to take a while.

The reason the victim cache behavior is particularly bad is that you have to perform both a read AND a write across the interconnect. A regular non-exclusive cache implementation would just discard the data without writing it back, assuming the data was unmodified (data that represents instructions for instance, or textures, or read-only files, etc.). This would only have cost you one trip across the interconnect versus two (dunno if the cache op can be done full duplex (simultaneous send and receive)). The scheduler improvements being discussed are to try and keep threads which access the same memory ranges on the same core group, and threads that don't access similar data on separate groups. This is to try and reduce the amount of chatter across the interconnect. It isn't going to fix everything, as there are plenty of applications which will consume large amounts of memory in non-optimal access patterns.

Now, I have little data on this next bit, as I don't have a Ryzen system. These are just my thoughts given the information I've found; I could be off on this part. The "better gaming with SMT off" result is possibly related to the L3 cache as well. Essentially, SMT is a technology for making better use of CPU resources. Multiple threads are tracked by each CPU core (2 on Zen and Intel Core, 4 on Xeon Phi, 8 on POWER8). When one thread stalls on some operation (causing a clock cycle or more where the CPU can't do anything), it switches to another thread, which hopefully is able to do something useful in that time. This can evict the other thread's code and data out of the L1 cache, which might cause accesses to L2 *and* L3 (due to it being a victim cache). If these threads are accessing unrelated data, it could easily be causing cross-core-group accesses, L1 I-cache evictions, etc.
Turning it off probably helps gaming because it reduces the rate at which the cross-group interconnect is utilized when the work is scheduled poorly (the thread swap rate will be dictated by the scheduler tick instead of potentially every other memory op). A smarter scheduler would probably be able to improve this, but it also means that you want to group related threads together for SMT - yet that also means you might lose performance if you choose to put a task on the second thread of a core versus another core, even if that core is in the other core group. OS task schedulers are complicated =P. However, at least some work on this front has already been done, as there is code in the Linux kernel to deal with the L4 victim cache on Intel's Crystalwell architecture. Some of those concepts might be applied to dealing with Ryzen's L3 cache. Just my thoughts.
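If anyone wants to see those latency tiers for themselves, the classic measurement is a pointer chase: each load depends on the previous one, so the average time per load approximates the latency of whichever level of the hierarchy the working set fits in. A minimal sketch follows; the working-set sizes are rough guesses at typical L1/L2/L3/DRAM boundaries, not Zen-specific figures.

```cpp
// Minimal pointer-chasing sketch: average load-to-load latency vs. working
// set size. As the set outgrows each cache level (L1 -> L2 -> L3 -> DRAM),
// the per-access time steps up; that step is the latency being discussed.
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <utility>
#include <vector>

int main() {
    std::mt19937 rng(42);
    for (size_t kb : {16, 256, 4096, 65536}) {       // roughly L1/L2/L3/DRAM
        size_t n = kb * 1024 / sizeof(size_t);
        std::vector<size_t> next(n);
        std::iota(next.begin(), next.end(), size_t{0});
        // Sattolo's algorithm: a random permutation that is one single cycle,
        // so the chase visits every slot and can't settle into a tiny loop.
        for (size_t i = n - 1; i > 0; --i) {
            std::uniform_int_distribution<size_t> d(0, i - 1);
            std::swap(next[i], next[d(rng)]);
        }
        size_t idx = 0;
        const size_t steps = 20'000'000;
        auto t0 = std::chrono::steady_clock::now();
        for (size_t i = 0; i < steps; ++i) idx = next[idx];  // dependent loads
        auto t1 = std::chrono::steady_clock::now();
        double ns = std::chrono::duration<double, std::nano>(t1 - t0).count();
        std::printf("%6zu KB: %5.2f ns per load (idx=%zu)\n",
                    kb, ns / steps, idx);
    }
}
```

The step between the 4 MB and 64 MB rows approximates the L3-to-DRAM penalty; measuring the cross-CCX cost specifically would need a two-thread variant sharing data across core groups, which is beyond this sketch.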
Yeah, it's the victim cache I was on about; I knew I was using the wrong term. So it's using the L3 cache as a victim cache and then branch prediction to speed it up. It's all very much black magic to me; I learned most of it from PCPer and OC3D when it comes to what's going on with Ryzen internally. But I think it's just gonna be a case of waiting until everything actually slots into place - how it can turn parts of itself on and off, move work around, jump thread speeds up as and when required over turbo clocks - basically maturity in BIOS, software and OS.
With the scheduler, this is what I was referring to: www.reddit.com/r/Amd/comments/5x54ww/my_theory_on_why_ryzen_does_not_perform_in_games/defc6un/ so that should get fixed soon.
Yeah, I mean, I think the problem with the scheduler is that since it's just seeing everything as a core, each with its own cache, it hasn't got a clue where to send work, so Windows, in Windows fashion, is bouncing it around left, right and center, and it's throwing it about so much it's degrading performance, which I think is what we are seeing in the games. That would sort of make sense, as from what I understand synthetic benchmarks assign the work to cores/threads themselves for stress testing, while games go by the scheduler, as in basically "that's free, do this". With the victim cache, I think Intel actually used this a while back - I want to say Haswell - but they sort of gave up on it, though if I remember right, Linux actually got it working really well, which could be a good sign for it here. This is probably unrelated and I'm brainstorming here, but I think Vega and Ryzen are meant to be teamed; to me the tech in Vega is so different it feels like there is some kind of interconnect with Ryzen. Edit: you said it was Crystalwell, not Haswell - cool, my bad.
This release was a clusterf***. A ton of issues, which can happen when something is new. Then the reviewers messed up too in many ways, though some of it is the fault of those issues. Also, the R7 should be compared to something like the 6900K. The way I see it, in realistic scenarios the R7 is either the same as the Intel CPUs, even the 7700K, or has lower FPS where the FPS is already so high the extra frames make no noticeable difference. Some reviewers benchmark at low resolutions or settings while using a high-end GPU and then claim it is done to measure real CPU performance, even though NOBODY plays games like that. One could just as well use Cinebench and say it measures true CPU performance, and the results would be the opposite. Both are equally meaningless for understanding actual real-world performance. So, having a 7700K or an R7, you won't see any difference in gaming (though many people have said the R7 has less stutter). The R7 seems more future-proof though; that's what I would get if I were buying a CPU now. Yet some reviewers say that you absolutely should not get an R7 for gaming. To be honest, waiting for the R5 is probably a good idea, unless you've got an unlimited budget.
There is no guarantee that AMD will be able to pull off the optimizations needed in order to 'pull ahead' of Intel. That's of course 'on AMD' and we need to wait and see. Perhaps one of the best indicators of TODAY is the low res benchmarks. Sure, maybe you're right it's not the best indicator, but that's all we have at the moment for anything reliable. The 'press' can say all they want about how it's possible that AMD will have the optimizations it needs to 'pull ahead'. It's also responsible to suggest not buying a product on the 'hope' that it will improve either. The reviews at the moment are all that we have. If AMD didn't want to have this situation, they should have been more forthcoming about the CURRENT limitations of the CPU and what to expect CURRENTLY. I think some people were clearly disappointed that AMD didn't have greater results. Let's see if AMD can get in some of those optimizations and have these filter on down to the reviews quickly. I want to build a new PC on the facts of how it performs, not on the HOPES of how it 'COULD' perform.
There's a guarantee Microsoft will. And 1 out of the 3 existing core issues of Ryzen in gaming lies on Microsoft to fix, not on AMD. Microcode and memory speeds are on AMD/mobo manufacturers, but Windows 10 confusing logical cores for physical ones and being unable to use SMT properly is Microsoft's problem.
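For reference, Windows has long exposed the physical-vs-logical topology; the complaint is about how the scheduler uses it. Here's a minimal Win32 sketch listing which logical processors are SMT siblings of the same physical core - exactly the distinction at issue:

```cpp
// Minimal Windows sketch: ask the OS which logical processors are SMT
// siblings of the same physical core - the information a scheduler needs
// so it doesn't treat 16 threads as 16 full cores.
#include <windows.h>
#include <cstdio>
#include <vector>

int main() {
    DWORD len = 0;
    GetLogicalProcessorInformation(nullptr, &len);  // query required size
    std::vector<SYSTEM_LOGICAL_PROCESSOR_INFORMATION> info(
        len / sizeof(SYSTEM_LOGICAL_PROCESSOR_INFORMATION));
    if (!GetLogicalProcessorInformation(info.data(), &len)) {
        std::printf("query failed: %lu\n", GetLastError());
        return 1;
    }
    int cores = 0;
    for (const auto& e : info) {
        if (e.Relationship == RelationProcessorCore) {
            ++cores;
            // ProcessorMask has one bit per logical processor on this core;
            // two bits set means the core is running SMT siblings.
            std::printf("core %d: logical mask 0x%llx\n", cores,
                        (unsigned long long)e.ProcessorMask);
        }
    }
    std::printf("%d physical cores\n", cores);
}
```

On an 8C/16T Ryzen with SMT on, you'd expect 8 lines with two bits set in each mask; a scheduler treating all 16 as independent cores is ignoring exactly that pairing.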
If you cannot guarantee tomorrow, the best thing today is to compare results at your own gaming resolution. If you game at 720p, then go look at that. However, if you take an educated look at the trend of game engines, they are getting more threaded, not less. We are long past the days of single-threaded performance being all that matters. Soon, we will look back at quad cores and laugh as they become the minimum spec.
It's more about what's the best buy now for one's particular use case. As for your statement on 'trends', I'm reminded of the 64-bit version of Windows XP, and of CPUs being able to run 64-bit applications way, way, WAY before now (starting from around 2003). Yet only recently have a lot more applications gained a 64-bit version. Many games STILL don't have a 64-bit client (I'd hazard a guess that about 95% of the games out there right now only have 32-bit clients). So when it comes to the future and trends, I'm very skeptical, as a big part of this is major adoption by enough players in a timely manner to make it a relevant build-decision factor. Sure, perhaps in 5+ years we'll see the maturity from Windows, app developers etc. to take advantage of a particular CPU's performance ability. Yet 5 years from now, I'll be on my next build.
Got sent here from a post Avro Arrow made on a thread asking "1700 vs 1700X?" at Tom's Hardware. Other forum readers derailed into an off-topic argument about "i7 7700K vs Ryzen", ending with Avro Arrow's post linking this video and saying, "This video was perhaps the most amazing thing I've seen in investigative tech reporting…" among Ryzen reviews, and I must concur! This IS the best commentary video report I have seen documenting the facts. I like how you show benchmark comparisons from the last 5 years! It makes the meaning of the Emperor's Imperial Aquila from Warhammer 40K ring damn true: ~ Mankind cannot see where they are going if they do not remember from where they have come.
DatNinjaGuy TOTALLY! He may have just uncovered the biggest unintentional hoax in CPU reviews ever! This is as big as when Charlie Demerjian exposed nVidia!
This is a good explanation of Ryzen. What saddens me is the rest of the tech press. You and Tech Deals didn't just run benchmarks, you explained everything in depth.
This was very, very interesting. The BF1 comparison was enlightening. I gave up on that game on my 2500K @ 4.4GHz. The game would use 100% on all four cores, and although I could get 120fps it was so choppy, with lots of stutter. Can't wait to pick up a 1600X. Subbed.
Sure, it's impressive that the FX 8350 (FX 8370) has taken the advantage over the i5 2500K after 5 years, but... it's after 5 whole years, just like you said. That's a long time to be waiting for a bump in performance. I don't think people want to wait until 2022 for their Ryzen CPUs to outperform the i7 7700K, for example. Purchasing the R7 1700 solely for gaming is like flipping a coin when you don't know how the future is gonna shape up. Not to mention the fact that the i5 2500K benefits a lot more from overclocking than the FX 8350, so the i5 2500K is still in the lead for modern games today if you look at it that way. It's the same scenario with the i7 7700K vs the R7 1700.
You need to understand it the other way around. He said that the FX 8350 took over the 2500K when there were basically no developers optimising games for its architecture, which took about 5 years. But currently AMD is working with 1000+ developer teams to optimise for Ryzen. I hope this was helpful.
Yep, overclock the 2600K @ 4.7GHz and pair it with CL9 2133MHz RAM and boom... it's suddenly a nice beast again. Ryzen overclocking doesn't look too good so far, and we know that Intel likes to clock a fair bit higher. The R7 can still be overkill for many years. That's why the R7 isn't recommended yet and everyone is waiting for the R5 and R3. If those clock over 4.5GHz easily and can be paired with fast RAM, then there's a damn nice competition going on and Intel is shitting their pants. Even Gamers Nexus said it's not a good buy YET for gaming. Some games will be better with multiple cores/threads and some won't. It always comes down to your personal gaming preferences and library. Some games are played for 5+ years.
I would say when you can buy an 8c/16t CPU for less than 350€/$, you should buy it. Also, I think the term "overkill" is obsolete by now, because why buy a 4c/8t CPU for the same price and call it good for gaming, while calling a better-priced and much stronger chip overkill? (This is written in a friendly manner) :P
You missed the point - it's always been stupid to drop settings to 'eliminate the GPU bottleneck' when testing CPUs, because it doesn't tell you how the CPUs currently perform in real scenarios, nor does it give you any real idea of how they will perform in the future.
Hearing the speculation on the future of Ryzen is interesting, but I'm not really a fan of basing my purchasing decision of a product based on what's to come. That's why I think the press are not wrong for bench-marking at 1080p Ultra with a Titan X Pascal - it's how you see what you're really getting and how much headroom you really have. I hope that the situation is going to get better with Ryzen, as you say, but just as I wouldn't buy a games console before the killer games are released, I'm not going to buy a CPU based on the possibility that developers will take it on and close the gap. I'm going to base my decision on the state of things today, with the growth potential as a tie-breaker.
Well, they didn't benchmark at 1080p Ultra. What they did was benchmark at 720p on the lowest settings possible. While that may expose the full power of a CPU, that low-preset benchmark is, in my opinion, not really reliable, because Windows and the motherboards aren't able to utilise Ryzen properly yet.
hello walkman: I'm not actually in the market for a CPU right now - I bought a Skylake i5 a year ago and am still reasonably happy with it. I was just making the point that I don't think the tech press are wrong to bench at 1080p Ultra with a Titan X P, and that I'd rather choose or recommend a CPU based on its performance today rather than hope and speculation about future performance. That said, upon rewatching the video, it seems Adored was mainly slamming those 720p low benchmarks, which I agree are useless to man and beast. :)
I don't care about anyone's opinion on tech as I do my own unbiased research. I've gone with AMD CPUs in all my builds and I'm currently waiting for my 1800X to arrive. I never lost faith in them, and I'm glad this channel nailed the true hidden power of AMD. Subbed, cheers!
I wanted to add to your point on the FX-8350, as I own one. I've been using this FX-8350 for years now, and it seems to get better every year I think about replacing it - I keep deciding it's still working fine, and it still is. Doom caught me a bit by surprise when I used the Vulkan API: I can run 4K Ultra and get pretty smooth gameplay averaging mid-50s to high-70s fps. I have enjoyed my AMD CPU even as it has aged, like the fine wine that it is. Your review, though, has sparked me to upgrade my CPU and in fact my whole system. You are the most informative and well-spoken of all the youtubers I've had the pleasure of listening to. Thank you for your hard work, and I hope you keep it up - it's greatly appreciated and much needed in this thing we call the NET. And here's a sub with a sprinkle of like.. :)
And my FX-8320 has held its own. Hooked up to a Corsair H100i 240mm water cooler and with the right settings, it hums along at 4.6GHz ON ALL CORES quite nicely. If I had spent the extra $100 on the i5 back then, I wouldn't have water cooling - this was my first foray into it, and it was worth the experience even if I lose a few FPS in games. It's been paired with this R9 290, which has been a great match: neither is ever really a bottleneck in games at 1080p at "practical" ;) settings.
Gamers Nexus had it coming. A year ago Steve said he wasn't hoping for ''Zen'' to be successful; days before release he was saying ''Don't preorder Ryzen''; then his benchmarks came out way different from any others on the net, and he called Ryzen a ''disappointment'' while a little later saying ''Ryzen is done and dusted, let's get on to something faster - the GTX 1080 Ti''. So I guess no Intel payoffs or fanboyism here, and if you say there are, you are one dumb delusional AMD fanboii....
Grow a thick skin - people will be assholes, and there's not much you can do about it. Just buy what you feel is best for your specific needs. And this is coming from an i7 5960X and R7 1700X owner. I can give you tips on OCing it if you get a Gigabyte motherboard.
BodethIII, it's not like I care what random dudes with long hair have to say - no fanboii here. I've got my eyes on the 1600X, seems like it's gonna be a hell of a deal!
I want to add something to your excellent video. From an engineering point of view, I suggest, dear AdoredTV, that you look at the R7 not as an 8-core CPU but as a 4+4 CPU (2 CCXs). These R7s are in fact two CPUs on the same die, which will of course have some "problems" with core and thread management - not so far from Intel's QPI. This is a server architecture: look at the RAM slot population and speed issues we have right now, which are typical in the server world. I also want to mention the ECC support (great!). (In the meantime, AMD is also releasing its Naples platform for the server world.) More details will come ...
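On Linux you can actually see that 2-CCX split yourself by checking which logical CPUs share each L3 cache slice. A minimal Python sketch using standard sysfs paths - the exact groupings it prints will depend on your CPU and whether SMT is enabled:

    # Group logical CPUs by the L3 cache they share (Linux sysfs).
    # On a Ryzen R7 this should print two groups - one per CCX.
    import glob

    groups = set()
    for cache in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cache/index*"):
        with open(cache + "/level") as f:
            if f.read().strip() != "3":      # only look at L3 slices
                continue
        with open(cache + "/shared_cpu_list") as f:
            groups.add(f.read().strip())     # e.g. one range per CCX

    for g in sorted(groups):
        print("L3 shared by logical CPUs:", g)

Two separate L3 groups on one die is exactly the "two CPUs on one package" layout you'd otherwise only see on multi-socket servers.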
Adored TV, I can agree with your conclusions based on my own experience. I am running an AMD 1100T 6-core CPU with a GTX 970; I overclocked the CPU to 3.9GHz in late 2015. The fact remains that I am rocking modern titles at around 80FPS on a computer I built in 2010 for $1,000. I am in the process of building my Ryzen 7 1800X rig and look forward to running this puppy for another 7 years. Cheers
Pretty much my exact feelings on Ryzen. The synthetic benchmarks show far higher potential than what we are seeing in the completely unoptimized OS and gaming software. More to see later, but it's the time frame I am concerned about - AMD cannot let too much time pass without significant gains on this front. Then again, AMD isn't the best with drivers/software, so I am a bit nervous. I mean, low-hanging fruit like CrossFire in windowed fullscreen still hasn't been crossed off after all these years, even after they made a huge deal about mGPU during the 400 series release. Keeping my fingers crossed.
That is the great thing about Windows: you can get a barely-optimized new CPU working well enough to ship quickly and then update around it. I can see Ryzen coming to the Mac in the fall. I'll know the optimization has been done and ironed out by engineers when a desktop Mac hits the shelves.
This video needs to be shared everywhere! I had a list of videos to watch on Ryzen CPUs to decide whether to purchase or not, but this video has saved me time and swayed me to buy one.
I think the difference seen in Computerbase's results is that they are using a different CPU as the reference in the different graphs presented, so they have shifted the goalposts while the results still scale in the same direction. It's a tactic many companies use with regulatory bodies: shifting the goalposts, as mentioned, while keeping the 'safe' space the same to accommodate a shift in product performance. I might be wrong dude, great video as always.
I have to say, when I first heard you saying that the low-resolution benchmarks are irrelevant, I was a bit skeptical, but you made some very good points. What I think people should also show next to the average FPS is the average CPU and GPU utilization. That way you could see how much power is left in the CPU and actually see the lack of optimization.
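Something like this would do it - a minimal Python sketch that samples per-core CPU load and GPU load once a second while the game runs. It assumes the third-party psutil package is installed and an NVIDIA card with nvidia-smi on the PATH:

    # Log per-core CPU usage and GPU usage once a second during a benchmark.
    # Stop with Ctrl+C; redirect stdout to a file to keep a log.
    import subprocess
    import psutil  # third-party: pip install psutil

    while True:
        cores = psutil.cpu_percent(interval=1.0, percpu=True)  # blocks ~1s
        gpu = subprocess.run(
            ["nvidia-smi", "--query-gpu=utilization.gpu", "--format=csv,noheader"],
            capture_output=True, text=True).stdout.strip()
        print("CPU per core (%):", cores, "| GPU:", gpu)

If the GPU sits well below 100% while a few cores are pegged and others idle, that's the unoptimized-threading signature the video is talking about.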
Xbox Scorpio will motivate developers to optimize their games for the Ryzen architecture - if Scorpio uses a Ryzen CPU, of course. My belief is that the Xbox Scorpio CPU will be based on Ryzen, because that would be a win for AMD in the PC gaming market and a win for the Windows 10 ecosystem as well. Just my two cents..
A couple of points, though. First, looking at averages over multiple games can obscure more in-depth results, like the FX's bad performance in GTA 5 running on 4 threads. True, it's a game from 2013, but a lot of people still play GTA Online. Same with games like StarCraft 2, or e-sports titles running mainly on 2 threads - if high fps is your goal, even a G3258 beats the FX chips. Secondly, there was no way of knowing how long it would take games to properly scale up to multiple threads, and it indeed took until this year (the year it was replaced) for games to show up that properly use them. So I'm not sure how much of a benefit it is to have your CPU finally edge out the Intel competition when it's way past its expiration date. And now that games finally use 8 threads, how long until games start using 16? No way of knowing - though since AMD controls the console market, it could theoretically happen. Thirdly, the 6900K is also an 8C/16T CPU, not a 4C/4T 2500K, so the dynamic of higher IPC versus more threads, as with Bulldozer, is completely different here. Clearly something else is going on, rather than a different CPU dynamic, that is causing Ryzen to perform poorly. I don't think the press is wrong to suspect early EFIs or power-state issues for the poor performance - we could very well see massive improvements from BIOS updates and game patches. That said, in benchmarks like For Honor the minimum framerates are quite compelling and indicate untapped potential. But the argument "just increase GPU load to crush the delta" is not a particularly good one, and the press is not at fault for calling AMD out on such requests. Oh, and lastly, the Joker benchmarks have proven to be botched by Tech City. But I agree that the CPU is much faster; it just needs some love.
Yes, it's true about FX - it's undoubtedly still worse in many games than SB, especially when SB is OC'd. I just wanted to put across the point of how the industry actually changes versus the perception of how it changes. Also, Ryzen is no Bulldozer by far - it's in fact an incredibly fast CPU.
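To put rough numbers on the IPC-versus-threads tradeoff discussed in the two comments above, here is a toy Amdahl's-law calculation in Python. The per-thread speeds (an FX-like chip at ~70% of a Sandy Bridge-like chip) and the parallel fractions are invented purely for illustration, not measurements:

    # Toy model: 4 fast threads vs 8 slow threads under Amdahl's law.
    # Per-thread speeds and parallel fractions are invented, not measured.
    def effective_speed(per_thread, threads, parallel):
        frame_time = (1 - parallel) + parallel / threads  # Amdahl's law
        return per_thread / frame_time

    for p in (0.3, 0.6, 0.9):  # fraction of frame work the game parallelises
        fast4 = effective_speed(1.0, 4, p)   # 2500K-like: high IPC, 4 threads
        slow8 = effective_speed(0.7, 8, p)   # FX-like: lower IPC, 8 threads
        print(f"{p:.0%} parallel -> 4 fast threads: {fast4:.2f}, "
              f"8 slow threads: {slow8:.2f}")

On these made-up numbers the 8 slow threads only pull ahead once roughly 90% of the frame's work parallelises - which is exactly the shift the video argues happened between 2012 and 2017.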
I don't think you fully understand how they reduce the CPU overhead, Dex. DX12, Mantle, Vulkan etc. do have reduced CPU overhead per task, but that by itself isn't the only change. The main difference is that you no longer have to feed the GPU from just one CPU thread, and you get full control of the GPU hardware. This means that with DX11 and below we rely hugely on single-core performance, because feeding the GPU with more than one CPU thread becomes counterproductive (for DX10 and below I don't think you can use more than one CPU thread to feed the GPU at all). This is not the case with the new APIs. Soon enough there will be a decent variety of DX12/Vulkan game engines where the complexities of the new APIs are mostly hidden from the game developer. The reasons that have kept us relying on single-core performance in games are quickly whittling away.
It's important to do some research before making claims. gpuopen.com/gaming-product/vulkan/ "Vulkan™ gives software developers control over the performance, efficiency, and capabilities of Radeon™ GPUs and multi-core CPUs."
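A back-of-the-envelope model of why multi-threaded submission matters for frame times - every cost and count below is an invented round number, purely to illustrate the single-thread-feeding bottleneck described above:

    # Toy model of CPU time spent issuing draw calls per frame.
    # All costs are invented round numbers for illustration only.
    DRAW_CALLS  = 10_000
    COST_DX11   = 2e-6    # sec per call on the one feeding thread (DX11-style)
    COST_NEWAPI = 0.5e-6  # lower per-call overhead (DX12/Vulkan-style)
    THREADS     = 6       # threads recording command buffers in parallel
    SUBMIT      = 0.5e-3  # one cheap final queue submit

    dx11   = DRAW_CALLS * COST_DX11
    newapi = DRAW_CALLS * COST_NEWAPI / THREADS + SUBMIT
    print(f"DX11-style, 1 thread:    {dx11 * 1000:.1f} ms/frame on the CPU")
    print(f"New-API, {THREADS} threads:      {newapi * 1000:.1f} ms/frame on the CPU")

The point isn't the specific milliseconds; it's that the old model's submission cost sits entirely on one core, so single-core speed caps the framerate, while the new APIs let that cost spread across however many cores you have.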
When you bring up the difference in performance and how AMD closed the gap at around the 7-minute mark - isn't the reason AMD seemingly "closed" the gap that the percentages were readjusted against the performance of the latest contender (4700k)? It's the same on UserBenchmark where, prior to the latest-gen GPUs, the R9 390 would have sat at around 6-8% over a 970, but now shows only a 4% difference after the addition of the 1060, RX 480, 1070 and 1080.
Well, No. 1, it's a relative percentile graph, not an FPS graph. No. 2, it's using different games. No. 3, it's using a Titan graphics card now. No. 4, it's at 1080p and 720p, so you have no idea what the real-world values are. For instance, if the FPS is sufficiently high (which it should be if you're using a TITAN at 720p!), then that 8% difference between the 8350 and 2500K could be only a couple of frames, which could be seen as negligible. We don't know. It could be repeatable, it could be an anomaly; they might not be overclocked, the RAM might be a different speed, the games are different - there could be a myriad of reasons it's showing different results. But to assume it's just better now (questionable when you compare other results, from the likes of Eurogamer and Digital Foundry) is to ignore a LOT of things that could differ between tests and are indiscernible. Which is one of the points he makes in the video: how can you trust the benchmarks of some tech websites or youtubers when you can't see their methodology? It's easy to take things at face value when they back up what we want to see. It's called confirmation bias.
Richard Smith, I mean, it's not - if you overclock them both, the 2500K stays on top, but the gap is minimized. That doesn't take away from what he said at all, though, regarding the FX seemingly improving as games naturally took more advantage of its cores. They were ahead of the curve, just with an inferior product, I guess.
+AdoredTV I'm glad you brought this up this topic (and took this direction with your Ryzen vid so it didn't scoop mine!). I hadn't considered performance increase over time (albeit 5 years). I think it's very obvious 8 real cores is more worthwhile than 4 cores though. :)
Also, if you need a moderator, please feel free to add my channel name to your list in settings. I won't remove comments unless they are ungodly, as they do earn a small amount of currency.
This guy is going around begging for moderator rights on all channels now LMFAO. Get a grip dude, its the internet, you don't need to act high and mighty and try to censor others.
I like that you have a basis for what you say, and I hope you are right. One of the few pro-Ryzen videos I've seen that doesn't feel like it's ruled by a fanboy. Good video - earned you an Intel user's sub fo sho. Looking forward to more.
Excellent video. I really like the way you collate all the info to make proper, well-educated and informed predictions. I shared the masterplan video on FB, I was that impressed with it. Keep up the good work.
So an environment built with Intel compilers and optimized for years specifically for Intel hardware isn't suited to a 100% new microarchitecture - one that can't use a lot of tools trademarked by Intel to bridge the gap unless they [AMD] want to spend the next 30 years [defending against] litigation? Oh no!
The reason it catches up is that modern physics engines are getting better at using multithreaded processors. So, simply: clock speed x number of cores x instructions per clock... so I agree. The problem is, games are designed for Intel chips and graphics cards.
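That formula, made concrete - the clocks and core counts below are spec-sheet numbers, but the IPC values are rough guesses included only to show how the multiplication works, not measured figures:

    # Rough peak throughput ~= clock (GHz) x cores x instructions per clock.
    # Clock/core figures are from spec sheets; the IPC values are guesses
    # used purely to illustrate the formula.
    chips = {
        "FX-8350  (8c @ 4.0 GHz)": (4.0, 8, 0.7),
        "i5-2500K (4c @ 3.3 GHz)": (3.3, 4, 1.0),
    }
    for name, (ghz, cores, ipc) in chips.items():
        print(f"{name}: ~{ghz * cores * ipc:.1f} G instr/s if all cores are fed")

On these made-up IPC numbers the FX only comes out ahead if software actually feeds all eight cores - which is the whole argument of the video.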
Finally, my fellow budding prosumers: no need to get a 2011 Xeon 25xx with no warranty whatsoever! Heck, just go forth with the worry-free new R7 1700, the best bang-for-buck number cruncher out there. :D Gaming ain't gonna make us any new $$ - it's the content we give to the world, be it Monte Carlo reports in spreadsheets or R, or videos on how to use new tech. It's all about efficient x86 threads, and Ryzen delivers just that.
You said Ryzen needs a better GPU. Look what happened when that GPU turned up: Nvidia GTX 1080 Ti CPU Showdown - i7 7700k vs Ryzen R7 1800x vs i7 5820k: www.eteknix.com/nvidia-gtx-1080-ti-cpu-showdown-i7-7700k-vs-ryzen-r7-1800x-vs-i7-5820k/4/
Some people are calling you an AMD fanboy. LOL - this AMD vs Intel thing is getting crazier than politics. The press is pushing the 1080p gaming results so hard that I haven't seen much else reported on Ryzen. I highly doubt Intel is influencing reviewers, but things are definitely warped. People also neglect that VR already fully utilizes 16 threads! The look at the lower-gaming-resolution theory was very interesting! Also, I have wondered whether processors hit latency limitations at very high fps that would never appear with an even greater workload at a lower fps. But that's just a curious contemplation, not even a hypothesis.
Every time I start an AdoredTV video it's like, "OK, better clear my schedule for the next hour." Not to say the videos are too long - nobody else delivers in-depth reviews on this level - I just don't always have time for it. What would be nice is a quick summary, so I could get the gist of it and come back for the full video when I have time lol.
Thank you for this, bud. I've been going crazy at some of the other reviews out there - almost like common sense suddenly became a rarity - so it's nice to see you restoring some balance in the vid. You sounded nicely pumped up near the end, as I was myself. Great video ;)
Joker's results in the first Ryzen video for the i7 7700k don't match other reviewers' results - 20-50fps lower - so that's an anomaly at best, fishy at worst. You can argue that you wouldn't spend $500 on a CPU for 1440p or 4K gaming (where a modern i5 works), but you would for high-framerate gaming. And I wouldn't call 1080p a low resolution. You want to test the CPU to its limits for gaming, and 4K or even 1440p gaming does not do that. Reviewers first and foremost need to test actuals and then look at speculative scenarios (and given the tight deadlines, there wouldn't have been much time for that). As far as I can see, AMD botched the launch and frustrated reviewers. AMD set the wrong expectations for gamers and got hammered when those expectations weren't met. AMD should have done more to set expectations against Broadwell-E at launch rather than doing gaming comparisons against a 7700k etc.
"Whenever you find yourself on the side of the majority, it is time to pause and reflect." - Mark Twain Especially if that majority copied methodology one from another. Joker published video in response to Tech City. Count as response to all mudding him. Ganging upon man with different results sounds very couregous. Especially with new platform launch, BIOSes all over the place, Win 10 scheduler confuse. Joker had his RAM set to 3000Mhz. Stevie couldn't. Should I say Stevie is amateur? Probably not. I would blame Asus C6H but since Stevie claims it's not,it's Ryzen I'll say he just doesn't know how to set RAM.
+animegamingdude They compared various Ryzen CPUs in gaming scenarios against Intel CPUs, showing Ryzen beating Intel i7s - something they were pretty much never able to do in any gaming scenario for the last 5 years. In 2017, where we are seeing heavily multi-threaded games (lol, I imagined myself saying that with a Scottish accent for some reason), what did you think was going to happen when it's seen struggling against i5s? You can argue until the cows come home that it will improve (I'm sure it will, but I don't expect i7 gaming performance at high framerates for a year or two yet), but for goodness' sake, you don't set expectations that can run away like that without tempering them down to reality. It's better to surprise at launch than disappoint (unless AMD made astrobucks from pre-orders, in which case I suppose it was a success). But the worst lot in all this were the people going nuts at Gamers Nexus - that was disgraceful. /r/AMD has become a joke: rather than a place for people who follow the tech AMD releases, it is a place where people worship the AMD brand.
Adored, you are going to find out that people who tell the truth get burned at the stake or put under house arrest. On the positive side, you will find you have a higher-quality viewer base, and they will be more loyal. Money and politics make for strange scientific results - always has, always will.
Hardware Unboxed did test simulated 4+0 and 2+2 core configurations, and it showed that the CCX configuration does not affect Ryzen's performance. Just like you said, man, it is only a matter of OPTIMIZATION. Back in the day we had some old games run faster on a Pentium 4 Prescott than on an AMD Athlon X2, simply because not many games were built with multicore in mind back then, only a fast single core. Same story here, nothing new.
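You can roughly reproduce that kind of 4+0 vs 2+2 experiment yourself on Linux by pinning a communication-heavy workload to different core sets. The core numbering below (CCX0 = cores 0-3, CCX1 = cores 4-7) is an assumption about the layout - verify your actual topology first, e.g. with the sysfs sketch earlier in the thread:

    # Linux-only sketch: time a chatty multithreaded workload pinned inside
    # one CCX (4+0) vs split across both (2+2). Core numbers assume
    # CCX0 = 0-3 and CCX1 = 4-7; check your own machine's topology first.
    import os, threading, time

    def hammer(shared, n=1_000_000):
        for _ in range(n):
            shared[0] += 1   # forces constant handoffs between cores

    for label, cores in (("4+0 one CCX", {0, 1, 2, 3}),
                         ("2+2 split  ", {0, 1, 4, 5})):
        os.sched_setaffinity(0, cores)   # pin this whole process
        shared = [0]
        ts = [threading.Thread(target=hammer, args=(shared,)) for _ in range(4)]
        t0 = time.perf_counter()
        for t in ts: t.start()
        for t in ts: t.join()
        print(f"{label}: {time.perf_counter() - t0:.2f}s")

Whether the split configuration measures slower depends on how much cross-core traffic the workload really generates; treat it as a starting point, not a verdict.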
I am seeing endless smug "rebuttals" of this (very well researched) video, with one even saying that if you don't believe 1080p is the way to test, you should unsub from his channel. That particular content creator obviously likes living in an echo chamber. Honestly, the tech press seems unable to do any real research any more. AdoredTV is one of the few channels I can rely on.
That's what you took away from that video? How are you managing to completely ignore all the testing results, which would fall under "real research" by any definition. Hard data trumps speculation based on second hand sources every time. Also note that Hardware Unboxed agrees with AdoredTV when it comes to Ryzen, but he definitively demonstrated that the FX8370 in no way trumps the 2500K even today.
barfyman362 Lol, you're smoking crack, 100%... The Ryzen 1800X is on par with the 7700K in single-thread and crushes it in multi-thread. RAM fixes, BIOS and driver updates, and optimizations will certainly make you wish otherwise if you own a 7700K or buy one now. Every benchmark says this; games are the only issue right now... People like you kind of deserve to fall flat on your faces though!
No, it is not. For gaming you buy an i5 - it's been like that for years, and Intel fans are only saying otherwise now because AMD released a new CPU. Gaming = i5; bang for the buck = i5. An i7 was only recommended if you wanted to do some light rendering/production work and didn't have money for a 6/8-core CPU - meaning with the 1700-1800X you don't need an i7.
When I bought my Q6600 in 2007, I was told the C2D E6850 would be much better for gaming, and it was. I clocked that sucker at 3.2GHz on day one, though, and just finished Resident Evil 7 on it without a hitch. That's what Ryzen is to the 7700K right now - only it won't take 10, or 5, or even 1 year before that all becomes clear.
Elie J-louis, his methodology has always been the same. You can't blame him for showing you the numbers for a product that happens to be unoptimized... for now lol
Elie J-louis I don't see why people are getting on Steve so much. He is very thorough in his testing, and he is allowed to have an opinion. I can't imagine anyone actually believing he skews his results. And you can see why he holds his opinion, based on the numbers he got, what he heard from AMD, and the belief he has in his methodology. I'd love to see him and Adored debate one another.
The fact alone that he adds 1% and 0.1% lows to average framerates instead of just using "minimum fps" says something about his thinking. Minimum fps can be something you never notice, like background loading or a menu opening, but 1% lows will feel stuttery even when the average framerate is high. (Comparable, to a degree, to microstuttering in multi-GPU configs: you see that awesome three-digit fps counter, but it still feels like 30.)
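For anyone unclear on the difference, here is one common way to compute those numbers - average the worst N% of frames rather than taking the single worst one - sketched in Python with made-up frame times. Note how a handful of 100 ms hitches barely move the average but crater the 0.1% low:

    # Average FPS vs 1% / 0.1% lows from per-frame times in milliseconds.
    # Frame times are made up: ~100 fps with ten 100 ms hitches mixed in.
    frame_times = [10.0] * 5990 + [100.0] * 10

    def pct_low(times, pct):
        worst = sorted(times, reverse=True)       # slowest frames first
        n = max(1, int(len(times) * pct / 100))   # worst pct% of all frames
        return 1000 / (sum(worst[:n]) / n)        # their average, as FPS

    print(f"average : {1000 / (sum(frame_times) / len(frame_times)):.0f} fps")
    print(f"1%  low : {pct_low(frame_times, 1):.0f} fps")
    print(f"0.1% low: {pct_low(frame_times, 0.1):.0f} fps")

On these numbers the average still reads ~98 fps while the 0.1% low collapses to 10 fps - exactly the stutter an average-only chart hides.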
Great video! If I were in the market for a new PC, I would definitely go Ryzen - it is just better value. Everyone seems to be saying the 7700K trounces Ryzen in gaming, but they forget to mention it also trounces Intel's own 6850K/6900K, which cost considerably more; in fact, you can get three 7700Ks or R7 1700s for the cost of one 6900K. Yet when they mention the 6900K, they point out its obvious advantages in rendering and content creation - advantages Ryzen also has. I guess it's true that many people simply use their PC for gaming and nothing more, except maybe web browsing, where the 7700K and even the 6700K still win in current games. Still, value is everything to me, and Ryzen has that in spades. Its game performance is still great, even if currently a notch below Intel's.
Only peasants play at 1080p today; real gamers, like me, play at 1440p with a 144Hz FreeSync refresh rate :). Good and on-point video, but not too many tech reviewers seem to get it.
Except... over 90% of PC gamers play at 1080p or LOWER, so chill out with the peasant talk - that's the vast majority of PC gamers you're talking about. You and I are by FAR in the minority with 1440p 144Hz and 4K screens.
I still don't understand why anyone expects this CPU to game well. It wasn't made for gaming; it's a workstation CPU. It's like buying a truck and complaining that it isn't as fast as a car.
Joao Gamito, in the video I watched they made sure to point out that, in their opinion, minor performance tweaks could be squeezed out, but their analysis of Windows and the CPU usage logs suggested that Windows is distributing load across cores correctly - or at least that there isn't much optimization to be had there. Not sure which video you watched; perhaps their official "review" of Ryzen? I'm talking about the one entitled "No Silver Bullet".
AdoredTV, did you take into account any benchmarks or data other than what was quoted in the video? I ask only because of the response video by Hardware Unboxed... As an end user, I'm trying to get my head around which data or testing methodology is relevant... In that video, he retested what you quoted and came to the opposite conclusion. Would love to hear a response, and thanks for all your work btw.
Basically it all comes down to the settings and games chosen. HWU had their 2500K at 2133MHz RAM while Computerbase had theirs at 1333MHz. Both are outliers, but I feel 2133MHz on a 2500K is an extreme outlier that basically nobody ever used - 1600MHz was by far the most common memory for the 2500K, and it's what I'm benching mine at. The big difference is that Computerbase tested all modern games, whereas Steve didn't; he had one or two older, severely single-threaded games which tipped the balance hugely in favour of the 2500K. There were other issues too, i.e. benching inbuilt benchmarks instead of actual gameplay in Deus Ex and GoW4, both of which showed huge wins for the 2500K that just don't exist in actual gameplay. In the end, with the fastest GPU available and a lot of CPU-limited games, inbuilt benchmarks and highly overclocked memory, Steve at HWU only found the 2500K around 20% faster overall at 1080p, which isn't that far off what other sites found back in 2011. Removing ARMA 3 and the inbuilt benchmarks would probably put the difference closer to 10% than 20%, which is around what most sites see today with the 2500K vs FX. There's more, but I'll put that into a video if I ever decide to do it.
Thank you! In all the benchmarks I have seen, when trying to show a CPU bottleneck, no one ever shows CPU usage... It's usually a really quick indication of whether the game or the CPU is holding things back.
Final edit: it turns out the visuals just don't match what he is saying - he is telling you the right info but sometimes showing something different. READ THIS!!!!! GaAAAAH. Adored, you made a mistake. When comparing the 8350 to the 2500K, you were using the most powerful CPU as the reference rather than the 2500K itself. The better the CPU at the top, the smaller the relative percentage difference will look between everything below it. An example: compared to the 3960X, the 2500K sat at about 90% and the 8350 at about 80%. You called this a 10% difference, but it actually isn't, not truly - because if the 3960X had been 10x faster than it was in those tests, the 2500K would be 9% as fast and the 8350 would be 8% as fast, a 1-point difference, even though the underlying framerates never changed. When you compared the later results, you used CPUs more powerful than the 3960X as the reference, so every percentage difference below them shrank relative to each other. I am not saying the 8350 didn't improve, but you misrepresent the improvement by switching reference CPUs, when you should stick to the 2500K as the reference every time. EDIT: with this in mind, the 8350 would actually be even further ahead at the end - the improvement was greater in reality than you showed.
Look at the % values when he hovers over - those are dynamic. Also, the values he mentions are not the ones shown in the video; they are just an indication of where those chips are positioned.
I didn't find it confusing at all. The chart is dynamically updated and the bars appeared to be in proper proportion. Since the origin remains at 0 it isn't a problem. It truly is a spectacular analysis.
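The baseline effect being argued about above is easy to demonstrate with invented numbers - the same two FPS results sit closer together on a percentage chart once a faster reference CPU is added:

    # Same two (invented) FPS results, re-expressed as % of two different
    # reference CPUs: a faster baseline compresses the apparent gap.
    fps = {"2500K": 90.0, "FX-8350": 80.0}

    for ref, ref_fps in (("slower reference", 100.0),
                         ("faster reference", 150.0)):
        pct = {k: 100 * v / ref_fps for k, v in fps.items()}
        gap = pct["2500K"] - pct["FX-8350"]
        print(f"{ref}: 2500K {pct['2500K']:.0f}%, "
              f"FX-8350 {pct['FX-8350']:.0f}%, gap {gap:.1f} points")

The absolute 10 fps difference never changes; only the yardstick does - the gap reads as 10.0 points against the slower reference and 6.7 against the faster one.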
Dear Adored, please read this article: www.pcper.com/reviews/Processors/AMD-Ryzen-and-Windows-10-Scheduler-No-Silver-Bullet - These guys from PCPer assure people that there is no problem with Windows 10 optimisation and that they found a real weakness in the Ryzen architecture: AMD's Infinity Fabric, the brand-new interconnect behind the Ryzen CPU that sits between Ryzen's two modules (4+4 cores). What do you think about that?
I think this video - ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-JbryPYcnscA.html - proves that PCPer are yet again wrong and spreading FUD about an AMD launch, as per usual.
And now AMD itself "doesn't see any problem" with the Windows 10 scheduler. What the hell is going on: community.amd.com/community/gaming/blog/2017/03/13/amd-ryzen-community-update?sf62107357=1
Watch their follow-up video; they contradict themselves enough that they aren't actually refuting anything. After a long argument based solely on the assumption that everyone was focused on SMT alone, rather than the actual theory and evidence that it's the cache and CCX awareness, they later concede those exact points towards the end of their video, as if it were a tangential or unrelated aside. It all came across as condescending postulation and ultimately rather contrary. It's not only reasonable to expect the scheduler to be cache-aware and schedule accordingly - it's already been done in other OSes, i.e. Linux, where AMD themselves have submitted kernel patches. It wasn't their theorizing and limited testing - they aren't wrong per se - it was the way they framed it; it was rather ad hominem to the actual discussion. +AdoredTV I believe that vid is from a guy on the Anandtech forums? They have an extremely thorough thread going - throwing Process Lasso at it, registry hacking, and messing with affinity - revealing behaviors which rather contradict PCPer's conclusions. Their argument in a nutshell was "it's not broken, it does treat cores and threads as cores and threads". Well, no shit - that was a day-one theory that evolved into the realisation that there was no rhyme or reason to the scheduler's thread allocation, and that it was clearly not cache-aware, which is the point we've been trying to make. Then they later state, well, maybe Windows should add that, "but that's adding a feature", "it's not broken"... Then the self-confessed "guru" goes on to postulate about how the scheduler probably has some kind of adaptive deep-learning behavior for assigning threads, which proved they have little idea what they are talking about. Infuriating, really.
Вадим Ерем Also, I don't know what planet you're from, but if you know anything about PR, AMD is not just going to point the finger at Microsoft or create unneeded tension. There have already been statements that they're working together to resolve issues - an admission that there are issues. Why does everything have to be binary and combative? At this point it's ridiculous. It's a new platform; denying it has issues, and claiming it's fully supported in the same video where you concede it could and probably should receive more support, is ludicrous, no? They didn't refute anything. Seriously, watch their video all the way through and you'll realize they are ultimately agreeing.
Mate! You are a superstar - your reviews make so much sense and you explain everything so well. Your research is immaculate. Can't wait to see your reviews of Ryzen; I know you will do a great job of putting it through its paces and explaining it really well. Just built my 1700X & GA-AX370-Gaming 5 kit last night with a fresh Windows install and matching 3200 Corsair LPX memory, and damn, you can really hear that processor crunching the numbers. I have only tested one game so far, Mafia 3 at 2560x1440; I was getting over 100 FPS on ultra settings, and we all know how glitchy Mafia 3 is, but the 1700X puts this game in its place and makes it run really smooth - and I haven't even started overclocking yet! I have upgraded from an FX 8320 to a superstar 1700X. Mind-blowing.
As always, people judge AMD, but they can't see that all companies want to work with Intel. I am happy that companies can now see that AMD has come back, and harder than ever. Good job AMD. Keep up the great work.
You say we can't use low resolution to test games, based on a chart (that is not yours) that has been disproven by Hardware Unboxed. It feels like everyone is pulling their own way in this Ryzen controversy... Don't get me wrong, I'm all for budget performance CPUs, but this all seems like a load of crap.