@@ShiroKage009 well yeah, if it's a super complex scene, maybe... maybe... but the renderers, afaik, are smart enough to swap things in and out of memory... dunno what extreme case would saturate a 4090...
Great video; these CPUs are getting insane these days, and the performance is out of this world, although one does question their highly limited use cases. Not sure how to feel about the ultra-high-end TRs. From the perspective of a professional engineer who uses things like CAD/FEA, the best takeaway from this video is that for the majority of users, even those doing rendering, video editing, or science and engineering work (e.g. CAD or *basic* CFD/FEA), the high-end consumer CPUs (7950X/13900K/14900K) are probably *good enough* for most use cases. If you work for a technology company (i.e. science/engineering/tech/AI), even a relatively small or medium one, chances are they'll have some sort of corporate alignment with cloud server/software providers, and for highly intensive tasks they'll typically rent capacity on an as-needed basis. I've seen this increase as the years have gone by: more and more companies pushing services that were historically run on self-managed servers/workstations out to the cloud. E.g. if you've got to run some *complex* CFD or FEA, most large corporate companies these days would rather temporarily rent server capacity with whatever is needed (multiple high-end CPUs, GPUs, memory, etc.), spin up the CFD/FEA instances, and perform the analysis in a day or two, as opposed to a week or two on a single high-core-count system (like the 96-core Threadripper). Heck, some of the CAD/CAM/FEA software providers even offer cloud-based services like this today, at the click of a button. Partially this move to the cloud stems from cash-flow and taxation benefits: companies aren't maintaining the systems themselves with a larger IT department, nor worrying about depreciation on expensive high-end workstations that idle for a large percentage of their life (e.g. when people go home at the end of the day...
if it's not crunching numbers overnight, it's costing you money, and the effective lifespan is pretty short given how fast technology evolves). Heck, these days companies even want to be able to scale their workforce up and down depending on whether they win specific contracts. Even the CAD/CAM/FEA software providers understand this, allowing *monthly* seat rentals so companies can adjust capacity as required, rather than a lump-sum payment for software that may only be used for a fraction of the year. While I think the "cheaper" TRs might sell okay (given the large price difference compared to something like the 96-core), I still can't help but feel that the top-end consumer (prosumer?) CPUs are now "good enough" to use for workstations. There will always be some niche use for the large TRs (i.e. 64/96 core), but I think they're few and far between. Now, I know someone will say something about the amazing amount of PCIe lane bandwidth, and how it's necessary for things like large data storage (almost like a NAS on 'roids), and you'd be right, they could handle that well... but any small/medium/large corporate business is just going to buy *server*-grade hardware (e.g. EPYC, Xeon Scalable, etc.) and not workstation hardware for that use case. Or they'll shift it to the cloud.
For some of the stuff like video editing, it's more convenient to do it locally on your own workstation because of the massive data transfers; as for CFD, you'd still be well off doing some of the test runs on your own machine before sending the production runs to the cluster or cloud.
The 7950X/13900K/14900K are all capped at 192GB of dual-channel memory. Threadripper supports 1TB quad-channel, and the Pro parts 2TB octa-channel; Xeons are 2TB and 4TB at quad and octa channels. There is a night-and-day difference in solve time between these platforms as you increase the number of memory channels AS WELL AS memory capacity. Yes, they come at a premium, and cloud solvers are an option, but software that provides a cloud option is not cheap either.
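A rough sketch of why channel count matters for solvers: theoretical peak bandwidth scales linearly with channels. The DDR5-4800 speed and 8-byte-per-channel bus width below are my assumptions for illustration, not figures from the comment:

```python
# Theoretical peak DDR5 bandwidth: transfer rate (MT/s) x 8 bytes per channel.
# DDR5-4800 is an assumed JEDEC speed common on these workstation platforms.
def peak_bandwidth_gbs(channels, mts=4800, bytes_per_transfer=8):
    return channels * mts * bytes_per_transfer / 1000  # GB/s

for name, ch in [("desktop (dual)", 2), ("Threadripper (quad)", 4), ("TR Pro / Xeon (octa)", 8)]:
    print(f"{name}: {peak_bandwidth_gbs(ch):.1f} GB/s")
# dual: 76.8 GB/s, quad: 153.6 GB/s, octa: 307.2 GB/s
```

Real solver throughput won't hit these peaks, but the relative scaling is why memory-bound FEA/CFD codes love the extra channels.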
I know an adequate comparison is asking a lot, but I'd have liked to see this compared with last-gen TR and Sapphire Rapids Xeon W. That said, I totally understand that not many people have those platforms lying around for testing, so while not optimal, this is appreciated.
Do you plan to cover Threadripper PRO 7000 as well? I was able to grab a 7985WX, though I can't do a thing with it because there are no WRX90 motherboards to be found.
According to rumors from a guy doing grey imports to Russia from Asian countries, WRX90 boards are expected to be available around Feb-March. And you can already get a Pro CPU alone. Waiting until March to get a 7975WX + mobo.
@@falsevacuum1988 Wow... I really hope February/March is not right. That's just crazy. Otherwise, going to be running a TR PRO on a TRX50 I guess... which mostly defeats the purpose of getting it.
@@13Cubed This guy said he can already ship me an ASRock TRX50 (to Moscow in around 12 days); ASUS will be available a few days later. Looks like WRX90 Pro boards are still in production/testing and only engineering samples exist.
At around 15:50, the LongGOP [EXT] result: how the flying F is the 5995WX's 102.6 only 1.33% more than the 7980X's 90.5? That's 13.37%, an elite performance improvement! I hope this is the only mistake; I'm too lazy to check them all.
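For what it's worth, the commenter's arithmetic checks out. A quick way to sanity-check any of these percentages (times taken from the comment):

```python
def pct_more(a, b):
    """How much larger a is than b, as a percentage."""
    return (a - b) / b * 100

# 5995WX: 102.6 s vs 7980X: 90.5 s
print(round(pct_more(102.6, 90.5), 2))  # 13.37, not 1.33
```

It looks like the on-screen figure may have simply dropped a digit.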
It is nice but too expensive for me, as I only use DaVinci Resolve, and I guess the (hopefully soon) coming 8950X will bring a big boost over my current score of 4000 points (with a 7950X), which is actually way more than what the 64-core CPU delivers. But I think with new drivers, DaVinci Resolve will get much faster with your current monster CPU. I noticed that 11 months ago I was getting only 3183 points in DR, while months later I'm getting over 4000 with the same hardware; so drivers and new versions of DR have a big impact, and when your CPU is that unstable in DR, it sounds like driver problems to me. I'd be glad if you could retest it in 5 months.
Funny that for the one personal use case where a high-core-count CPU really shines, rendering, you can just use a good GPU to do it faster and better. In my opinion, you really need specific use cases to need this type of CPU; other than that, you just want high-frequency cores.
I can't wait for the new 32-core Threadripper 7970x video. I have a feeling users with specific workloads like you mentioned (a lot of 3D rendering or huge amounts of Lightroom exporting) are going to drool over it. I think I am going to hold off on my 7950x build to see the results. It's looking pretty close to a dream machine for my workflow! Thanks for what you do!
Even if you decide the TR is not for you, I'd hold off on the 7950X, unless it's time-sensitive of course, because the 8950X will pretty surely be out in Q1, according to the tech-leak tubers. The current engineering-sample benchmark leaks are pretty promising.
I got a notification with a question. I can't see it, but let me answer anyway. Yes, the 8000 series on the new Zen 5 architecture will also be on the current AM5 platform. When the 7000 series (Zen 4) was released, AMD said they would run AM5 for a few generations, same as the previous AM4. People are hoping that means it will go up to the 9000 series too.
Comparing a monster like the Threadripper against 14th-gen Intel simply misses its intended purpose; they each have their own roles and specifications, and they're ordered by different kinds of companies, Adobe shops included.
For VFX studios, this is the real deal. Video and photo, not so much. Blender, Maya, Houdini, Nuke: they'll fly on Threadripper. Also, just some constructive criticism: can you please have charts and graphs next time to represent the benchmarks? It makes the differences a lot easier to understand.
It's not a Photoshop/Lightroom chip. It's for LARGE 3D SCENE RENDERING that exceeds your GPU's VRAM limit; also, CPU rendering doesn't crash as often as GPU rendering.
To be honest, I think the longevity of socket support is not that big an issue. I thought differently about that in the past and was hyping the longevity of the AM4 socket. But if you need an upgrade for more cores or more RAM, you'd probably trade in the whole base (mobo, CPU, RAM) and replace it. It's a different story selling a CPU in the $5K+ class versus the $500 class: if you add the mainboard and the RAM, it wouldn't make a big difference to you, and the buyer gets a combo that's already proven to work. In the consumer space you might be replacing a 6-core with an 8-core X3D, but if you have a basic mobo, you'd probably also want a decent mobo for the better CPU, and the same goes for RAM.
14:50 The number of (significant?) digits in your results doesn't seem to make sense: -4.7% vs -12.46%? Two digits vs four digits?! What's the error margin?
I don't know if Lenovo was dumping old 5945WX Threadripper Pros or if the motherboards in their ThinkStation P620 were outdated, but I bought one on sale last year for $1495. I am very happy with it.
This just goes to show how badly written Adobe software actually is. For years now, I have believed Adobe takes backhanders from Intel to keep the performance the way it is.
When GPU rendering is not an option, for example when you're out of VRAM and need to render on a CPU, or a simulation where GPU use is not possible. For video and photo editing, it's a very small % of creators who can make this useful for their workflow :)
@TiTechno you are reaaaally far from reality :D CPU rendering is still the industry standard (I can't tell for how long), but until somebody figures out how to fit a scene worth half a TB of RAM onto a GPU, pretty much every VFX and archviz studio will stick with CPU rendering.
@TiTechno lol, sure. Even if somebody uses a $50k card for machine learning in the CGI industry, it's still about 40 gigs of RAM short. Why do you think every GPU-based render engine is pushing "out of core" development?!
Oil exploration, Wall Street firms running financial simulations, universities running physics and chemistry simulations, medicine, and movie studios doing full path tracing, which RTX only partially accelerates. A lot of these, but not all, can be run in the cloud with AWS or Azure these days. Demand for these CPUs is going down but is not totally eliminated. For us, it's just bragging rights 😂. A regular Ryzen or 14700K would be better suited for Adobe stuff and games.
You have hardware for individuals and hardware for teams. Using a TR to render an individual workload would be a waste, as an individual won't produce enough to fully utilize what the CPU was made for. Meaning, the few seconds or minutes you saved go right back into the most time-consuming part: instead of getting a break while rendering, your render is now complete, so get back to work.
I don't expect Java Minecraft to last much longer, so the point might be moot, but now that AMD has mastered the multi-core processor, could they look at their single-core performance? One of those 32/64 cores should be capable of slapping the 14900K in the chops. I've been an AMD CPU fan since the Am386, and while it's great to have the cores when you're modelling the universe, it'd be nice to see AMD truly tackle single-core performance like Intel. Or have someone mention to Oracle: "Umm, Oracle, there is this thing called multicore, been around since the last century."
The first problem is running Windows on Threadripper. We have several workstations with these, including a brand new 96-core Pro. We also use Linux, which is far superior to Microsoft's offering. We're an engineering firm running AI/CAD for robotic equipment design. The Xeons can't even compete, or we would use them. We also have our own software engineers who can modify Linux to suit our needs. Can't do that with Windows.
Adobe software sucks at CPU scaling, which has me wondering if my planned upgrade from a 3900X to a 5950X to max out AM4 is worth it. Lightroom sucks. But it may be useful elsewhere.
I would love to know the performance in simulations like Phoenix FD and tyFlow: is it 3-4x the performance of the 14900K on those tasks? Anyone here have it and use these plugins?
The thing that rules out Threadripper for me is the limited memory slots that all of the board manufacturers are offering. I need lots of RAM, more so than cores, for UE5 work. So I skipped the TR and went with the Intel Xeon W7-2495X 24-core with 512GB, expandable to 2TB, and an RTX 3090. I plan on bumping up to 2TB as soon as the big DDR5 starts to show up. Threadripper is also always really expensive in Canada. Nice processors, though.
@@Teluric2 Unreal Engine 5. I do client support for massive world creation using UE5 and TerreSculptor 3. 128GB isn't enough; the 512GB I currently have is barely enough. I need 2TB or more.
@@calibrastorm Hi. The motherboard I'm using is the ASUS Pro WS W790-ACE. It has good features and 8 memory slots for up to 2TB of memory. No Wi-Fi and no Thunderbolt, though. Also, be careful what NVMe you use with it, as the QVL lists some drives that simply won't work in it. The ASUS also has a better PCIe slot layout than the ASRock W790; the ASRock has Wi-Fi and TB if you need those. The ASUS ACE is designed for the W-2400 series, and the ASUS SAGE is designed for the W-3400 series. The new AMD Threadripper 7000 parts are priced well in Canada, but their motherboards all have limited memory slots, so they're no good for me.
@@calibrastorm If you have any questions on the ASUS W790-ACE, let me know. Also, Wendell at Level1Techs has some videos on YouTube for the ASUS W790 boards. I went Intel/W790 mainly because of the 2TB/4TB memory allowance. The new AMD TR 7000 TRX50 motherboards only have four memory slots instead of eight, so typically half the memory. I'm still waiting for the 128GB and 256GB RDIMMs to show up here in Canada so that I can go up to 1TB or 2TB on my system.
I wouldn't call it a game changer when this CPU really only wins at 3D rendering and 7-Zip. The 14900K just completely demolishes it in everything else that's not 3D rendering, while being a fraction of the price. And you're not done with the $5000 CPU alone: you still need a $1000 motherboard, and the RAM kit AMD put in the reviewers' box is another $1300. It's about 7 times more expensive than a 14900K system while, best case, being only 2.5-3x as fast. Even in the best-case scenario, the Threadripper gets absolutely trashed on price/performance. The 7000-series Threadripper is so niche that it's basically dead in the water once the 8 people it actually makes sense for have bought their systems. The 24-core will be utterly ridiculous once you add the $2300 for board and memory on top: even best case it's barely faster than a 7950X or 14900K, and it also loses to the 14900K in everything but all-core rendering. I don't know what AMD wants all that money for. People who can afford server-grade workstations can buy EPYC and be done with it. The first Threadripper was a chunky HEDT platform for everyone; now Threadripper is a fancy render-farm item for corporate-level money only. Over at Moore's Law Is Dead, some AMD partners commented on the new Threadripper like "who cares about Threadripper?" MSI isn't even releasing motherboards for it, WRX90 boards come months late, and overall there are just 3 boards announced for TRX50, none of which are available at launch. Just fabulous.
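The price/performance claim can be sketched with the comment's own numbers; the 14900K system price below is my assumption for illustration, not a figure from the comment:

```python
# Threadripper platform cost, per the comment: $5,000 CPU + $1,000 board + $1,300 RAM.
tr_cost = 5000 + 1000 + 1300          # $7,300 total
i9_cost = 600 + 300 + 150             # assumed ~$1,050 for a 14900K + board + RAM
best_case_speedup = 3.0               # "2.5-3x as fast", taking the best case

cost_ratio = tr_cost / i9_cost
perf_per_dollar = best_case_speedup / cost_ratio
print(f"cost ratio: {cost_ratio:.1f}x, relative perf per dollar: {perf_per_dollar:.2f}")
# roughly 7x the cost for 3x the speed -> well under half the perf per dollar
```

Under these assumptions the "7 times more expensive, 2.5-3x as fast" claim holds together arithmetically.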
Haha, kind of ironic when people complain that Intel is power-hungry and inefficient when it comes to CPU performance... in reality, Intel is still ahead in most programs and benchmarks, even beating the AMD 7950X... The only advantage AMD has over Intel is 3D V-Cache.
@@mashirokobato5509 To be fair though, the top-end SKU, like the 14900K, is an absolute power hog and needs some tuning from stock settings. Still, it has some useful unique features AMD has no answer to, and LGA1700 has a more intelligent lane distribution thanks to the x8 DMI uplink.
@@sgredsch Indeed, no hate towards AMD CPUs; I love competition in the x86 world. But sadly AMD CPUs are nowhere near the performance Intel offers in the mainstream market. AMD's only advantages are V-Ray and 7-Zip (multicore rendering) plus the massive 3D V-Cache for gaming, which makes AMD a good choice if you favor multi-core performance. But I hope AMD lifts its game and adds more hardware-specific accelerators, as Intel and Apple did with their CPUs.
Nice and informative video. I'm curious about the maximum temperature of the 7980X during rendering at 100% load. In 3ds Max it goes up to 93 degrees with a 360mm Thermaltake cooler; is this normal?
Great vid, but you should add real-world tests for the photo editing department. How much less time would a Threadripper take compared to the i9 when exporting 30-50k images (with slight edits) from RAW into very highly compressed JPGs? Or how much less time would AI software like Aftershoot take to process an album of, say, 50-70k photos? Believe it or not, I shoot weddings and come home with 70-80k shots, and it's really frustrating on my older PC to wait days for it to process everything. I'm looking for an upgrade and wouldn't mind spending a little extra for the 32-core Threadripper if it delivers in the areas I mentioned above. My Intel can't even play a 1080p YT video while exporting from LR or processing in Aftershoot; I can't really use my PC for days! The i9 seems like a good upgrade, but please test the Threadripper in these departments.
These numbers are just like I predicted, and I knew very well why, which is why I've been trying to get the most balanced one; I think it's worth way more than the most expensive one, because the more expensive ones have a great flaw.
So if I find fast 5000-series Threadrippers for a third of their original price, is that a better deal? Just for game-development stability, not the artistry side.
He does come across as Intel-biased, IMO. Not that I truly care; my main work PC is a 10th-gen Intel, so I really don't have a dog in the AMD/Intel fight.
6 PCIe gen 5 x16 slots = 12 PCIe gen 4 x16 links (with splitter boxes), and the one gen 5 x8 = 1 more PCIe 4 x16 link. So up to 13 4090s are possible with that AMD. The PCIe lanes are the only reason; makes it a killer workstation for Blender, heh.
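A sketch of that lane math. The per-lane rates are the nominal PCIe figures after encoding overhead, and the splitter setup is the commenter's scenario, not something from the video:

```python
# Nominal per-lane throughput in GB/s (after 128b/130b encoding overhead)
GEN4_PER_LANE = 1.969
GEN5_PER_LANE = 3.938  # PCIe 5.0 doubles the 4.0 per-lane rate

gen5_x16 = 16 * GEN5_PER_LANE        # bandwidth of one gen5 x16 slot
gen4_x16 = 16 * GEN4_PER_LANE        # bandwidth a PCIe 4.0 card like a 4090 can use

# 6 gen5 x16 slots, each split into two gen4 x16 links, plus one gen5 x8 link
cards_from_x16_slots = 6 * round(gen5_x16 / gen4_x16)   # 6 * 2 = 12
cards_from_x8 = round((8 * GEN5_PER_LANE) / gen4_x16)   # gen5 x8 == gen4 x16 -> 1
print(cards_from_x16_slots + cards_from_x8)  # 13
```

The doubling per generation is what makes the "one gen5 x16 feeds two gen4 cards at full speed" trick work, given suitable bifurcation hardware.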
You should probably benchmark fluid/physics sims for CPUs instead of simply rendering 3D scenes. There are scant fringe cases where a creator would do CPU rendering on a personal computer, since an RTX 3060 would probably curbstomp a Threadripper 9 times out of 10.
It would be good if you could review Blender build times before the actual rendering (GPU rendering, CPU preparation). Sometimes the build time can take up to a few minutes depending on the project, with just a few seconds to render (I know, optimization), but still... It would be very helpful to know which configuration takes the least time to build (for example the Blender Barbershop scene). ☢⚠☣
It is sad that Intel has forgotten the Content Creator customer. 20 lanes of PCIe will get you a GPU and 10Gb/s Ethernet. I have 2 Blackmagic Design 4K capture cards and 2 Avid Audio cards, which is not that much hardware when making livestream content. Sadly I cannot go back to Intel for they have created a choke point in their designs.
I always bought the better platforms, not mostly for the threads (I usually buy 10-12 cores, it's fine) but for the lanes: all the Wi-Fi adapters, SSD converters, oscilloscope, radio receivers I could want. I guess you could USB it all (well, most anyway), but fit too much and the standard consumer boards can bluescreen often, even though without that stuff they run fine when stressed to the max. Idk: USB root issues? Drivers? Too much power draw from the ports?
I run 4 DIMMs at 7600MT/s on my 13900K. I had to get a proper Hynix A-die 4-DIMM DDR5-6600 kit and do a bunch of tinkering, but it runs like a monster, just chews through data, and it's stable, at least stable enough. I get an error between 12 and 18 hours into y-cruncher, idk why no matter what I do 🤷♂️, but I've never had it crash during use, even while gaming, streaming, recording, downloading a game, and encoding a video all at the same time.
Hey, love your videos; I watch every time you upload. It's not the rendering aspect of a Threadripper that's needed for Blender, it's the simulation/physics and animation. Blender doesn't utilize the GPU for that.
I feel like none of the software shown is what this TR is made for. It is definitely a niche. We are ordering the 24- and 32-core versions. But I would maybe also try tests in Linux, as its scheduler could work better with that many cores.
I went in big in 1993. I got a Pentium 90 (90MHz), with amazing 600 nanometer technology. Well, I could hardly skimp on the cpu when I had a huge 1GB scsi. The best thing to come of it was learning never pay through the nose for cutting edge top-tier gear - coz there's something just around the corner that's going to make it next to worthless :)
A Xeon would be a fairer comparison, lol. I would pick Intel for my virtual machines and games for home stuff. Threadripper is a remarketed workstation/server CPU. For a server, compiling C++ code, or scientific simulations, I would pick a Threadripper. For games, even an 11600K can easily outdo the Threadripper.
Creators don't even need such specialized CPUs. The high-end desktop line is plenty for video productivity. Creators make up impractical test benchmarks to justify buying ridiculously expensive hardware they don't need. Clever ones write it off on their taxes; however, many people get caught out buying these processors for creative workloads. Also, these Threadrippers are a niche product with less support than the mainstream Ryzen brand. So you are buying power you cannot utilize, with a worse support package, in an order-of-magnitude more expensive workstation. That is certainly a winning move! Also, Tech Notice itself could not justify this CPU for creators when asked in the comments; his response was to use CPU rendering of videos. Like, when is that ever a viable alternative to the infinitely superior hardware-accelerated rendering of a GPU?
The issue, and Moore's Law Is Dead made a really good point about this, is that what most creators want is HEDT-lite: more lanes than normal for storage etc., but not as much as full HEDT.
@@4566Iggy I really dislike that guy; however, even I disagree with him on that point. While high-end mainstream processors lack lanes, their motherboards still offer immense storage potential. There are specialized creator motherboards like the X670E Taichi or the higher-end X670E ProArt Creator. The difference is that those lanes are shared, but it's more than good enough for any practical creator use.
With every PC used to make money it is simple: does it pay for itself within a certain time? If yes, it is worth it. And those Adobe scores? That is exactly one of the two reasons I don't use them anymore: buggy, slow, old code, still optimized for 4 to 8 cores. The other reason is the subscription 😂
I don't care much about numbers... I just need to know one thing: how is the REAL-LIFE use in Premiere Pro :P scrubbing and such, 4K XF-AVC from a Canon C70 and R5 C... :D vs the Ryzen 7000 series vs an Intel i9-13900KS or 14900KS?
@@unimpressively_charming But I have the 13900KS and it does the same, as I understand it. If Premiere Pro supported more cores and multicore better, it could be pretty badass with that new AMD Threadripper.
For me, for example, the truth is that when I started choosing one of AMD's good CPUs, I saw this and it really hurt: take 64 cores at 5.0 GHz compared to 128 cores at 3.2 GHz, who wins? Even the price difference is stunning. I believe the 64-core at 5.0 GHz wins, plus it boosts better for gaming. A person choosing a good CPU might see the 64-core but really only want 24 cores; then they don't know what to choose and feel obliged to go with the 64-core 5.0 GHz part and pay for more than they need.
You should be comparing Threadripper to Intel's Sapphire Rapids workstation CPUs and NOT to the desktop CPUs. The price ranges are nowhere near the desktop junk, and it's not worthwhile to compare them, especially when the desktop parts don't even have quad-channel memory support. It's sad to see all these YouTubers using free Threadripper CPUs from AMD and not comparing them to Intel workstation CPUs. I'm learning who to unsub from, that's for sure.
I'm thinking of upgrading and buying a new GPU. I saw the "ZOTAC GAMING GeForce RTX 3070 Ti 8GB GDDR6X 256-bit 19 Gbps PCIE 4.0 Gaming Graphics Card, IceStorm 2.0 Advanced Cooling, SPECTRA 2.0 RGB Lighting, ZT-A30710Q-10P" (for $400) on a good discount (it is not the Trinity), but I can't find a good review of this card. How do Zotac and its fans hold up; does it cool well enough? Also, is it worth it, or better to go with something else? People are also saying the fans are as loud as a jet engine.
You're better off with the 4060 Ti 16GB, and even that isn't super amazing value. I'm sorry, but the 3070 Ti is a horrible purchase decision for gamers and creators alike.
You can make eight 8-core remote virtual workstations out of a 64-core proc. It's not worth it for a single user meddling with Premiere; it's worth it for larger teams, and for medium/small offices with a matching number of workstations. On a bigger scale, one should just go with EPYCs.
You state that there is no competition for Threadripper. Not at the top end, for sure, but at lower core counts there are the Xeon W-2400 and Xeon W-3400 CPUs. By neglecting them you made this video completely useless.
It's very economical: a cheap supercomputer from just 12 years ago would cost millions of dollars. My niece does medical research with several terabytes of neurology data, with the hospital having a budget of $100k per year for equipment. This is not for playing Minecraft; these are designed for large data sets running in parallel. Wall Street runs simulations for financial monitoring and is retiring its supercomputers; these are cheap in comparison. But for fps? Stick with a 7800X3D or 13900K for that.
@@timothygibney159 Hmm, I'm thinking 10%+ of your tech budget on a computer may be fine if next year you get another $100,000, or if that money keeps rolling in. But it seems to me they should look at distributing that compute load, and perhaps rewrite (or write) the software to distribute compute to GPUs. Depending on the workload, there may be several orders of magnitude to be found there, even faster than a CPU with 128 cores.
@@gdotone1 The cloud, for sure, depending on the workload. But GPUs can be many multitudes (100x) slower than a real desktop CPU for some jobs, because they are specialized to do tiny independent tasks in parallel across 10,000 CUDA cores. If you have a large SQL database, or math where data depends on other variables, it won't work: one variable is on one CUDA core while the next set of data is on another, so it has to wait and sync before it can do a calculation. The Infinity Fabric on a Threadripper is perfect for that, as are its insane cache and memory bandwidth. Gaming just has triangles with small data points that can be synced to each core. CUDA cores are crippled: they mostly do 32-bit math and can't handle long decimals well. CUDA is great for my niece's work with learning models, after the data is computed, where it can do iterations; but the limited RAM hurts too. Threadrippers are remarketed server EPYC chips and great for server loads. A niche market and totally different, but still cool.
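The data-dependency point can be illustrated with a toy recurrence: each step needs the previous result, so it can't be spread across thousands of cores, unlike an independent per-element map. This is a pure-Python sketch of the general idea, not the niece's actual workload:

```python
# Independent work: each element stands alone -> trivially parallel (GPU-friendly)
independent = [x * x for x in range(8)]

# Loop-carried dependency: step i needs step i-1 -> inherently serial (CPU-friendly)
def recurrence(n, seed=1.0):
    acc = seed
    out = []
    for _ in range(n):
        acc = 0.5 * acc + 1.0   # each value depends on the one before it
        out.append(acc)
    return out

print(recurrence(4))  # [1.5, 1.75, 1.875, 1.9375]
```

The first list could be split across any number of workers with no communication; the second forces every worker to wait for the previous value, which is exactly the sync-and-wait penalty the comment describes.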
@@fios4528 Sure, but not like this next one. Why be smug and satisfied when you're almost sure that in 4 months things will be significantly different? Wait to be sure and make an informed choice instead of shooting in the dark.
Creators don't need a 128-core CPU 😂😂😂 They need Apple's graphics accelerators 😂😂😂 Creators aren't hungry for CPU compute. They need faster GPUs and accelerators built specifically for content creation.
These are for people like my niece, a research scientist who works on large, complex medical data sets in Java and Python with C++ modules. Nvidia gimped their GPUs to only do 32-bit math with no large decimal precision. Her data can't scale to 10k CUDA cores because it wouldn't fit and would be bottlenecked pulling data from caches on different CUDA cores. The Infinity Fabric, the large cache, and 64-bit integer and floating-point support from real cores can handle what she does. CUDA cores are great for doing small stuff in parallel where data items don't depend on each other; they suck for anything else. These are niche: for universities, medical research, NASA, and some Hollywood graphics shops. Not for gamers or geeks.