Also, about the Ti in the GPU, NVIDIA pronounces it both ways. Jensen, the CEO, pronounces it as T-I (Tee-eye), while Jeff Fisher pronounces it as Ti (Tie/Ty)
I think the culprit is that Win32 API function: GetLogicalProcessorInformation only supports up to 64 processing units, due to using only a 64-bit flag value for each CPU. GetLogicalProcessorInformationEx is the more modern one.
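A rough illustration in Python (not the actual Win32 API) of why a single pointer-sized affinity bitmask tops out at 64 logical processors; the Ex variant adds processor groups to get past this ceiling:

```python
# Legacy GetLogicalProcessorInformation reports CPU membership as one
# pointer-sized bitmask: one bit per logical processor, so on 64-bit
# Windows it can describe at most 64 CPUs.
MASK_BITS = 64

def cpus_in_mask(mask: int) -> list[int]:
    """Decode an affinity-style bitmask into logical CPU indices."""
    return [i for i in range(MASK_BITS) if mask & (1 << i)]

# Every bit set: the mask names exactly 64 CPUs and no more.
assert len(cpus_in_mask((1 << 64) - 1)) == 64
```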
One thing on your RAM vs core discussion: L3 cache requirements scale non-linearly with core counts thanks to the increased incidence of L2 cache misses.
That's why the chip architecture is critical with more and more cores. AMD, Intel, and Ampere all seem to take slightly different approaches. I've enjoyed some of the ChipsandCheese articles on these new architectures!
@@shanent5793 It's non-linear and there's an exact formula. Let's say you have a 5% chance of a cache miss per core, so a 95% chance of a cache hit. The percentage chance of a cache miss with N cores is (1 - 0.95^N) * 100. Obviously the chance of a miss, that 5%, is dependent upon the workload. The more misses you have, the greater the pressure, and the fewer RAM channels you have, the greater the effect of L3 cache misses.
@@QuentinStephens That's just the chance of at least one miss. Multiple misses follow a binomial distribution, so the expected total converges to linear: 128 cores are expected to have twice as many misses in total as 64 cores. Either way, more cores cause more L3 pressure, so why does the Ampere have only 16MB, which is less than desktop CPUs with only 6 cores / 12 threads?
@@shanent5793 I'm not sure you're correct about the binomiality but yes, I do agree that the 16 MB cache does seem rather low, especially when we have Epyc CPUs with 1 GB cache for similar numbers of cores.
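The disagreement above is easy to settle numerically; a quick sketch, with the 5% per-core miss probability purely illustrative:

```python
# Illustrative numbers only: assume each of N cores independently has
# a 5% chance of an L3-bound miss in some time window.
p = 0.05

def p_at_least_one_miss(n: int) -> float:
    # The first comment's formula: chance that *any* core misses.
    # This saturates toward 1.0 as n grows (non-linear).
    return 1 - (1 - p) ** n

def expected_misses(n: int) -> float:
    # The reply's point: total misses are Binomial(n, p), whose
    # expectation n*p is exactly linear in core count.
    return n * p

# 128 cores generate twice the expected misses of 64 cores.
assert expected_misses(128) == 2 * expected_misses(64)
```

Both formulas are correct; they just answer different questions, and it's the linear expected total that drives L3 pressure.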
Honestly, I wouldn't be at all surprised if Valve told us tomorrow that they're releasing a fork of Box86 and Box64 built right into Steam, to support all Steam games on ARM and RISC-V. Valve would be insane enough to do this, and there's no number 3, so it's allowed.
I don't know; CodeWeavers contacted Valve about built-in support for CrossOver in macOS Steam and they still haven't done anything about it (source: I contacted CodeWeavers myself, and they said they did pitch the idea and that it's up to Valve)
@@KingVulpes Because Valve's primary focus is on Linux, not macOS. Another thing is that CrossOver is a paid product; I find it highly unlikely that CodeWeavers was interested in just providing it to Valve for free without getting a cut, and that's probably why Valve wasn't interested. Providing x86 emulation for ARM, however, could directly benefit Valve, as it would allow for future low-power-draw devices, although I'm not holding my breath.
We're finally returning to the RAM situation we had a decade ago, when workstation motherboards had lots of RAM slots. My (now very old) Supermicro X8DAH+-F board has 18 (9 per CPU). IMO, the biggest problem with modern processors is the extremely limited number of PCIe lanes available. Look at chip specs over the years, and it's something that has steadily decreased. With Thunderbolt and NVMe, PCIe lanes are the most limiting feature on all my computers - even laptops.
Yeah; I have run into that on my Ryzen 7000 series desktop, there are few motherboards that even expose the lanes in a way I can fully utilize them :( The nice thing with this Ampere chip is it has 128 lanes, and almost all are usable on this motherboard! Still always want more, for more IO :)
Still running a 4790K on my seedbox due to this. I haven't found a non-server mobo with 10 onboard SATA ports for spinning drives since that generation, for any other CPU I've bought.
128 PCIe 4.0 lanes is plenty; that's ~512 GB/s of aggregate bandwidth, more than enough to saturate 6 channels of DDR4-3200 with only 154 GB/s of half-duplex bandwidth. It's up to the motherboard or backplane designer to allocate them.
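The arithmetic behind those figures, as a quick sanity check (peak spec-sheet numbers, assumed, ignoring protocol overheads):

```python
# PCIe 4.0: ~1.97 GB/s per lane, per direction (16 GT/s, 128b/130b encoding)
PCIE4_GBPS_PER_LANE = 1.969
lanes = 128
pcie_one_way = lanes * PCIE4_GBPS_PER_LANE   # ~252 GB/s each direction
pcie_full_duplex = 2 * pcie_one_way          # ~504 GB/s aggregate

# DDR4-3200: 3200 MT/s * 8 bytes per channel = 25.6 GB/s
DDR4_3200_GBPS_PER_CHANNEL = 25.6
channels = 6
ram_bw = channels * DDR4_3200_GBPS_PER_CHANNEL   # ~154 GB/s, half duplex

# Even one direction of PCIe traffic can outrun the RAM.
assert pcie_one_way > ram_bw
```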
The issue is that the PCIe lanes are used for M.2 slots and other onboard functions that didn't exist on boards 12 years ago. Back then those PCIe lanes mostly went to actual PCI slots.
@@arof7605 My 4790K was a beast: even though I was never able to overclock it, it ran my main computer for over half a decade, and its core performance was *never* the bottleneck. But I'm surprised you're still using it; how do you live with a mere 32GB of RAM? (asked half-jokingly)
3 grand for a 128 core CPU. I remember when Intel used to charge 5 grand for a quad core server. Lol, what an exciting time to be alive. I will buy one in a few years when it's stable and on the used market for a reasonable price.
@@DeltaSierra426 Those limitations are definitely unfortunate. Making a powerful CPU by making it bigger with a bigger socket? Easy. Making a powerful GPU by making it bigger with a bigger socket? Easy. Even if we don't improve the technology, we can add more and make it bigger. But then... Games: I'll use 1/128 your CPU and 1/3 of your GPU.
@@dzello I think making a program that can use the potential of this hardware isn't that hard; it's just that people don't usually do it. With time, and more and more complex software, this extra horsepower might be needed... though there's indeed a limit for consumer-grade applications, and crossing that limit is just being inefficient or lazy with your code.
RISC-V is jumping into the fray. I'm looking forward to getting my 64-core dev board in December. I am so happy to have this level of competition in the market again.
Thermal paste: forget what LTT says, it's a physical junction that transfers heat, and the larger the contact area, the more heat can move across it. So you are completely right to spread the thermal paste out. Physics!
Something to note about NVIDIA's ARM binary drivers: they have driver library files for x86-64 and aarch64, but they don't have armhf driver libs for software running under box86. That is, box86 translates 32-bit x86 into 32-bit ARM, not into 64-bit ARM. For i386 games, you'd likely need to use an AMD GPU: Polaris (RX 5xx) or older. One game I find very useful for checking the performance of GPUs on ARM is Veloren. It uses Metal on macOS, Vulkan on Linux, and Vulkan or DX12 on Windows (though there's no ARM Windows build).
This is so fucking sick man, I love the development that ARM desktop / server cores have been making! I know we have other Architectures as well (RISC-V) and it's awesome that they're all making strides, but to see this amount of progress now? Fuck yeah! I remember watching your older videos where you literally couldn't detect the GPU or even push anything out to the frame buffer, but now look at it :D
The only downside of Microcenter is that Brentwood Promenade is a post-apocalyptic hellscape of a parking lot. The only saving grace is that there is a super secret way there that allows you to skip the bulk of the minivan wars.
Finally, someone who prespreads their thermal compound! 😃 I've just seen so many people leave it to squish itself, but I learned from my dad, who worked with computers for 20+ years, that prespreading is better.
I admire it so much that you are able to work around such unusual circumstances. I can't even get a Linux graphics driver fully working in an ideal setup.
@@robkam643400 I understand why you would want to buy an AMD GPU for Linux, but what's the point of swapping an Intel CPU for an AMD one? (Unless you mean the Intel ME, but that works the same under Windows.)
Hey Jeff, running the Bedrock edition, especially the mobile version as you did, is far too easy a challenge for your rig. I suggest running and comparing the latest Java version and a specific modded version: Fabulously Optimized. To get any architecture incompatibilities out of the way, consider using a launcher that comes as a JAR file, such as the Technic launcher. Make sure to use the latest JRE (20-21) and set the proper JVM flags. Additional bonuses: shaders, a resource pack with parallax mapping + Physics Mod Pro (then grab a fire extinguisher). Looking forward to hearing from you!
1:47 LOVE the "18 minute pickup" at Microcenter. I've built both of my kids' gaming towers by picking out the parts, hitting "buy" and driving right over. Even picked up Dell XPS 13s for each of them the same way.
Really cool. Btw Jeff, I found a way easier way to connect LTE modems: ModemManager and NetworkManager. No need to install the QMI libraries; they're already in Debian 12.
Nice. They really make amazing stuff. Too bad I can't afford it. Would love an Ampere workstation so much. But I'm happy with RK3588 and my pc when I need it.
There's something poetic about having a build this absurdly overpowered and being able to play Minecraft and not much else. Hopefully ARM will get some love due to the Copilot+ thing (however horrible it might be).
11:00 I think this has been a problem in Cinebench since its inception. Originally it was only an issue for very niche 4- and 8-socket systems, but with EPYC, Threadripper, and Xeon Platinum (Cascade Lake) offering up to 56-64 cores per socket and 2-4 sockets, many cores started going unused in and after 2019.
@@lucasrem We're not talking about performance, only core count. IIRC, when I built my 12-core dual-processor Xeon X5690 desktop, the current-at-the-time version of Cinebench only supported 16 threads, not 24.
It sounds like the real bottleneck here is DDR5 support. Which is supported by the upcoming Ampere revision. Which is even faster. This is a surprisingly effective workstation for a development board, and further software support should improve it even further. I could see Blackmagic integrating one with a pile of their PCIe cards to build a behemoth video switching workstation capable of real-time effects - and driver support is a lot easier when you make the cards!
At 2:43, Ubuntu and Windows for ARM... did you try any other Linux distro? Just curious on that. I had been coming down here to suggest ChimeraOS because it runs Steam very well, but then I remembered it may not have an ARM flavor... if it does, that might be a good way to go!! Manjaro apparently has the ability to act like SteamOS, since both of them are based on Arch Linux. Hope you have an excellent day!!
Upgraded my laptop's monitor to 4K, and with 100% scaling I can read the text on your screen at 0:55. With any other scaling the text becomes blurrier, and if I right-click on the video and click Stats for Nerds, the resolution of the viewport changes with the scaling. Also, on my Win 10 laptop I can't just hover over the speaker icon in the taskbar and scroll to change the volume, which Win 11 can. Sorry for the unrelated comment, but hey, good to see you are doing well and are in good health!
Nice video! Almost getting one myself! Is it the 2.8 GHz version of the CPU that Ampere will ship? Regarding the Mac, let's not forget the M2 Max and M3 Max have tremendous memory bandwidth, 400 GB/s, quite a bit more than a DDR4 system, I believe. That may make them faster on memory-bandwidth-limited problems, such as several types of simulations with a low flops-per-byte ratio. AmpereOne has DDR5 memory support; however, I have not seen it easily available like this CPU is. With only 3 out of 4 memory channels being connected, maybe the 96-core version is a "better fit", as the amount of bandwidth per core will be quite a bit better, for anything bandwidth-sensitive, that is.
1.3 TFLOPS is at least double what the RTX 4070 Ti can do. The CPU can also access much more memory with lower latency, so there's no comparison. "Ti" is an abbreviation of "Titan", so it's pronounced like the first syllable. Titan never made sense anyway, because the Titans lost to the Olympians, so Ti was just a face-saving compromise. The company is still named after one of the seven deadly sins, which shows they can't let go of something that sounds cool.
Are you sure you're not just citing the abysmal FP64 performance of the card? A 4070 Ti really ought to do much better than 1.3 in single precision, a.k.a. FP32. The CPU would also be faster, but only by a factor of 2x; I would expect at least ~6 TFLOPS from the 4070 Ti in FP32.
@@TheBackyardChemist Compute TFLOPS figures are traditionally FP64. Ada has a 1:64 FP64:FP32 ratio, so the GPU is around 40 TFLOPS FP32. You could emulate 64-bit math and get the ratio down to 1:4, but it's not IEEE-compliant. CPUs should be 1:2, but they run into power limits and drop the clock speed if it's too much work. Of course, these are all peak theoretical figures; branching code and sparse access won't allow the GPU to reach its maximum performance.
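Putting the thread's ratios together (spec-sheet peak figures, assumed), the numbers are self-consistent:

```python
# RTX 4070 Ti: roughly 40 TFLOPS peak FP32 (spec-sheet figure)
fp32_tflops = 40.0
# Ada's FP64 throughput is 1/64 of FP32
fp64_tflops = fp32_tflops / 64        # = 0.625 TFLOPS FP64

# So the 1.3 TFLOPS CPU figure upthread really is about double
# the GPU's FP64 peak, as the first comment claimed.
cpu_tflops = 1.3
assert cpu_tflops / fp64_tflops > 2
```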
Microcenter would make an absolute killing in my area, but they'll never consider opening a store in Alaska. Their only competition in my area would be Walmart, and a 45-minute drive to Best Buy.
You are very wrong about Blender: all you had to do was install the compatibility layer from the MS Store, lol. I use Blender a lot on Windows on Arm devices; if you can do that, please retest the results for a future video. Sincerely yours, Greg
I just saw this, but Minecraft Java runs on ARM, with shader support: just use Prism Launcher, and you'll need to install a specific ARM JDK version for the version of Minecraft you intend to run. The entire process is identical to using Prism Launcher on any OS with any CPU, and there are many guides on how to do it. I highly recommend, when you create an instance, going straight to the mods page rather than choosing the vanilla game, and searching for "Fabulously Optimized" in the mod search menu. Then you'll want to install Iris, go to the resource packs tab, and search for shaders.
Amazing! I would totally have an Arm CPU in a desktop! Imagine the possibilities. Unfortunately, I don't think access will be so easy ($); maybe in 15 to 20 years?
Wow, YouTube actually ate my comment. Firefox wasn't using the GPU for video decoding, as the NVIDIA drivers don't implement VA-API. To get it to work you'd need to use my nvidia-vaapi-driver.
Ah. So the CPU is just good enough to chew through 4K then? Sheesh. It's not much faster than a midrange SBC... And shame on YouTube for eating your comments. I'm always happy to have authentic elFarto comments!
@@JeffGeerling Very cool, I'll be watching for it. I almost bought one of these systems back when you showcased it but now that I've also become one of the masses without a job I'll hang onto the pennies and live vicariously through you!
Could you try spinning up hundreds (or thousands?!?) of Docker containers with Kubernetes? With all those CPU cores it's gotta be really fast to ramp up the instances.
That's part of what it's for. However, as Jeff was saying, you do run into memory bandwidth limitations, meaning you can't expect a linear performance curve based on the number of cores you have. My expectation is that a lot of customers running off-the-shelf applications will probably benefit more from the lower-core SKU, but if you design your application around the server, the 128-core will probably be worth it.
@@Megabean I used to have a workload that was shared-nothing, buffered data by thread, was computationally heavy, had fairly small unit sizes of data (commonly
@@thewiirocks That's cool; sadly I don't have enough background to fully understand. I do 3D rendering, though; I use a Java application called Chunky, a voxel-type renderer (might be using the wrong term) that does photorealistic rendering. I've been able to saturate my server with it, with 64 cores and 128 threads. Idk how much memory plays into it, beyond it using every bit of memory you allocate to it.
@@Megabean The biggest thing you need to consider for memory is how long you're keeping data in the L1 and L2 caches, and whether or not you're unnecessarily evicting data and then asking for it back.

A very common pattern in modern software is to perform one operation at a time (e.g. an addition of two values) across a large collection of data, looping over the data separately for each operation. This is _terrible_ for the cache, as the CPU is forced to evict each record to make room for the next record in the collection, thereby reducing your throughput to the memory bandwidth and making your caches useless. This can be hard to detect, as test data sets tend to be small enough to fit within L3. It's only once the data sizes are scaled up that the true limits of the memory bandwidth are hit. Worse yet, the CPU will look busy to the operating system even though it's spending most of its time doing nothing.

What you really want to do is bring in a record of data, perform all the operations you possibly can on it, then be done with it for that computational cycle. That maximizes the amount of time a record can be held in the CPU caches. If done correctly, you may be able to operate entirely out of the L1 cache, which can easily provide an order-of-magnitude performance improvement.
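The two access patterns described above can be sketched in a toy Python example (the cache effect itself is far more visible in C with large arrays, but the loop structure is the point):

```python
data = list(range(100_000))

# Pattern 1: one operation per pass. Three full sweeps over the data,
# so each element is evicted from cache and re-fetched twice.
def multi_pass(xs):
    ys = [x * 2 for x in xs]   # sweep 1: double
    ys = [y + 1 for y in ys]   # sweep 2: increment
    return [y * y for y in ys] # sweep 3: square

# Pattern 2: all operations per element in a single sweep, so each
# value is fully processed while it is still hot in cache.
def fused(xs):
    return [(x * 2 + 1) ** 2 for x in xs]

# Same results either way; only the memory traffic differs.
assert multi_pass(data) == fused(data)
```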
The only problem with Microcenter is the lack of Microcenters; many folks are still stuck driving 3+ hours to get to one, which is a hard sell versus sitting on our bums at home and ordering like *snap*. Hopefully they ramp up their pace of opening new stores across the country, but only time will tell.
@@lucasrem I despise the hustle and bustle of big city life. Less stress, better air quality, big yards (some have creeks, woods, etc. for hunting, fishing, recreation, gardening/farming, and so on), and in our area, a local telco invested in fiber-to-the-home, so tech life is still good; it doesn't take much longer these days for etail shipping to these areas. I actually live in a city of about 40,000. This provides the benefits of nearby shopping, food, jobs, etc. without being a monstrosity. There are still plenty of big cities across the U.S. that don't have a Microcenter; I stand by my statement that I'd like to see them expand more quickly. Even a place like Austin, TX -- "the new Silicon Valley" (though they call it Silicon Hills) -- doesn't have a Microcenter, and requires driving 4 hours to Dallas or 3 hours to Houston to get to one.
@@DeltaSierra426 expanding too quickly can wreck a business. Perhaps they need to set up a good shipping process that beats Amazon, such as the ability to schedule the delivery in a 2 hour time window. I'm in Canada, never been into one of their stores.
If you used the black plastic part of the ram packaging to spread the thermal paste then you did it wrong. You're supposed to use the *clear* part of the package!
To put your Cinebench 2024 score into x86 context: an Intel i9-13900KS at 5.6 GHz scores 2379 multi and 142 single, while an AMD 7950X3D at 4.5 GHz scores 1829 multi and 111 single. Single-core is obviously lacking on that 128-core, but the multicore sure ain't.
It's one of those "well one ain't good enough, let's just throw ALL THE CORES in there" problems :) I really want to see the single core specs on AmpereOne. Or see Apple create a 128 core monster M2 Ultra Ultra Supreme :D
@@JeffGeerling The upcoming revision supports DDR5. Assuming twice the memory bandwidth and adequate driver support, perhaps a 4K+ Cinebench score is in the cards?
Complaining that you can't use a dedicated/separate GPU with Apple Silicon while being OK with the fact that you can't use the same dedicated/separate GPU with Arm Windows is some pretty fun mental gymnastics. It seems neither Apple nor Windows can support PCIe GPUs on ARM, but both can on x86... wow, it's almost like they're similar or something.
Fixed the video title for you: "Move over, Mac Pro! I successfully did nothing on this new machine." These new Ampere machines look great for power-efficient servers and budget dev machines. But it looks like that's it for the moment -- mostly due to driver issues. The hardware seems great, but the software isn't ready for consumers yet. At this rate, I wouldn't be surprised to see another paradigm shift take place before that happens.
Bandwidth not keeping up with compute power has long been an issue. One amusing statistic is that standard floppy disks are faster than a typical NVMe drive (relative to capacity). You can read a 1440 KB floppy in about 45 seconds, but reading a Samsung 990 PRO 2TB end to end will take over four and a half minutes. Even the IOPS per megabyte is a bit faster on the floppy: with a slow step rate of 8 ms you'd have a worst case of 840 ms access time, or 0.82 IOPS/MB. The 990 PRO is 1.4 million IOPS best case, which comes out to 0.7 IOPS/MB.
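The arithmetic above checks out; a quick back-of-envelope reproduction (spec-sheet peak figures, assumed):

```python
# Floppy: 1440 KB read end-to-end in about 45 seconds
floppy_mb = 1.44
floppy_read_secs = 45

# Samsung 990 PRO 2TB: ~7450 MB/s peak sequential read
nvme_mb = 2_000_000
nvme_seq_mb_s = 7_450
nvme_read_secs = nvme_mb / nvme_seq_mb_s   # ~268 s, over 4.5 minutes

# IOPS per MB of capacity
floppy_worst_access = 0.840                # 8 ms step rate -> ~840 ms worst case
floppy_iops_per_mb = (1 / floppy_worst_access) / floppy_mb   # ~0.82
nvme_iops_per_mb = 1_400_000 / nvme_mb                       # 0.7
```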
Minecraft Java Edition for Linux (through an unofficial launcher) should run well on this beast (it's playable even on a Nintendo Switch with Linux installed)
The key difference with the M2 Ultra is that its 192 GB of RAM is also VRAM, which makes it possible to run some 180B LLMs on it, which isn't possible on other consumer-level PCs.
If they use 192GB as VRAM, what will the CPU use, air or water? I'm pretty confident those workloads mostly won't be run on a Mac. There's no way that in real life VRAM can access 192GB on a Mac. I quote what a teacher at a famous university said: "the more advanced your job is, the less Mac you will see"
Hello. How about using it as a web server and for virtualization (ESXi, and Windows Server with Hyper-V)? Can you do some tests for those, and maybe compare with some Xeon processors? How fast is MySQL on these CPUs? I really look at these ARM CPUs and I see they might change the server world, and I'm really thinking of getting an ARM server.
I just read a three-year-old Reddit post saying that the open-source AMD GPU drivers can be used on ARM Linux (an Oland GPU, with blobs for initialization). It's a shame that open-source drivers can't simply be compiled when needed, and that games aren't compiled for Linux (x86 and ARM) and Vulkan. It was a pleasure watching that it is possible, as with SuperTuxKart. It seems NVIDIA and ARM are making SoCs for laptops and handhelds, for MS WOS, ChromeOS, and Linux; perhaps good drivers will come with them in the future.
I would try it on Fedora, which has a vanilla and almost bleeding-edge Linux kernel; plus they have proper NVIDIA support with Wayland now, along with their special RAM config (which requires no swap now). Things might run better. And I always use the Flatpak version of Steam; it runs quite well.
I'm really waiting on ARM to finally come to the desktop scene, especially for gaming and some light office work or programming. I switched from an x86 laptop to a base-model M1 MBA and I'm so impressed with the power efficiency and battery life; it got me through a huge machine learning project. Too bad Apple hates their users, so we'll have to wait for someone with more sense to come to the desktop market with ARM computers that can compete with Apple Silicon.
Every Mac today is essentially a closed box. No upgrades, everything external with limitations. Wires everywhere, no real flexibility left. Mind you the OS is great.
Thanks for sharing this with us, Jeff! I wasn't really aware just how compatible things were with ARM on Linux. I have to admit though, the LLM performance was actually rather poor. A used RTX 3090 (maybe $750?) could run that llama 2 13b model at 10x the inference speed. I'll be very interested once the GPU support with ARM is worked out as that seems like the main issue.
12:19 One of the problems you've likely run into with Valve and Halo games is anti-cheat; both VAC and Easy Anti-Cheat DO NOT like emulation. However, WINE seems to have been made compatible lately, I guess to support the Steam Deck, which doesn't use emulation, just the Proton translation layer, whose predecessor used to get you banned from CS:GO if I recall.
With more companies developing ARM, RISC-V maturing, and even Intel wanting to reconfigure x86-64 to drop a lot of legacy ISA, even 32-bit... we're about to get into one of those crazy time periods. I think Windows' current state on ARM shows the power of the open-source community, in how a lot of Linux maturity for ARM (and RISC-V) happened in the community. Wonder if they will wake up and open-source NT ;-) Technically IBM could, as they're still a partial owner of NT.
I'm super enthusiastic about the Snapdragon X Elite CPU for a development laptop with good battery performance. Apple performance without the Apple tax, and while Asahi Linux is a great idea, hopefully there will be no guessing for the device drivers.
Also, that silent Steam game death is a classic Visual C++ redistributable bug. It was happening to me like crazy on my brand-new Windows laptop: there was a bug making games think the various libraries were installed when they weren't, so their first-run script that checks for the library and installs it if it's missing would report success, but then the game would launch, try to use the missing library, and fail. I bet something similar is happening related to incompatibilities between Linux, ARM, and MSVC.
Could you run virtual machines on this hardware? Could QEMU/KVM emulate a Raspberry Pi, macOS, or even an x86 OS? Imagine running a virtual cluster of Pis! And while Quickemu can run macOS, running the latest Apple Silicon version could be very useful.
The other problem with many CPU cores, even if you have 12 channels, is that if the pieces of data all 128 cores need are on the same stick, then you effectively have only one channel of RAM as far as that workload is concerned. If you're running many VMs or containers with little or no memory shared between them, this problem doesn't occur. (For an example, consider a parallel computation that requires less than 64GB of RAM all in, and imagine it is all stored on a single stick of RAM: you'd get horrendous contention.)
Chalisque What do you run on ARM? VMs, larger data, many users? Optimization! Where is the bottleneck? Consider a parallel computation that requires less than 64GB RAM: rewrite the code!