
Dell's 2U Flagship: The AMD EPYC-powered PowerEdge R7525 

ServeTheHome
703K subscribers
33K views

Published: 16 Sep 2024

Comments: 96
@squelchedotter 3 years ago
If anyone ever makes a compilation of Patrick introducing model numbers for 10 minutes straight, I'd absolutely watch it
@hrobayo1980 3 years ago
It will be awesome... :D
@kellymoses8566 3 years ago
Licensing SQL Server Enterprise for all cores on this would cost a cool $912,384
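That figure is easy to reproduce as back-of-the-envelope arithmetic. A minimal sketch, assuming a fully loaded dual 64-core configuration and roughly $7,128 per core for SQL Server Enterprise (licensing is actually sold in 2-core packs and real pricing varies by agreement):

```python
# Rough SQL Server Enterprise licensing cost for a maxed-out R7525.
# PRICE_PER_CORE is an assumption (~half of a 2-core pack list price).
CORES_PER_SOCKET = 64       # e.g. EPYC 7702/7742/7H12
SOCKETS = 2
PRICE_PER_CORE = 7128       # USD, assumed

total_cores = SOCKETS * CORES_PER_SOCKET        # 128 cores
print(f"{total_cores} cores -> ${total_cores * PRICE_PER_CORE:,}")
# 128 cores -> $912,384
```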
@gamebrigada2 3 years ago
I've been buying R7525s pretty much since Dell had a spec sheet to order from. I love these things. One thing that blew my mind was that the board absolutely takes advantage of all of those PCIe lanes. Dell simply made everything modular because they could. Even the iDRAC module pops into a pretty standard x8 slot. I really hope they keep this general design for a long time.
@UpcraftConsulting 3 years ago
I think Dell might be warming up to AMD on enterprise. I saw they are bringing this platform to some of their turnkey enterprise solutions like VxRail, which was always one of their most conservative platforms for updates. Just set one of these R7525 boxes up last week. Unfortunately I had it drop-shipped to the datacenter and did not get to "play" with it, so the video is nice to see what it actually looks like inside. Got the 16-core model to minimize licensing costs, as it's a small business without a ton of users. One box has replaced four old R620 machines with EqualLogic shared storage, taking the footprint from 6U down to 2U with higher performance overall. It is customer-owned space, but if they were paying for a hosted datacenter it would be a good deal cheaper on rack space alone. I liked the OCP 3.0 option. We put in dual 25GbE but are only using 10GbE DACs for now; it was basically the same price as the 10GbE NIC when Dell discounted these options, so why not get the faster version just in case. (Sometimes the discounts depend on availability, so I guess the 25GbE cards had lots of stock on the shelf.)
@dandocherty248 2 years ago
I bought this server for our company in Fremont. Got it with two 24-core AMD CPUs, 512 GB of DDR4 RAM, and 24x 4 TB drives. It's very powerful.
@qupada42 3 years ago
Loving the comment about the bezels when buying in quantity. I honestly don't think I've ever purchased a bezel intentionally, only when I forgot to remove it from the BOM. One thing you didn't mention that I've found interesting in this generation (the 1U R6525 is the same) is the PSU layout; one on each side of the chassis (rather than the more traditional both on one side) could be either a blessing or a curse depending on where you've installed the PDUs in your datacentre. Almost certainly makes for better airflow/cooling with less in the way behind the CPUs though, especially with these 200W+ SKUs. As for the CPUs, there are some fun performance characteristics for certain heavy workloads. The one that's made the biggest difference to ours is the number of cores per CCX, which affects how many cores are sharing each 16MB block of the L3 cache. The headline-grabbing 64 core parts are great and all, but they all have 4 cores per cache block, which doesn't translate into the real-world performance you're really looking for. The true standout performers are the ones with 2 cores sharing each L3 cache (7F72, 7532, 7302) or with the full 16MB for each core (7F52).
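For anyone who wants to see that CCX/L3 grouping on a live Linux box, the sharing topology is exposed in sysfs; a minimal sketch (standard Linux cache sysfs paths, nothing EPYC-specific assumed):

```python
# List each L3 cache slice and the logical CPUs that share it.
# On a 4-cores-per-CCX Rome SKU you should see groups of 4 cores (8 threads);
# on a part like the 7F52 each slice serves far fewer cores.
import glob

slices = set()
for idx in glob.glob("/sys/devices/system/cpu/cpu*/cache/index*"):
    with open(f"{idx}/level") as f:
        if f.read().strip() != "3":
            continue
    with open(f"{idx}/shared_cpu_list") as f:
        cpus = f.read().strip()
    with open(f"{idx}/size") as f:
        size = f.read().strip()
    slices.add((cpus, size))

for cpus, size in sorted(slices):
    print(f"L3 slice of {size} shared by CPUs {cpus}")
```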
@droknron 3 years ago
The channel is really flying, Patrick. Congratulations on your success and the hard work paying off! :)
@ServeTheHomeVideo 3 years ago
Thanks!
@hugevibez 3 years ago
Never thought I'd see the day I'd see 🤤 quality b-roll of rack servers
@Non-Fungible-Gangsta 3 years ago
That camera quality is so good.
@juergenhaessinger4518 3 years ago
This is amazing. I wish I could afford to put this in my home lab.
@Diegor35 3 years ago
Me too man
@KillaBitz 3 years ago
Perfect for a Plex server. Just add a Quadro RTX 6000 and a boot USB. 8K transcode beast!!
3 years ago
They'll be going in the skip in 4-5 years time.
@majstealth 3 years ago
@ To be replaced by servers that can go up to 16 TB of RAM, not that any sane person would need that in the next 5 years. The last server we deployed was a single-CPU, 128 GB box, still way too much for the client. These massive beasts are only really useful for a handful of applications, and for those it's good they exist, but the bread and butter is still smaller machines.
@VigneshBalasubramaniam 3 years ago
Many customers are asking for firmware from OEMs that don't blow the PSB fuses in EPYC CPUs. Hopefully more enterprise customers ask for it.
@dandocherty2927 2 years ago
I got this server running two 24-core 3rd-gen CPUs, with 7 of the 24 bays populated with 3.85 TB SSDs, a 1 TB NVMe RAID for the VMware ESXi 7 boot OS, and 512 GB of DDR4-3200. Several 10 GbE NICs. This thing is crazy powerful, love it.
@ystebadvonschlegel3295 3 years ago
05:21 $73,218.42 (after $43,516 “discount”)! Was having fun until that moment.
@ServeTheHomeVideo 3 years ago
Huge cost drivers are the 24x NVMe SSDs plus the ultra high-end CPUs. These start at much lower prices :)
@wmopp9100 3 years ago
@@ServeTheHomeVideo drives are always one of the biggest cost drivers (RAM being the other one)
@jfbeam 3 years ago
Starting at $2,000 - lol. (And then add a pair of $7,000 processors and $20,000 of RAM.)
@VraccasVII 3 years ago
I'm so happy that you guys have a YouTube channel; amazing that I didn't find it sooner
@dupajasio4801 3 years ago
I'll be buying a few of those soon. Excellent timing and info. So many config options... And yes, VMware licensing plays a huge part in the decision. Thx
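Since licensing keeps coming up: a hedged sketch of how core counts drive VMware license counts, assuming the per-CPU model with a 32-core cap that VMware moved to in 2020 (terms change, so treat this as illustrative only):

```python
# Illustrative vSphere license count per host under an assumed
# per-CPU model where each license covers up to 32 physical cores.
import math

def vsphere_licenses(sockets: int, cores_per_socket: int, cap: int = 32) -> int:
    return sockets * math.ceil(cores_per_socket / cap)

for cores in (16, 32, 64):
    print(f"2 x {cores}-core EPYC -> {vsphere_licenses(2, cores)} licenses")
# 2 x 16-core -> 2, 2 x 32-core -> 2, 2 x 64-core -> 4
```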
@hariranormal5584 3 years ago
donate me a server.
@jeyendeoso 3 years ago
So this is Patrick, the new CEO of Intel, according to Dr. Ian Cutress? Hahaha
@ServeTheHomeVideo 3 years ago
Ha! TTP!
@xXfzmusicXx 3 years ago
Looks like there is 1 PCIe x1 slot on the board
@philsheppard532 3 years ago
I saw 2: one right of center, one on the left against a wall.
@NTipton90 3 years ago
Just got one of these at work! Super excited!!!
@JMurph2015 3 years ago
Like everyone else is saying, well played to Dell for the serious commitment to modularity on this one.
@berndeckenfels 3 years ago
Yes, great feature (however a bit limited if you consider that the biggest module is the iDRAC). 6 OCP slots or something would be a monster (2x dual-port 50GbE, 2 HBAs, hot-plug SATA DOM, iDRAC) and would be good for HCI servers.
@berndeckenfels 3 years ago
I still don't see what is so great for security about locking the CPUs... maybe that would be worth an interview with Dell/AMD?
@ServeTheHomeVideo 3 years ago
We covered this a bit in the AMD PSB article/ video
@berndeckenfels 3 years ago
@@ServeTheHomeVideo yes you tried to ,)
@kwinzman 3 years ago
Let me optionally turn off the CPU fuse blowing if I don't need that "security" feature.
@Real_Tim_S 3 years ago
What an awesome design concept!! If only the PCIe riser card port(s) were standard, this would be the ultimate design. At least there are several Slimline-8i ports that are more or less industry standard. @ServeTheHome, pass along to Dell that they should just open-source the riser connector and let the market build cards and adapters for it. I could see several uses for repackaging this motherboard if all the ports were made available; as an example, it's the perfect platform concept for real-time CNC machine processing engines with >10 axes... With pin-outs and connector part numbers available I wouldn't see any faults with this platform - which, given the last decade or so of stagnant proprietary system designs, is actually a huge accolade.
@ServeTheHomeVideo 3 years ago
I actually want to do away with standard x16 slots and move to everything cabled. That gives even more flexibility. I was told by a few folks in the industry that doing so costs so much more it is prohibitive at Gen4 speeds.
@Real_Tim_S 3 years ago
@@ServeTheHomeVideo I agree, cabled results in flexibility that form factors inhibit. Clarifying question on "... I was told by a few folks in the industry that doing so costs so much more...": is this "cables are more expensive" or "PCIe x16 connectors and the huge routing congestion and space claim of the conventional PCI add-in-card form factor are more expensive"? For cable speeds, I imagine it's an economy-of-scale question; we're seeing 200GbE QSFP and 400GbE (QSFP-DD) copper at 50Gbps line rates - even PCIe Gen5 should be possible over similar cable materials (and then there's fiber...). This Dell MB basically gets to what I believe is the optimal layout - direct fan-out of parallel memory and power along the shortest path, then fan-out of high-speed lanes along the shortest path via a cable connector. It's approaching a SoC/Raspberry Pi-like mentality of "get this IO off my tuned chip IO as fast as possible so that anyone can design an outer application system". A concept I find a LOT of appeal in.
@MarkRose1337 3 years ago
This fits well into your thesis of SATA being dead in the data center. I like the modularity of this system. I think it will last as long as PCIe Gen4 is relevant. Server of the future indeed. Watching this drinking tea from my STH mug!
@ServeTheHomeVideo 3 years ago
Tea sounds like an excellent idea before filming the next video this afternoon! Thanks for your support.
@mdd1963 3 years ago
~38 TB total (24 x 1.6 TB NVMe drives as shown) is hardly a bulk data storage bonanza/breakthrough...
@ServeTheHomeVideo 3 years ago
@@mdd1963 We often test with lower capacity drives just due to the constraints we have. There are plenty of larger capacity 2.5" options.
@berndeckenfels 3 years ago
@@mdd1963 24 x 8 TB as Tier 0.5 makes great hyperconverged storage nodes (however, for storage alone the 2U is a bit wasteful, but as a hypervisor node it is not too bad, especially with no PCIe switches needed for all the disks). It's of course not a good nearline option or NAS shelf.
@Owenzzz777 3 years ago
@@mdd1963 if you want to, there are 15 TB NVMe SSDs or 30 TB SAS SSDs out there. If you are really crazy, there are 100 TB SATA SSDs. If money is no object, you can actually build larger capacity storage servers with SSDs
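Quick raw-capacity arithmetic for the drive sizes mentioned in this thread (24 front bays, no RAID or formatting overhead considered):

```python
# Raw capacity of the 24-bay chassis at various 2.5" drive sizes.
BAYS = 24
for label, tb in [("1.6 TB NVMe (as reviewed)", 1.6),
                  ("15.36 TB NVMe", 15.36),
                  ("30.72 TB SAS", 30.72)]:
    print(f"{BAYS} x {label}: {BAYS * tb:.1f} TB raw")
# 38.4 TB, 368.6 TB, 737.3 TB
```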
@wilhelmschonfeldt5506 3 years ago
Thanks for the great video. We are actually looking at these servers as high-speed NVMe storage systems that would be served up using DataCore. Just for clarification, is the system able to take risers even when the chassis has the 24-disk NVMe backplane? Also, given that the interface is U.2, would it be able to take normal SSDs? Or would the mixed SATA/SAS + 8x NVMe option be the better bet?
@ServeTheHomeVideo 3 years ago
Riser slots are still available. Only 96 of 160 lanes in this configuration are used by the NVMe SSDs. I would look for a mixed backplane if not going all NVMe. If you do not need all 160x lanes you can do 128 lanes and get the extra inter-socket link for more bandwidth.
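A quick tally of that lane math (the xGMI link counts reflect the usual EPYC Rome 2P options; check the platform guide for Dell's exact wiring):

```python
# PCIe Gen4 lane budget for the 24x NVMe R7525 configuration.
LANES_PER_NVME = 4
NVME_DRIVES = 24

for total_lanes, xgmi_links in ((160, 3), (128, 4)):
    nvme_lanes = NVME_DRIVES * LANES_PER_NVME          # 96 lanes
    left_over = total_lanes - nvme_lanes
    print(f"{total_lanes}-lane mode ({xgmi_links} xGMI links between sockets): "
          f"{nvme_lanes} lanes for NVMe, {left_over} left for risers and the OCP slot")
```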
@tjmarx 3 years ago
Wait, AMD is doing vendor locking? This is a real problem. If it were user-controllable, so I could choose to unlock it at will, that would be a security feature. That I have no control over it whatsoever makes it OEM protection.
@MarkRose1337 3 years ago
It's to prevent firmware tampering, something you want on a server. It basically limits the CPU to booting code that has been signed by the motherboard manufacturer. This does have the side effect of locking the CPU to that vendor.
@Real_Tim_S 3 years ago
It's a one way door. If the system's BIOS requests that the CPU blow fuses to accept initialization microcode signatures from one platform vendor's signature, the CPU will lock the signature of that vendor. That's where you want the switch to be, so Dell is who you want to shake your fist at, not AMD. Unlocking the CPU can't be user controlled by design, as userspace is not a trusted environment. Should someone attempt to install a rootkit and change the platform signature or initialization payload, the CPU will refuse to accept initialization code from the BIOS (machine won't boot). Think about that for a second - how would a CPU differentiate from an attack and a user-requested signing key reset? I feel your pain on this issue - I prefer LinuxBoot/Coreboot with hardware measurement I control than a proprietary BIOS (because I have trust issues with people who do software). Having this signing switch flipped before I would be able to put on my own firmware would require that Dell do the manufacturing test with a Dell-locked CPU and then ship an un-blown part separately (with no performance guarantees) for later installation, so that I could blow the fuses with my own signing key. Pretty sure that's not an option Dell has on their configurator...
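To make the mechanism being debated here concrete: a purely conceptual sketch of a PSB-style one-time vendor lock (this is NOT AMD's actual implementation, just the logic described above):

```python
# Conceptual model: an OTP fuse stores the OEM key hash on first provisioned
# boot; afterwards the CPU only runs firmware signed with that same key.
import hashlib
from dataclasses import dataclass
from typing import Optional

@dataclass
class CpuFuses:
    vendor_key_hash: Optional[bytes] = None   # one-time programmable

def boot(fuses: CpuFuses, vendor_key: bytes, signature_ok: bool) -> bool:
    key_hash = hashlib.sha256(vendor_key).digest()
    if fuses.vendor_key_hash is None and signature_ok:
        fuses.vendor_key_hash = key_hash       # fuses blown: locked to this vendor
    if not signature_ok or fuses.vendor_key_hash != key_hash:
        return False                           # refuse foreign/tampered firmware
    return True

cpu = CpuFuses()
print(boot(cpu, b"oem-A-key", True))   # True  -> CPU now locked to OEM A
print(boot(cpu, b"oem-B-key", True))   # False -> won't boot another vendor's BIOS
```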
@TheBackyardChemist 3 years ago
@@Real_Tim_S They could have designed in a second eFuse, which could be blown by the owner by shorting a jumper, to permanently put the CPU into an unsecured mode and remove the vendor lock.
@demoniack81 3 years ago
@@TheBackyardChemist Exactly, I understand why this feature exists but there is no reason why it shouldn't be made reversible if you have access to the hardware.
@tjmarx 3 years ago
I understand what it's being marketed as, @Mark Rose, but there is no valid security reason to implement a feature like this without a way to turn it off. Not only to make it reversible, but also to stop the fuses from blowing in the first place if you don't need the security in your use case. This isn't a firmware lock, btw; it's a code-base lock that applies to whole firmware lines. That's significant because it has implications for the future. But mostly it's important to note that a security feature like this, implemented securely, would bind to the hardware, not to a vendor or a firmware line. One could attempt to make the argument that it doesn't need to be that secure because it's only trying to catch remote code execution, and that by extending it out to a vendor you gain repair flexibility. And whilst that's true, it's also the case that, because it's a remedy for remote attackers, a system to engage and disengage the security feature through physical means renders the vendor lock unnecessary. In reality this is an implementation that takes a valid feature request from some of the highest-purchasing customers and implements it in such a way that it actively works against the customer and for the OEM channel partner. I suspect this is how AMD gets large OEMs on board with their platform, pushing it to high-end customers and locking them in. I suspect anti-competition lawsuits over it by the end of the decade.
@didjeramauk 3 years ago
Very interesting. Something to look at. I think it would be interesting to look at replacing, say, an 8-node Hyper-V cluster with an FC-attached storage array with 3 of these and something like vSAN or S2D.
@chrisjorgensen1032 3 years ago
I have a bunch of Intel-based R7x0s running ESXi. I'd love to switch to AMD on the next refresh cycle, but mixing AMD and Intel seems like it could be a headache.
@ChristianOhlendorffKnudsen 3 years ago
160 PCIe lanes? That's pretty crazy, but I suppose they need crazy throughput to support several 100G interfaces. Edit: Checked the numbers; 100G NICs will not be the major consumer of PCIe lanes, not when we're talking about 160 lanes. If you deck out the server with ultra-high-throughput disks, they will eat a majority of the PCIe lanes, but, really, 160 Gen4 lanes is a lot!
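The lane math behind that edit, as a rough sketch (Gen4 usable throughput taken as ~1.97 GB/s per lane after 128b/130b encoding):

```python
# How many PCIe Gen4 lanes different consumers need, roughly.
import math

GEN4_GBPS_PER_LANE = 1.97 * 8          # ~15.8 Gbit/s usable per lane

def lanes_needed(link_gbps: float) -> int:
    return math.ceil(link_gbps / GEN4_GBPS_PER_LANE)

print("Single 100GbE port:", lanes_needed(100), "lanes")    # 7  -> an x8 slot
print("Dual-port 100GbE NIC:", lanes_needed(200), "lanes")  # 13 -> an x16 slot
print("24x NVMe drives:", 24 * 4, "lanes")                  # 96 of the 160
```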
@hrobayo1980 3 years ago
It would be awesome if STH could review the Gigabyte R182-Z93...
@creativestarfox 3 years ago
What would be a typical use case for such an expensive and high-end server?
@Mutation666 3 years ago
How expensive are those cables though? I know my HBA PCIe cables were pretty pricey
@ServeTheHomeVideo 3 years ago
Dell likely gets better pricing than individuals on cables
@Mutation666 3 years ago
@@ServeTheHomeVideo Yeah, but think about when these go second-hand and you want to change the setup
@tommihommi1 3 years ago
Modularity turned up to 11
@joevining2603 3 years ago
Maybe I'm the first to say it, and I might be going out on a limb here, but this seems like a very forward-looking design.
@johnmijo 3 years ago
Because we ALL know that LEDs make it GO FASTER :p
@ServeTheHomeVideo 3 years ago
This server is lucky all of the photos/ b-roll was done before the latest crop of RGBWW panels arrived in the studio.
@johnmijo 3 years ago
@@ServeTheHomeVideo Well, I like to post this on the Hardware Unboxed and Gamers Nexus channels, as it is a bit of a meme about RGB ;) That being said, I prefer a more industrial look with no bling; in fact, I remember cutting the LED wires from some fans I mistakenly purchased from CompUSA. Now there is a stroll down memory lane.
@MirkWoot 3 years ago
I am waiting for the giveaway ^^ ... holy christ I'd love to have this to play with... expensive toy though. So interesting with the NVMe SSD bays, even though that's not the newest thing about this server
@tdevosodense 3 years ago
I worked as an IT support tech many years ago, and at that point Dell was best used as a doorstopper 😉
@Amogh-Dongre 3 years ago
Take a shot every time he says PCIe
@mr.z5180 3 years ago
Can this server be set up with RAID to get more IOPS? Interested in buying it :)
@martinenglish6641 3 years ago
Built more like a mainframe. Good.
@webserververse5749 3 years ago
Why am I watching these videos when I know I can't afford new server hardware for my home lab?
@marouanebenderradji137 3 years ago
Can someone please explain to me what a hyperscaler is?
@hariranormal5584 3 years ago
Can the 7H12 be used alone in 1-socket servers?
@ServeTheHomeVideo 3 years ago
Yes but they are not discounted like P series parts
@hariranormal5584 3 years ago
@@ServeTheHomeVideo Odd, because Geekbench only has dual-socket benchmarks of the 7H12 ;p
@SimmanGodz 3 years ago
Fuck that platform locking noise. Better security my ass.
@joealtona2532 3 years ago
Modularity is cool. But I'd prefer standard cross-platform I/O rather than proprietary Dell connectors. Also, CPU locking is a bummer; no thanks, Dell.
@AchwaqKhalid 3 years ago
Sorry. *No PSB-locking servers* for me or our organization ❌
@berndeckenfels 3 years ago
Man that iDRAC wastes more space than the MB ,)
@_MrSnrub 3 years ago
Patrick, can I trade you my R720xd for this?
@berndeckenfels 3 years ago
Dell doesn't need screwless caddies since they don't sell empty caddies ,)
@ServeTheHomeVideo 3 years ago
Someone, somewhere has to do it.
@berndeckenfels 3 years ago
@@ServeTheHomeVideo Might be a 12-year-old who is happy not to be starving (bad humor attempt), or a robot planning world domination while working on the conveyor belt. BTW: isn't it unlucky to put all 4 screws in? I always stop at 3 ,)
@BaMb1N079 3 years ago
Locked to Dell? So it is not up to the customer that pays for the device, the maintenance, and the support what he/she is going to do with parts they own? What sick shit is that?
@2xKTfc 3 years ago
Geez, these toys are getting almost as spendy as The Signal Path's fancy toys!
@BR0KK85 3 years ago
What's next, Dell... soldered-on RAM? :D
@ServeTheHomeVideo 3 years ago
Likely the video after our next one (ETA later this week/ weekend) will have a Lenovo system with soldered DRAM
@BR0KK85 3 years ago
@@ServeTheHomeVideo I knew it... everyone is doing an Apple these days... Will watch the video as soon as it hits YT
@manuelsuazo1125 3 years ago
1k like, lol
@billymania11 3 years ago
These power-hungry rack machines are like dinosaurs compared to the Apple M1 technology.
@nilswegner2881 3 years ago
No. Apple has got nothing to do with datacenters.