Really like the layout of these new nodes. As you already alluded to, cabling the back properly, including Network, and OOB, is essential to preventing a nightmare scenario later on of a down node that you can't remove because a live node is cabled over top of it! Also really like that they're using HDS Connectors now on the sleds. The gold fingers of the previous generation were way more difficult to line up (and we've seen at least one node short out when installing, taking out the whole block).
I'm really not sure why, but swapping a node or a blade in a blade server might be one of the most satisfying things you can do in a data center. There is just something about sliding something this big into a system and having it fit perfectly.
Someone needs to get that power supply to Dave from EEVblog; I'd love to see a teardown and analysis of it. 12V at 216A in that skinny little thing is awesome.
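For the curious, a quick sanity check on what that rating works out to in watts, using only the numbers from the comment above (the PSU class name is my own back-of-the-envelope inference, not a confirmed spec):

```python
# 12V rail rating quoted above: 12 V at 216 A
volts = 12.0
amps = 216.0
watts = volts * amps
print(f"{watts:.0f} W")  # prints "2592 W", i.e. roughly a 2600 W-class supply
```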
This is an awesome unit. I've been looking at these types of systems for the past month, so thanks for this nice, timely video. It would have been nice if you fired one up to give a sense of the acoustics. It would also have been nice to address the cooling issue with higher-TDP parts that Supermicro warns about. Dell has a similar system but offers liquid cooling as an option.
For me, the AMD version of this platform makes a lot more sense. I work in the third world, where datacenter rack space and power are a lot more expensive. We can't waste rack space on inefficient Intel processors; a 64-core Epyc is more power efficient and has better performance. There are so few general-purpose workloads where Intel can compete on density. Now, if they crammed their socket full of those new Alder Lake E-cores and a handful of P-cores, then they would have my attention.
I think I prefer the Lenovo D2 chassis. Firstly, the nodes are removed from the front. You have no idea how hard removing a fully cabled-up sled out the back is. Let's just say it is a right pain in the backside. With the SD530s I can just slide them out the front and not have to unplug anything. Second, you only need to make one network connection to the D2 chassis to connect up all four BMCs on the nodes. That is very handy and saves a whole bunch of cabling and management switch ports, and fewer cables also means better airflow around the back of the system. It also gives a single point of management for all the shared features such as fans and power supplies.
I can tell you I 100% know about removing fully cabled sleds. We have about 30U of 2U4N chassis since it is an easy way to have many CPUs online for testing. The key is bundling cables just outside each node, then bundling again once they hit the side of the rack and go to the switches. That small change makes it relatively easy to remove this style of node. The shared chassis management is nice, but when I publish a piece on other vendors, it will include something that should raise eyebrows about the chassis management features out there.
@@ServeTheHomeVideo I really, really should take a picture of the back of one of our racks. Thankfully it is a Lenovo D2; there is absolutely zero chance of bundling the necessary cables to the side. It does not help that 100Gbps DAC cables are as thick as your finger.
Agreed. However, the D2 does have the serious annoyance of having to pull the entire rear assembly (including the PSUs) to get at the card slots. And the chassis is HUGE at nearly 4' long; to get the rear module out, I have to partly remove the chassis from the rack. It took almost a year before I could add cards and two more nodes to one because I can't turn the thing off. It was done when the chassis was being moved, which made it much easier to pull everything apart on the workbench vs. in the rack. (30 cables... 24 DAC, 2 Cat6, 4 power.) No amount of neatness can make it easy to pull nodes out the back.
@@jfbeam Homer Simpson moment coming right up. You do realize that the PCI cards can be removed from the rear: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-o_bjkU6aFvM.html
@Patrick Any idea when they're going to put E1.S or E3 on these units? The Yosemite V3 spec with 4x E1.S per node makes a lot of sense, since you would rarely mess with the I/O and would be more likely to replace a failed node or drive. However, Yosemite is 4U and this is 2U... decisions, decisions.
Intel has those 30TB Gen 4 drives, the long E1.L ones. You technically could fit 64 of them in a 2U BigTwin; that's 16 per node. In this unit, you have 128 PCIe lanes per node: 48 for the PCIe slots, 8 for internal boot, and 24 for the current NVMe drives. That leaves another 48 lanes, so you could technically direct-attach 16 E1.L drives per node at x2 or x3 link widths (x4 on all 16 would need 64 lanes) and still have some lanes to play with.
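The lane budget above can be sketched out quickly. This just re-derives the arithmetic from the comment (all the allocation numbers come from that comment, not from a published Supermicro block diagram):

```python
# Hypothetical per-node PCIe lane budget for this 2U4N platform
total_lanes = 128   # 2 sockets x 64 Gen4 lanes each
slots = 48          # PCIe expansion slots
boot = 8            # internal boot devices
nvme = 24           # current front NVMe bays (6 drives x4)

spare = total_lanes - slots - boot - nvme
print(spare)        # prints 48

# How many direct-attached drives fit in the spare lanes at each link width
for width in (4, 3, 2):
    print(f"x{width}: {spare // width} drives")
# x4: 12 drives, x3: 16 drives, x2: 24 drives
```

So 16 drives per node only works direct-attach if you drop below x4 per drive (or steal lanes from the slots).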
@@ServeTheHomeVideo Yea, exactly like that! Did they give you any indication of when they were going to launch that model the last time you visited Supermicro's booth? (That announcement was 2 years ago, lol 😂.) There's no trace of it anywhere in the catalogue or on the website.
Can't imagine the nodes are hot-swappable, seeing that they only eject from the back; you would need an open-back cabinet, and that is only if the hardware supports it. But if they were, these would be great for redundancy, minimizing or outright eliminating downtime.
They are designed so that the chassis can be running and new nodes just plugged in. You can take a node out, put it on the cart waiting for service, immediately put a new node back in. It will power on and connect to the drives on the front of the system and be up and running again. It is super cool and makes service way easier when you have a bunch of these.
@@ServeTheHomeVideo Okay, that's cool. It would be great for a homelab or maybe a small business with smaller, accessible cabinets. I have to wonder, though, if there is a more effective way to make everything accessible; unless you have an open-back cabinet, wouldn't you need to pull the whole chassis out to reach the back and replace the nodes? (Forgive my ignorance if I'm missing something; I'm new to the whole homelab scene.)
@@ServeTheHomeVideo Except most software these days detects the physical hardware it's running on, so effectively moving the drives from one system into another creates a mess. (Try that with ESXi...)
Indeed. Few design their racks for work in the hot aisle. The last computer room I built had only 2' behind the racks (4' in front); I only had so much room to work with. And nothing ever goes in from the back. (Tiny switches, maybe; the larger fabric switches slide in backwards from the front!)
@@jfbeam Maybe a design that has all the swappable parts on one side, with things like fans or radiators in the back. I still have to admit all that modularity in a 2U form factor is pretty impressive. If I had to put my own thoughts on the design out there: maybe make it a 3U with the drive bays on top of the 2U segment, and ventilation on top, so that the combined thermal runoff from disks and components can more easily escape through convection.
@@ServeTheHomeVideo I found it. The only sad thing is the spec on the AMD model: 6 hot-swap 2.5" drive bays, 4 NVMe (Gen3), compared to 6 NVMe Gen4 for the Intel version.
That is the standard heatsink retention design for 3rd Gen Xeon Scalable, both the Cooper Lake and Ice Lake versions. Next-gen Sapphire Rapids also uses it, so the entire industry is using the same design.
The node connector has 15 x 16 pins; are they all used? Considering the power demands, that does not leave many left for actual I/O... Great review, thanks!
I'm confused. The still image for the video says 288 cores, but in the video itself you mentioned that it can go up to 40 cores per CPU, which means up to 80 cores per node or 320 cores per chassis. (40 cores per socket x 2 sockets per node x 4 nodes per chassis = 320 cores.)
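The arithmetic in that comment checks out, and also shows what the thumbnail's 288 figure would correspond to (the 36-core interpretation is my own guess, not confirmed by the video):

```python
sockets_per_node = 2
nodes_per_chassis = 4

# Top-bin 40-core parts, as stated in the video
print(40 * sockets_per_node * nodes_per_chassis)  # prints 320

# The thumbnail's 288 would instead match 36-core CPUs: 36 * 2 * 4
print(36 * sockets_per_node * nodes_per_chassis)  # prints 288
```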
How is this product different than a "regular" blade enclosure with 4 blades? In this day and age of virtualization, aren't blade systems a thing of the past?
Two main differences. First, this is 2U; most blade enclosures are larger. Second, this does not have the integrated switching that blade enclosures normally include. 2U4N is very popular in virtualization farms.
@@ServeTheHomeVideo It is also highly popular in HPC. I should know; we have lots and lots of them at work. My beef with them is the C20 power sockets, required because some backward countries have 110VAC mains supplies. In more sensible countries with 230VAC mains, a standard C14 socket can deliver enough power.
@@jonathanbuzzard1376 The power supplies in the one reviewed here don't support 120V; they are specified for 208V to 240V supplies. They still need the C20 inlets to cover their power requirements while complying with the IEC current ratings for the connectors. (For reasons I don't fully understand, UL will approve connectors at higher than their IEC ratings, but that only applies to North America.)
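A rough sketch of why the C20 inlet is needed even on 230V. The 2600W figure is an assumed nameplate for a high-end 2U4N supply (not confirmed in the thread), and PSU efficiency losses, which would push input current slightly higher, are ignored:

```python
# IEC 60320 rates C13/C14 couplers at 10 A and C19/C20 at 16 A.
C14_RATING_A = 10
C20_RATING_A = 16

psu_watts = 2600  # assumed nameplate for illustration
for volts in (208, 230, 240):
    amps = psu_watts / volts
    print(f"{volts} V -> {amps:.1f} A")
# 208 V -> 12.5 A, 230 V -> 11.3 A, 240 V -> 10.8 A:
# all exceed the 10 A C14 rating, so a 16 A C20 inlet is required.
```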
@Yaron VPX was last updated in 2019. You can get 12+ blades in a chassis, in 3U or 6U form factors depending on whether the chassis takes small or large boards. Some other differences are up to 768 watts per board and a much larger board, which can hold more components. Such systems are popular on larger aircraft, with sky's-the-limit pricing. 3U and 6U are also used on the ground.