
High end 2U 4 Node Supermicro BigTwin 

ServeTheHome
734K subscribers · 28K views

Published: 14 Oct 2024

Comments: 60
@codyweaver5721 • 2 years ago
Really like the layout of these new nodes. As you already alluded to, cabling the back properly, including Network, and OOB, is essential to preventing a nightmare scenario later on of a down node that you can't remove because a live node is cabled over top of it! Also really like that they're using HDS Connectors now on the sleds. The gold fingers of the previous generation were way more difficult to line up (and we've seen at least one node short out when installing, taking out the whole block).
@p3chv0gel22 • 2 years ago
I'm really not sure why, but swapping a node or a blade in a blade server might be one of the most satisfying things you can do in a data center. There is just something about sliding something this big into a system and having it fit just perfectly.
@Gastell0 • 2 years ago
The sci-fi feeling of working with a modular supercomputer, today.
@KameshSS • 2 years ago
Agreed. I do that with our Intel S2600 2U servers, haha :P, but only during deployments.
@2xKTfc • 2 years ago
And then the vendor screws up and sends you a unit that sticks out of the front by 1-2 inches. Aaah, the disorder!
@AaezI • 2 years ago
Someone needs to get that power supply to Dave from EEVBlog; I would love to see a teardown and analysis of it. 12V at 216A in that skinny little thing is awesome.
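For context, a quick sanity check on that rail figure (a minimal sketch; the 12 V / 216 A numbers come from the comment above, and the efficiency value is an assumed placeholder, not a Supermicro spec):

```python
# Back-of-the-envelope output/input power for a 12 V, 216 A PSU rail.
# Voltage and current are the figures quoted in the comment above;
# the efficiency number is an assumption (roughly Titanium-level at load).
volts = 12.0
amps = 216.0
assumed_efficiency = 0.96

output_watts = volts * amps                      # ~2592 W on the 12 V rail
input_watts = output_watts / assumed_efficiency  # ~2700 W drawn from the wall

print(f"Output: {output_watts:.0f} W, estimated input: {input_watts:.0f} W")
```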
@rahulprasad2318 • 2 years ago
Who?
@ServeTheHomeVideo • 2 years ago
Probably not a high priority since that is part of the BigTwin's secret sauce for Supermicro.
@TheAnoniemo • 2 years ago
@@rahulprasad2318 DAVE FROM EEVBLOG
@MarekKnapek • 2 years ago
@@rahulprasad2318 He meant Danyk from Diode Gone Wild.
@shadowmist1246 • 2 years ago
This is an awesome unit. I've been looking at this type of system for the past month, so thanks for this nicely timed video. It would have been nice if you fired one up to give a sense of the acoustics. It would also have been nice to address the cooling issue regarding the higher TDPs that Supermicro warns about. Dell has a similar system but offers liquid cooling as an option.
@ServeTheHomeVideo • 2 years ago
Super loud. Good point on the cooling. Will did that in the main site article.
@ferdievanschalkwyk1669 • 2 years ago
For me, the AMD version of this platform makes a lot more sense. I work in the third world, where datacenter rack space and power are a lot more expensive. We can't waste rack space on inefficient Intel processors. The 64-core EPYC is more power efficient and has better performance. There are very few general-purpose workloads where Intel can compete on density. Now, if they crammed their socket full of those new Alder Lake E-cores and a handful of P-cores, then they would have my attention.
@morosis82 • 2 years ago
They'll have to do it quickly; AMD is supposed to be doing that very thing with 128 cores per package sometime next year or early 2023, I think.
@jonathanbuzzard1376 • 2 years ago
I think I prefer the Lenovo D2 chassis. First, the nodes are removed from the front. You have no idea how hard removing a fully cabled-up sled out the back is; let's just say it is a right pain in the backside. With the SD530s I can just slide them out the front and not have to unplug anything. Second, you only need to make one network connection to the D2 chassis to connect up all four BMCs on the nodes. That is very handy and saves a whole bunch of cabling and management switch ports, and fewer cables also means better airflow around the back of the system. It also gives a single point of management for all the shared features such as fans and power supplies.
@ServeTheHomeVideo • 2 years ago
I can tell you I 100% know about removing fully cabled sleds. We have about 30U of 2U4N chassis since it is an easy way to have many CPUs online for testing. The key is bundling the cables just outside each node, then bundling again once they hit the side of the rack and run to the switches. That small change makes it relatively easy to remove this style of node. The shared chassis management is nice, but whenever I publish it (concerning other vendors), I will have a piece about something that should raise eyebrows regarding the chassis management features out there.
@jonathanbuzzard1376 • 2 years ago
@@ServeTheHomeVideo I really, really should take a picture of the back of one of our racks. Thankfully it is a Lenovo D2; there is absolutely zero chance of bundling the necessary cables to the side. It does not help that 100Gbps DAC cables are as thick as your finger.
@jfbeam • 2 years ago
Agreed. However, the D2 does have the serious annoyance of having to pull the entire rear assembly (including the PSUs) to get at the card slots. And the chassis is HUGE at nearly 4' long. To get the rear module out, I have to partly remove the chassis from the rack. It took almost a year before I could add cards and two more nodes to one because I can't turn the thing off. It was done when the chassis was being moved, which made it much easier to pull everything apart on the workbench vs. in the rack. (30 cables: 24 DAC, 2 Cat6, 4 power.) No amount of neatness can make it easy to pull nodes out the back.
@jonathanbuzzard1376 • 2 years ago
@@jfbeam Homer Simpson moment coming right up. You do realize that the PCIe cards can be removed from the rear: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-o_bjkU6aFvM.html
@JonMasters • 2 years ago
Have an awesome day, Patrick!
@ServeTheHomeVideo • 2 years ago
Congrats on the big marathon!
@Jormunguandr • 2 years ago
I love blade-style rack servers for clusters.
@bryanv.2365 • 2 years ago
@Patrick Any idea when they're going to put E1.S or E3 on these units? The Yosemite v3 spec with 4x E1.S per node makes a lot of sense, since you would rarely mess with the I/O and are more likely to replace a failed node or drive. However, Yosemite is 4U and this is 2U... decisions, decisions.
@ServeTheHomeVideo • 2 years ago
Are you thinking about something like this? www.servethehome.com/supermicro-bigtwin-e1-s-edsff-edition-launched/
@shadowmist1246 • 2 years ago
Intel has these 30TB Gen4 drives, the long ones. You technically could fit 64 of these in a 2U BigTwin; that's 16 per node. In this unit you have 128 PCIe lanes per node: 48 for the PCIe slots, 8 for internal boot, and 24 for the current NVMe drives. That leaves another 48 lanes, so you technically could have 16 E1 drives per node (direct connect) and still have some lanes to play with.
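A quick check of that lane budget, using only the per-node numbers in the comment above (the 4-lanes-per-drive figure is an assumption, since EDSFF E1 drives commonly run x4):

```python
# Per-node PCIe lane budget as described in the comment above.
total_lanes   = 128   # per dual-socket node, as stated in the comment
pcie_slots    = 48
internal_boot = 8
front_nvme    = 24    # lanes feeding the existing front NVMe bays

remaining = total_lanes - pcie_slots - internal_boot - front_nvme   # 48
lanes_per_e1_drive = 4   # assumption: typical x4 EDSFF drive

print(remaining // lanes_per_e1_drive)   # 12 drives fit in the spare 48 lanes
# 16 x4 drives need 64 lanes, so reaching 16 per node would also require
# repurposing the 24 lanes that currently feed the front NVMe bays.
```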
@shadowmist1246 • 2 years ago
@@ServeTheHomeVideo Thanks for that link. I've not seen this before. That's great.
@bryanv.2365 • 2 years ago
@@ServeTheHomeVideo Yeah, exactly like that! Did they give you an indication of when they were going to launch that model the last time you visited Supermicro's booth? (That announcement was 2 years ago, lol 😂.) There's no trace of it anywhere in the catalogue or on the website.
@festro1000 • 2 years ago
Can't imagine the nodes are hot-swappable, seeing that they only eject from the back; you would need an open-back cabinet, and that is only if the hardware supports it. But if they were, these would be great for redundancy, minimizing or outright eliminating downtime.
@ServeTheHomeVideo • 2 years ago
They are designed so that the chassis can be running and new nodes just plugged in. You can take a node out, put it on the cart waiting for service, immediately put a new node back in. It will power on and connect to the drives on the front of the system and be up and running again. It is super cool and makes service way easier when you have a bunch of these.
@festro1000 • 2 years ago
@@ServeTheHomeVideo Okay, that's cool. It would be great for a homelab or maybe a small business with smaller, accessible cabinets, though I have to wonder if there is a more effective way to make everything accessible. I mean, unless you have an open-back cabinet, wouldn't you need to pull out the whole chassis to access the back and replace the nodes? (Forgive my ignorance if I'm missing something; I'm new to the whole homelab scene.)
@jfbeam • 2 years ago
@@ServeTheHomeVideo Except most software these days detects the physical hardware on which it's running, so effectively putting the drives from one system into another creates a mess. (try that with ESXi...)
@jfbeam • 2 years ago
Indeed. Few design their racks for work in the hot aisle. The last computer room I built had only 2' behind the racks (4' in front); I only had so much room to work with. And nothing ever goes in from the back. (Tiny switches, maybe; the larger fabric switches slide in backwards from the front!)
@festro1000 • 2 years ago
@@jfbeam Maybe a design that has all the swappable parts on one side, and things like fans or radiators in the back. Still, I have to admit all that modularity in a 2U form factor is pretty impressive. Though if I had to put my thoughts on the design out there: maybe make it a 3U, put the drive bays on top of the 2U segment, and add ventilation on top so the combined heat from the disks and components can escape more easily through convection.
@JohnAngelmo • 2 years ago
Does this also come with support for EPYC processors? And if so, what is the name?
@ServeTheHomeVideo • 2 years ago
There is an EPYC BigTwin as well.
@JohnAngelmo • 2 years ago
@@ServeTheHomeVideo I found it. The only sad thing about the AMD model: 6 hot-swap 2.5" drive bays with 4 NVMe (Gen3), compared to 6 NVMe Gen4 on the Intel version.
@AchwaqKhalid • 2 years ago
Hi Patrick from STH 🙋🏻‍♂️
@youtubecommenter4069 • 2 years ago
Patrick, I'm hoping you do a similar review of Intel CPUs paired with NVIDIA A100s, like you did for the 2x AMD EPYC 7763 CPUs in an ASUS server chassis.
@Cooper3312000 • 2 years ago
I had no idea you were here in Austin, wow.
@ServeTheHomeVideo • 2 years ago
Moved over the summer.
@SIC66SIC66 • 2 years ago
Are those nuts holding the heatsinks on plastic? :/ That's probably going to break after running under some load for a year.
@ServeTheHomeVideo • 2 years ago
That is the standard heatsink retention design for 3rd Gen Xeon Scalable, both the Cooper Lake and Ice Lake versions. Next-gen Sapphire Rapids also uses these, so the entire industry is using them.
@jfkastner • 2 years ago
The node connector has 15 x 16 pins; are they all used? Considering the power demands, that does not leave many left for actual I/O... Great review, thanks!
@aimannorazman7959 • 2 years ago
Those are really long power supplies, even though there are only two of them.
@ewenchan1239 • 2 years ago
I'm confused. The still image for the video says 288 cores, but in the video itself you mentioned that it can go up to 40 cores per CPU, which means up to 80 cores per node, or 320 cores per chassis. (40 cores per socket * 2 sockets per node * 4 nodes per chassis = 320 cores.)
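The arithmetic in the comment checks out for 40-core parts; for reference, a 288-core total would line up with 36-core SKUs instead (a sketch; the 36-core mapping is an assumption about the thumbnail, not something stated in the video):

```python
# Chassis core-count arithmetic from the comment above.
sockets_per_node  = 2
nodes_per_chassis = 4

print(40 * sockets_per_node * nodes_per_chassis)  # 320 cores with top-bin 40-core SKUs
print(36 * sockets_per_node * nodes_per_chassis)  # 288 cores, matching the thumbnail
                                                  # if 36-core SKUs are assumed
```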
@paulvancyber1979 • 2 years ago
wow!
@markwoll • 2 years ago
Looks like our Nutanix systems.
@BrianPuccio • 2 years ago
Hasn’t SuperMicro been the OEM for Nutanix for several years now?
@ServeTheHomeVideo • 2 years ago
That is a wise observation.
@ServeTheHomeVideo • 2 years ago
Nutanix even had co-located employees watching over production years ago.
@yaronilan2317 • 2 years ago
How is this product different from a "regular" blade enclosure with 4 blades? In this day and age of virtualization, aren't blade systems a thing of the past?
@ServeTheHomeVideo • 2 years ago
Two main differences. First, this is 2U. Most blade enclosures are larger. Second, this does not have the integrated switching that you normally find in data centers. 2U4N is very popular in virtualization farms.
@jonathanbuzzard1376 • 2 years ago
@@ServeTheHomeVideo It is also highly popular in HPC; I should know, we have lots and lots of them at work. My beef with them is the C20 power sockets, required because some backward countries have 110VAC mains supplies. In more sensible countries with 230VAC, a standard C14 socket is able to deliver enough power.
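To put rough numbers on the connector argument (a sketch; 10 A and 16 A are the nominal IEC ratings for C13/C14 and C19/C20 couplers, and a given PSU's actual input spec may differ):

```python
# Apparent power ceilings for C14 vs C20 inlets at common mains voltages,
# using the nominal IEC coupler current ratings (10 A and 16 A).
ratings_amps = {"C13/C14": 10, "C19/C20": 16}
voltages = [120, 208, 230, 240]

for coupler, amps in ratings_amps.items():
    for volts in voltages:
        print(f"{coupler} at {volts} V: {volts * amps} W")
```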
@jfbeam • 2 years ago
@@jonathanbuzzard1376 Did you look at the power connectors on this one?
@petermichaelgreen • 2 years ago
@@jonathanbuzzard1376 The power supplies in the one reviewed here don't support 120V; they are specified for 208V to 240V supplies. They still need the C20 inlets to cover their power requirements while complying with the IEC current ratings for the connectors (for reasons I don't fully understand, UL will approve connectors at higher than their IEC ratings, but that only applies to North America).
@LowLightVideos • 2 years ago
@Yaron VPX was last updated in 2019. You can get 12+ blades in a 3U or 6U chassis, depending upon whether the chassis takes large or small boards. Some other differences are 768 watts per board and a much larger board, which can hold more components. Such systems are popular on larger aircraft, with sky's-the-limit pricing. 3U and 6U are also used on the ground.
@manslayerdbzgt • 2 years ago
I heard that Intel is discontinuing Optane, though.
@ServeTheHomeVideo • 2 years ago
Where did you hear that from?
@WhenDoesTheVideoActuallyStart • 2 years ago
It isn't.