
Inside a 400GbE 32-port Teralynx 7 Hyper-scale Switch 

ServeTheHome
623K subscribers
31K views

STH Main Site Article: www.servethehome.com/inside-a...
STH Merch on (Tee)Spring: the-sth-merch-shop.myteesprin...
STH Top 5 Weekly Newsletter: eepurl.com/dryM09
We take a look at the Innovium Teralynx 7 hyper-scale switch. Innovium let us borrow a 32x 400GbE switch built to run in hyper-scale data centers and take it apart. We also get to see the switch pass 12.8Tbps of traffic bi-directionally through all 32 ports.
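A quick back-of-envelope check of that headline figure (plain arithmetic, nothing beyond what the video states):

```python
# Back-of-envelope check of the switch's aggregate bandwidth.
ports = 32
port_speed_gbps = 400

aggregate_tbps = ports * port_speed_gbps / 1000  # per direction
print(f"{ports} x {port_speed_gbps}GbE = {aggregate_tbps} Tbps of switching capacity")
# 32 x 400GbE = 12.8 Tbps, matching the Teralynx 7 spec; with full-duplex traffic,
# 12.8 Tbps flows in each direction through the ASIC at the same time.
```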
----------------------------------------------------------------------
Timestamps
----------------------------------------------------------------------
0:00 Introduction
1:41 Hardware Overview
6:44 12.8Tbps Performance
7:33 Power Consumption
12:11 Wrap-up
----------------------------------------------------------------------
Other STH Content Mentioned in this Video
----------------------------------------------------------------------
- What is a BMC? www.servethehome.com/explaini...
- Arista 3.2T 32x 100GbE switch: • Inside an Arista 32x 1...

Science

Published: 12 Jul 2024

Comments: 132
@NetBandit70 3 years ago
Cheap used 100GigE switches: SOON
@Felix-ve9hs 3 years ago
*Soon™*
@ewenchan1239 3 years ago
"Cheap" is a relative term. I already have 100 Gbps Infiniband deployed in my basement. (with a 36 port, 100 Gbps Infiniband switch)
@nathancreates 3 years ago
@asdrubale bisanzio As noise dampening.
@George-664 3 years ago
Used Arista 7060CX switches are not very expensive.
@Nagilum3 3 years ago
Even new it's not super expensive anymore.
@Mireaze 3 years ago
I kinda want to get into the high end networking market. Simply so we can have more interesting colours than just dark grey/black and blue
@steveheist6426 3 years ago
Just remember that whatever color you do has to look professional / bland.
@udirt 3 years ago
Good reason to miss Extreme Networks, Acacia, or Gadzoox for FC. idk which other ones I miss that had pretty looks?
@yt2094321001 3 years ago
Build firewalls, those are allowed to be red
@Mireaze 3 years ago
@@steveheist6426 it's gonna be neon pink, and they'll just have to deal with it
@recklessthor4 3 years ago
Do it!!
@dupajasio4801 3 years ago
So hard to see guts of these anywhere else. And the tech details as well without all the marketing fluff.
@edc1569 3 years ago
Ahh, I was looking for a new switch for the summer house, this one looks like it will fit the bill perfectly!
@Nagilum3 3 years ago
For the network or as main heating? ;P
@suntzu1409 2 years ago
@@Nagilum3 for cooking and heating
@suntzu1409 2 years ago
For networking, right? Right?
@suntzu1409 2 years ago
For studies, right? Right?
@jonathangroggins116 3 years ago
I love your enthusiasm, really passionate about technology and you clearly convey the details about everything! Thanks dude, great work - I've subscribed!
@ErraticPT 3 years ago
I'm still lusting after a 10Gb network, nevermind a 400Gb one! 😂
@Xcieg 3 years ago
Haha, me too. I just got Gigabit internet too.
@KayJay01 3 years ago
@@owowowdhxbxgakwlcybwxsimcwx 100% the Aruba lineup is fantastic. Stay away from MikroTik though
@unixpro1970 1 year ago
@@KayJay01 What is wrong with MikroTik?
@prodeous 3 years ago
I just upgraded my backbone at home to 10Gbit/s SFP+ ports (old used server hardware really makes it possible), but now this... 400GbE... ouch. I want it. I don't know why, and I know I'll never use even a fraction of it, but I want it for home use LOL. Another great review.
@Vatharian 3 years ago
I just dipped my toes into 40Gb at home. There is virtually zero uplift from 10Gb, since none of my hardware is capable of sending data down the pipes fast enough (storage bottlenecks). The positive change is that I got rid of the OVS setup on a desktop PC with 4 dual SFP+ cards and got a proper switch, so latency is down an order of magnitude, it uses less space (1U instead of a full tower), I got back 4 cards to use, and I gained 15 additional ports. The downside is constant fan noise (my switch is notoriously hard to modify for quiet operation -- I guess that's why it was so cheap in the first place) and a rather significant power draw (120-180 W constant instead of the 60-100 W of the old PC-based OVS). Price-wise... QSFP is expensive. DACs are very bend-unfriendly, transceivers are very expensive, and compatibility is limited, with almost zero force driving more products. I think 40G is in a dark spot for the home lab, since most development now is focused on 100G.
@SmokeytheBeer 3 years ago
Glorious B-roll!
@agratefulheart512 11 months ago
Thank you for sharing.
@MrRolloTamasi 3 years ago
I'll stick to SG108E for the moment. Even if they are not quite as hyper scaled... 😃
@Felix-ve9hs 3 years ago
I have two of the SG108E + a GS1900-24E, Gigabit Ethernet is totally fine for a Homelab :)
@denvera1g1 3 years ago
This should be a nice incremental upgrade for my home network :P
@starbuk138 3 years ago
Would have loved a bit more info on what that Xeon D was for...? And also would love to see temperature graphs of the QSFP cages as they start pushing 400Gbps...
@ServeTheHomeVideo 3 years ago
The Xeon D is for running SONiC. Do not have temp charts for the cages though.
@hgbugalou 2 years ago
Ah yes, when the patch cables cost more than the computer I am typing this on, you know you are pushing packets.
@sagumekishin5748 2 years ago
Edgecore has released DCS810, same 12T switching but P4 programmable with Tofino 2, would be cool if you can do a video on that.
@jasonmeinhardt4631 3 years ago
The real question is when can I put one in for my home network 🤔
@vmoutsop 3 years ago
That is bad ass. Wish I could put that in my home. Ha ha🤣
@MalteJanduda 3 years ago
Afaik 400G uses 8x 50G lanes. So if you want to break out to standard 100G (4x25G) you have to reduce the lane speed by half and then you can break out 64x100G and not 128x100G.
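The lane arithmetic behind the 64-vs-128 numbers discussed in this thread can be sketched roughly as follows; the lane speeds and breakout modes are common QSFP-DD conventions assumed here, not figures confirmed in the video:

```python
# Hypothetical breakout math for a 32-port, 8-lane-per-cage 400GbE switch.
FRONT_PANEL_PORTS = 32
LANES_PER_CAGE = 8

breakouts = {
    # name: (lane speed in Gb/s, lanes per logical port)
    "4x100G (100G as 2x50G PAM4)": (50, 2),
    "2x200G (200G as 4x50G PAM4)": (50, 4),
    "2x100G (legacy 100G as 4x25G NRZ)": (25, 4),
}

for name, (lane_gbps, lanes) in breakouts.items():
    logical_per_cage = LANES_PER_CAGE // lanes      # sub-ports per physical cage
    total_ports = FRONT_PANEL_PORTS * logical_per_cage
    print(f"{name}: {logical_per_cage} per cage -> "
          f"{total_ports}x {lane_gbps * lanes}GbE")
# -> 128x 100GbE when 100G runs on 2x50G lanes, 64x 200GbE, but only 64x 100GbE
#    when each 100G port needs 4x25G NRZ lanes, which reconciles both comments.
```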
@jacob_90s 1 year ago
Can't wait to see Linus upgrade to one of these next year
@ServeTheHomeVideo 1 year ago
We actually have a 64x 400GbE switch in the lab that you will see on the STH main site in January. Not sure if we are going to do a video on it though.
@AnuragTulasi 3 years ago
Patrick, make a video on the functions of different network ports like BNC, IPMI, CONSOLE, out-of-band management, and in-band management.
@ServeTheHomeVideo 3 years ago
Good idea
@SimmanGodz 2 years ago
At some point, equipment will just be 99% heatsink.
@ServeTheHomeVideo 2 years ago
Just to give you a sense, in some high-end segments we are already at around 10% of system power going to running fans for cooling. Projections go to 20% in the next few years if the transition to liquid cooling does not happen.
@Xcieg 3 years ago
I wonder if this will be fast enough to run my gigabit internet. 😅
@DJ-Manuel 3 years ago
Can those 400 Gbit ports easily be split up to 4x 100 Gbit? If so, I'll need to order a few of those 😅
@ServeTheHomeVideo 3 years ago
Yes. 128x 100GbE or 64x 200GbE as well. Aggregation is a major use case for this.
@DJ-Manuel 3 years ago
OK, thanks, now just going to search where I can get my hands on a few of those and empty my wallet 😅👍
@vutesaqu 3 years ago
What are you using it for?
@DJ-Manuel 3 years ago
@@vutesaqu For Ceph clustering with about 6 servers (or more in the future), with about 16 SSDs (some of them NVMe) per server and a lot of HDDs, but multiple for redundancy reasons, and to connect to clients at 25/40/50/100GbE. The servers are used for data streaming, so the clients require minimal bandwidth (10-20 Gbit/s) all the time.
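A rough sizing sketch of why such a node ends up on 100GbE-class links; the drive counts and per-drive throughput figures below are assumed ballpark values, not numbers from this comment:

```python
# Rough per-node bandwidth estimate for a Ceph box like the one described.
# Drive counts and per-drive throughputs are assumed ballpark figures.
nvme_drives, nvme_gbps_each = 8, 3.0 * 8      # ~3 GB/s per NVMe drive -> 24 Gb/s
sata_ssds, sata_gbps_each = 8, 0.5 * 8        # ~500 MB/s per SATA SSD -> 4 Gb/s
hdds, hdd_gbps_each = 12, 0.2 * 8             # ~200 MB/s per HDD -> 1.6 Gb/s

raw_gbps = (nvme_drives * nvme_gbps_each
            + sata_ssds * sata_gbps_each
            + hdds * hdd_gbps_each)
print(f"Raw drive bandwidth per node: ~{raw_gbps:.0f} Gb/s")
# Ceph replication and recovery traffic multiplies this on the cluster network,
# which is why 100GbE-class links (broken out from a switch like this) make sense
# even when each client only pulls 10-20 Gb/s.
```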
@Vatharian 3 years ago
@@ServeTheHomeVideo 8x SFP56 to QSFP-DD adapters are very 1U-unfriendly, but if they work, 256 servers can be connected. With fat nodes this could serve a whole aisle. A little extreme, but the cost savings would be HUGE.
@0bsmith0 10 months ago
Marvell has since purchased Innovium, not surprising as Marvell really wants to stay at the #2 spot behind Broadcom.
@youtubecommenter4069 3 years ago
"...but realistically, we're not going to see 400 GbE adapters until we get to PCIe Gen 5.0 in 2022...", 8:17. You sure about that, Patrick? Could be sooner.
@ServeTheHomeVideo 3 years ago
400GbE is being deployed today, but with PCIe Gen 5 you can do it in a x16 slot, making it more accessible.
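The PCIe side of that answer works out roughly like this; nominal signalling rates and 128b/130b encoding are assumed, and protocol overhead is ignored:

```python
# Why a 400GbE NIC wants PCIe Gen 5: usable x16 slot bandwidth vs. line rate.
per_lane_gtps = {"Gen3": 8.0, "Gen4": 16.0, "Gen5": 32.0}
LANES = 16
ENCODING = 128 / 130  # payload fraction of the raw signalling rate (Gen 3/4/5)

for gen, gtps in per_lane_gtps.items():
    usable_gbps = gtps * LANES * ENCODING
    verdict = "enough" if usable_gbps >= 400 else "not enough"
    print(f"{gen} x{LANES}: ~{usable_gbps:.0f} Gb/s usable -> {verdict} for 400GbE")
# Gen4 x16 tops out around ~252 Gb/s, so a single-slot 400GbE adapter realistically
# needs Gen5 x16 (or an exotic dual-slot / x32 design).
```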
@haidarvm 2 years ago
@@ServeTheHomeVideo And how about NVMe drives that support 400 GbE read/write?
@tadehdavtian388 1 year ago
You connect this to storage servers? NAS?
@playdoh1975 3 years ago
now that's some I/O
3 years ago
What's the tool used for the traffic generation called? Is it a software or hardware solution?
@ServeTheHomeVideo 3 years ago
This was a Spirent solution.
@barney9008 2 years ago
If QSFP+ can go from 100 to 400 Gb/s, how fast can they go before you have to change the connector?
@ServeTheHomeVideo 2 years ago
To me the bigger question is co-packaged optics. We did a video on that years ago
@KeithTingle 3 years ago
"cool the optics"
@OVERKILL_PINBALL 3 years ago
What is crazy is that one day this will be in the bargain bin! lol
@Hobo_X 3 years ago
I can't even imagine what the price of something like this is.
@ServeTheHomeVideo 3 years ago
Usually not too bad. The hyper-scale companies get great pricing
@jfbeam 3 years ago
@@ServeTheHomeVideo Compared to the $25 and $199 switches sitting on my bench... sure. (And no, they aren't Netgear or Cisco SGs.) These things aren't going to be landing on eBay (etc.) for a long time, and when they do, they still won't be remotely cheap.
@hectororestes2752 3 years ago
@@ServeTheHomeVideo Come on, what's the price of the unit in this video?
@BDBD16 3 years ago
@@hectororestes2752 Similar Juniper equivalents come in around the 50K mark depending on your source.
@zipp4everyone263 3 years ago
A lower-level breakdown of the case for a 400Gb switch:

*Power consumption*
With fewer units you save money on the power bill. It doesn't matter if the replacement switch uses 5x the power of the five switches it replaces; the heat dynamics aren't perfectly scaled either. That means 5 units running at 100 watts each don't equal one unit running at 500 watts: the one unit running at 500 watts will end up using less power thanks to a more efficient power distribution system and better cooling. You have to account for losses to thermal mass and heat radiation as well.

*Heat management*
With fewer units running you can spend less money on the cooling systems, or at least scale down the effective cooling needed. See the note above.

*ROI with and without Pc and Hm*
An important factor in all of this is the potential ROI. If you spend $50,000 on one unit that replaces five or ten other units, the cost of cooling, management (which is an actual cost), licences, replacement stock, shipping, replacement time, downtime, fault-checking time, and loads of other concerns goes down. Sure, you can theoretically deploy 50,000 switches and have one person keep track, but you can't have one person service them all if something happens. Keep in mind that these switches are primarily for DC use (or really large nodes, although I don't think they would be used for anything other than DC node cases; you don't want to cut off too many users from one node due to heat, power, and redundancy issues), and fault checks are a real part of a NOC tech's job. If one person can serve the same capacity it would normally take 5 people to cover, you make that change happen. Meatbags are expensive.

*Physical space benefits*
Because it replaces several other units, over a span of 5 of these you can probably save an entire rack full of patch panels, cable management, lower-tier switches, etc. Remember, DCs don't grow on trees, they leaf-spine... sorry for that one... Space is always an issue.

*Smaller cable runs*
Fewer cables to keep track of means happier techs when shit hits the fan.

*Fewer hops / simpler logical topology*
This bundles well with the previous points. Remember that not all packets (or frames, for layer 2 devices) travel in straight north/south hops (to and from the internet); they often go between servers and switches inside the fabric. As an example of where this switch could be a good use of power: imagine you run a cloud provider. When you create a new node in your AWS system you basically rent cores and memory modules from a resource cluster. Imagine the structure you'd need to deploy to keep a business up and running if that business uses 400 cores and a few hundred terabytes of RAM. You can't just deploy 400 cores; you always need the capacity to give the user those 400 cores' worth of processing cycles. To do that, and to cut operating costs by renting out the same core several times (pooling the cores and only really selling the cycles), you need far more cores than any one user pays for. Complicated? Sure... The point is that you need a fast, capable infrastructure to link several massive clusters of machines. If you can remove 10 48-port 100Gb switches and replace them with about 4 of these (and divisions), you shave off a few interconnects, simplify the topology, and save valuable latency.
You deal with millions of operations per port per second; remove one nanosecond from each operation and you save one million nanoseconds of latency per port... Summa summarum: it's worth it, trust us.
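The latency arithmetic at the end of that comment, with purely illustrative numbers rather than anything measured in the video:

```python
# Illustrative aggregate-latency math for the consolidation argument above.
packets_per_port_per_second = 1_000_000   # "millions of operations per port per second"
saved_ns_per_packet = 1                   # shave one nanosecond per removed hop
ports = 32

saved_ns_per_port = packets_per_port_per_second * saved_ns_per_packet
print(f"Per port: {saved_ns_per_port:,} ns (~{saved_ns_per_port / 1e6:.1f} ms) of "
      f"cumulative latency removed every second")
print(f"Across {ports} ports: ~{ports * saved_ns_per_port / 1e6:.0f} ms per second")
# Fewer switch hops in the fabric compound across every flow that crosses them.
```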
@acruzp 2 years ago
You spent all this effort writing this wall of text, but you don't seem to understand basic thermodynamics. Funny how the people who lecture the most are usually the ones who know the least.
@zipp4everyone263 2 years ago
@@acruzp Well, first off, it wasn't that hard. Secondly, what exactly is it I missed? It's easy to tell people they are wrong if you don't have to explain why.
@acruzp 2 years ago
@@zipp4everyone263 you're breaking conservation of energy by saying that 5 units running at 100W don't use the same energy and don't produce the same heat as one unit running at 500W.
@davenz000 3 years ago
Great, but what does it cost, and is this just an engineering prototype?
@ServeTheHomeVideo 3 years ago
This is an Innovium painted version of a Taiwanese ODM switch. Innovium has over two dozen ODM/ OEMs the last time I checked. Generally they are geared toward selling to hyper-scalers. I think there are a few vendors using the Teralynx 7 that you can go purchase smaller volumes from, but this is more of the hyper-scale version we are looking at
@user-dg2nr5jv3y 3 years ago
Reminded me of that joke: Cool, but... can it run Crysis? ))
@Vatharian 3 years ago
@@user-dg2nr5jv3y It has Xeon D, so with CPU Rendering on... probably yes!
@nicolaslavinicki4029 3 years ago
@ServeTheHome Why aren't 2.5GbE or 5GbE popular yet? SSDs have been around for a decade!
@ServeTheHomeVideo 3 years ago
Low-speed client networking moves much more slowly than the data center.
@DLTX1007 3 years ago
Not only does the low-speed client side move slowly, 2.5GbE and 5GbE aren't used by most, and those were interim "standards" that only became reality recently. Very unfortunate, as I still want to pay
@ryanreich7635 3 years ago
Where do you see Cisco ACI being used vs. systems like this, which are more legacy networking? ACI usually uses Nexus 9K 10-40Gbps connections for leafs and 40-100Gbps for spines, along with the native firewalling that ACI provides.
@ServeTheHomeVideo 3 years ago
The easy answer is that Cisco, by design, is effectively irrelevant in the markets these serve. The hyper-scale folks have been on a mission for years to replace proprietary Cisco. Given the margin profile Cisco needs to maintain, it was less expensive to design big infrastructure without Cisco. That process has been happening for many years. Cisco is more focused on the higher-margin enterprise. AWS, Google, Microsoft, Facebook, and other hyper-scalers are not using ACI and effectively drive the server, storage, switch, and NIC industries at this point. They are also the driving force behind 25GbE and now 100/200/400/800GbE. That is why SONiC has gained so much traction.
@ryanreich7635 3 years ago
@@ServeTheHomeVideo I agree... Seeing quotes in my MSP practice to customers for $37k 100Gb GBICs is INSANE. I'd love to start pushing other products, but when so-called architects have such a hard-on for Cisco it's impossible to tell them otherwise.
@Vatharian 3 years ago
That's one clean PCB. Is the air guide 3D-printed? It doesn't look like injection mold plastic.
@godslayer1415 3 years ago
Content Free as usual
@platin2148 3 years ago
Isn't that a Spirent test switch? Those things are expensive af and you pay per port.
@ServeTheHomeVideo 3 years ago
I am aware. That is why we have been using Cisco's TRex on our lower-end router reviews.
@platin2148 3 years ago
@@ServeTheHomeVideo Ahh, I had the pleasure of having such a thing on hand when we developed our switch chip; sadly the UI looks a bit dated. But compliance is pretty good when you don't have the option to go to a networking feast or buy all that other hardware.
@jfbeam 3 years ago
Yeap. And yes, they are rather pricey. We used the VM version for much of our testing, but we did have much older (1G) hardware units. (Recycled long ago; they were just too slow, and the power supplies kept burning out. Rather expected of 20-year-old servers.)
@shapelessed 3 years ago
You know... Big companies don't really like buying new stuff only to find out it overheats and shuts itself down...
@privettoli 2 years ago
How much does it cost?
@ericnewton5720 3 years ago
Why do network switches always have network ports in the front while most servers have network ports in the rear?
@ServeTheHomeVideo 3 years ago
We use reverse airflow switches in the lab, so the ports are on the same side as the server ports. I think in this case it is because the design is limiting trace lengths between the switch and the QSFP-DD connectors. Given that is where most of the heat is generated, my sense is that you would want that on the cold aisle to get cooler air.
@johnmijo 3 years ago
Can you really have TOO much bandwidth ?!? I say no ;)
@raditiyavalendeto4112 3 years ago
Linus would love this
@ServeTheHomeVideo 3 years ago
This week we are doing some systems that are higher-end than what Linus typically uses. This is just the first of four in our "Big Week" series focusing on big compute, storage, and networking.
@davenz000 3 years ago
@@ServeTheHomeVideo Burn.
@ServeTheHomeVideo 3 years ago
Ha! Not at all. Jake at LTT showed me setting up the 25/100GbE switch they did a video on earlier this year. I sent him this one.
@tjmarx 3 years ago
LTT is a different niche from STH. LTT is more for "gamers" and reviewing new consumer-grade hardware. STH is more about servers, higher-end networking, and repurposing enterprise-grade hardware in the home/lab. They complement each other, they don't compete.
@ServeTheHomeVideo 3 years ago
@@tjmarx Totally correct.
@SirHackaL0t. 3 years ago
Don’t show this to Linus!
@ServeTheHomeVideo 3 years ago
I already sent it to Jake @ LTT since he gave me a preview of their 25/100GbE switch.
@RavingMad 2 years ago
Honestly, I feel you guys should rename the channel to ServeTheEnterprise... Other than your TinyMiniMicro series and the occasional home-network-worthy stuff, your videos are mostly geared toward enterprise gear... I'm not sure what led to the name ServeTheHome... What home? Don't get me wrong, I'm a subscriber and I love learning about these things, watching your videos and reading your articles. But the name doesn't make sense, that's all.
@davidjwillems 3 years ago
Why is this on ServeTheHome? Who is gonna put this in their HOME?
@leefhead1 3 years ago
To see what I can buy in 20 years.
@ServeTheHomeVideo 3 years ago
Over the last ~12 years we have grown a lot, as you would expect. We now cover the big server, switch, and networking products along with the edge side, and have done so for many years. At this point, this is somewhat akin to asking why the Wall Street Journal covers more than a single road in NYC :-)
@v4lgrind 3 years ago
I mean we have 40/50Gbit switches from the used market at home today. These are in our future.
@ServeTheHomeVideo 3 years ago
@@v4lgrind Totally. That is the idea. We have purchased 32x 100GbE switches for well under $1500.
@rivervasquez5150 3 years ago
Also, ServeTheHome is a callback to the Linux home directory, not home as in houses; it's listed on the About page of their site.