
You don't need Supercomputers for AI! 

Tech Enthusiast
16K subscribers
1.7K views

Published: 23 Oct 2024

Comments: 13
@TechEnthusiastInc 3 months ago
💥 Check out my NEW COURSE "Introduction to Enterprise IT [2024]" and learn the fundamentals of Enterprise IT in one go and one day! 💥 academy.techenthusiast.com/p/introduction-to-enterprise-it
@aryehbarron4067 11 months ago
Great video as usual Sir!
@TechEnthusiastInc 11 months ago
Thank you very much, Ary! 👍
@RossCooper-Smith 1 year ago
Another critical need for AI training is that the source data set has to be on flash. It's a small I/O, random read workload, across the entire training dataset. If you do the math, a single NVidia H100 GPU needs the IOPS of around 8,000 hard drives to keep it busy. You can't even use hybrid storage since caching doesn't work for totally random I/O. NVidia have a certification program for storage arrays, and for AI Training they don't certify anything that isn't all flash. And I totally agree with your conclusion, HPE have a really strong portfolio for AI workloads. They're the only vendor with a full stack in-house solution proven at every level of AI training and deployment. From Supercomputing to Cloud, Datacenter to Edge. They've made some very good strategic choices with their focus over the last few years.
@TechEnthusiastInc 1 year ago
Hi Ross! You are absolutely right, very good point! And thanks for all the detailed specs, appreciated. Indeed, there are quite a lot of critical requirements with AI compared to traditional workloads. Agreed. HPE has all it takes to be one of the key players in the field and it has shown it’s bold enough to make the needed risky first moves too. Very interesting to follow! By the way, congrats on the super interesting VAST Data Platform announcement! We need to get back to that. 😉
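The "8,000 hard drives per H100" figure in the exchange above can be sanity-checked with simple arithmetic. A minimal sketch, assuming a typical 7,200 rpm HDD sustains roughly 200 random-read IOPS (a figure not stated in the comment), which would put the implied per-GPU requirement around 1.6 million IOPS:

```python
# Back-of-envelope check of the "8,000 hard drives per H100" claim.
# Both constants below are assumptions for illustration, not vendor specs.

HDD_RANDOM_READ_IOPS = 200        # typical 7,200 rpm drive (assumption)
H100_REQUIRED_IOPS = 1_600_000    # implied by "8,000 drives" at 200 IOPS each (assumption)

drives_needed = H100_REQUIRED_IOPS / HDD_RANDOM_READ_IOPS
print(f"HDD equivalents per H100: {drives_needed:,.0f}")  # 8,000
```

An enterprise NVMe SSD delivering hundreds of thousands of random-read IOPS replaces thousands of such drives, which is why all-flash is the practical floor for this workload.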
@papi-jor9239 1 year ago
Hi Markus, Thanks a lot for your informative video again. Is the data stored with HPE Greenlake for AI in your own datacenter or in a datacenter of HPE? Thanks!
@TechEnthusiastInc 1 year ago
Thanks, my pleasure! The data will need to be stored close to the CPUs and GPUs for the fastest access, so it needs to be stored within “HPE Cloud”. In North America, HPE is using a Canadian co-lo service provider called QScale, where they are building their HPC/supercomputing facilities, and all the training data will be located there. I actually just made a video about all this. Check it out! HPE just announced AI public cloud! (with HPE GreenLake for Large Language Models) ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-dM7HxcPMDZo.html
@RossCooper-Smith 1 year ago
HPE GreenLake for Large Language Models is an HPE AI Cloud service that runs within HPE's own datacentres, but they also have on-premise solutions all the way from Enterprise to Supercompute scale. For example: HPE GreenLake for File (GL4F) is an on-premise enterprise solution running a software stack that's already proven for Top-10 HPC workloads and some of the world's largest AI Clouds. There's a UK deployment of that stack running a 60PB single namespace for data and 100,000 Kubernetes containers, all powered by around 15,000 HPE CPU & GPU compute nodes. GL4F will handle on-prem workloads from 200TB to 200PB and beyond. And of course HPE have their Cray supercompute division as well. Doesn't matter if you want on-prem or cloud, large or small, HPE have you covered. :-)
@svrangarao1224 11 months ago
Thanks for the information
@TechEnthusiastInc 11 months ago
My pleasure!
@ingridstrombeck8221 7 months ago
Thanks! (Swedish: "TACK!")
@TechEnthusiastInc 7 months ago
You're welcome! (Swedish: "Varsågod!") 😆