
Getting Started With PyTorch on AMD GPUs: Community & Partner Talk at PyTorch Conference 2022 

PyTorch
52K subscribers · 13K views

Watch Jeff Daily from AMD present his PyTorch Conference 2022 Talk "Getting Started With PyTorch on AMD GPUs".
This talk will cover everything a developer needs to know to get started quickly using PyTorch on AMD GPUs. The presentation will lay the foundation by introducing the ROCm open software platform as well as HIP (the Heterogeneous-compute Interface for Portability), AMD's dedicated GPU programming environment. Next, we will cover building and installing ROCm PyTorch from source or wheels, and how to port existing PyTorch applications and workloads to AMD GPUs. Lastly, we will conclude with recent performance results.
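For reference, installing a ROCm build of PyTorch from the official wheels and verifying that the GPU is visible typically looks like the sketch below. The ROCm version tag in the index URL is an example, not the talk's exact command; check pytorch.org's install matrix for the current one.

```shell
# Install a ROCm build of PyTorch from the official wheel index.
# The rocm5.7 tag is an example; use the version listed on pytorch.org.
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/rocm5.7

# ROCm builds reuse the CUDA device API, so existing "cuda" device code
# runs unchanged on AMD GPUs. torch.version.hip is set only on ROCm builds.
python3 -c "import torch; print(torch.version.hip); print(torch.cuda.is_available())"
```

On a machine with a supported AMD GPU and the ROCm driver stack installed, `torch.cuda.is_available()` returns True even though the hardware is AMD; no model code changes are required.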
Visit our website: pytorch.org/
Read our blog: pytorch.org/blog/
Follow us on Twitter: / pytorch
Follow us on LinkedIn: / pyto. .
Follow us on Facebook: / pytorch
#PyTorch #ArtificialIntelligence #MachineLearning

Science

Published: 5 Aug 2024

Comments: 40
@Xaelum · 1 year ago
If the new AMD gaming GPUs support this, I will be using them for my new computer. Great job!
@pieterrossouw8596 · 1 year ago
Really good to see some momentum with ROCm, it needs to catch up to CUDA in e.g. Windows/WSL/Docker support. I really like how the Radeon cards generally have more (often double) VRAM than their price matched Nvidia counterparts. If the software side can catch up that'd be great too.
@tempacc9589 · 1 year ago
I haven't tested these new drivers yet because I have Nvidia now, but my last ROCm experience was that it was all a bit janky and hard to use compared to Nvidia. If it manages to work as well as they claim, then this is pretty huge. AMD GPUs generally offer a ton more VRAM than Nvidia, so if this works without issues, you can get a lot more performance than Nvidia for cheaper. Not to mention that AMD-based mini PCs with APUs will be far easier to work with, since they just use x86 instead of Nvidia's ARM-based Jetson (and Jetson SBCs are extremely expensive for how little performance they offer).
@nitinkumarvyas · 1 year ago
Proud to be part of the team 🙂
@dragomirivanov7342 · 1 year ago
Can you advise on running PyTorch on the RX 6600? We don't need to buy an expensive workstation card just to test the waters. I guess you will lose a lot of hobbyist sales to the 3060.
@Flameancer · 10 months ago
Any insight on whether ROCm works on the 7800 XT?
@vandalsgo9239 · 1 year ago
ROCm and pytorch-rocm do not run on Windows, and you have to wait months or years for ROCm to add support for new cards like the 7900 XTX, then wait for the Linux kernel to pick up the driver. That's why no one uses ROCm. AMD needs to put it on Windows and attract more users to deep learning on their cards from the start.
@linhusp2349 · 1 year ago
No one uses Windows to run real, usable AI applications. Start using Linux instead.
@georgioszampoukis1966 · 1 year ago
@linhusp2349 What about commercial applications? It is not only the training part; the inference part matters too. If you are building a commercial application, you will most probably distribute it to Windows/Mac, and right now you have to resort to DirectML to cover different hardware configurations (mainly computers with AMD cards), missing out on performance. ROCm is massively underdeveloped, and it is a shame, because the hardware is perfectly capable. Nvidia has been the go-to for years due to CUDA; AMD has a chance to change that with ROCm, but it seems like they completely ignore it.
@spencerfunk6697 · 3 months ago
They never will. I predict AMD will go out of business soon enough.
@boogie8455 · 1 year ago
PYTORCH + AMD will be my SALVATION!
@gustavheinrich5565 · 1 year ago
I hope so. Currently it's the source of my headaches.
@me-pk2kb · 1 year ago
You apparently don't know about the previous generation of developers who used AMD GPUs for ML, and how badly they got burned.
@Techonsapevole · 1 year ago
Please benchmark Stable Diffusion: Nvidia vs. AMD Radeon.
@zeezhang77 · 1 year ago
Where can I try an AMD Instinct GPU? I searched Azure and cannot find AMD MI50-250 instances. Any suggestions for a lab or cloud environment? Thank you.
@fadoobaba · 4 months ago
ROCm is not available on Windows, sadly.
@knowledgelover2736 · 1 year ago
Can we get this on the RX 7900? VRAM is the most important thing for rapid AI proof-of-concept work on a workstation. AMD Radeon is positioned to capture this.
@holthuizenoemoet591 · 1 year ago
Keep an eye out for tinygrad; it might just do this.
@VTSTech_ · 1 year ago
Why didn't someone tell me about this sooner? Some dude in chat just casually asked, "Why don't you use ROCm?" Granted, it's Linux-only and I'm usually on Windows, but I'll use Linux if this works :P
@DavidConnerCodeaholic · 1 year ago
Yeah, just the change in warp size probably results in a large performance improvement, but I'm curious how the differences in GPU cache architecture translate to CUDA-style programming. There's no way that CUDA code can really use the "Infinity Cache", or at least there must be differences in how cache benefits and concerns are handled in CUDA code vs. GL/Vulkan code. It's a shame that Nvidia has such an iron grip on this stuff, because alternative architecture and cache designs can improve performance significantly for some workloads.

Until OpenMP offloading is available through ordinary software/system packages, it's not clear that getting off vendor-specific code will be possible (I'm probably mangling some acronyms here). However, the simplest software interfaces that permit code portability between compute architectures almost necessarily lose the distinguishing performance benefits offered by any specific architecture. I guess that means extreme performance gains are still only available to large engineering teams.
@prabhavkaula9697 · 1 year ago
sheeeeeeeeeeeeeeeeeeeeeesh
@georgioszampoukis1966 · 1 year ago
That is all great, but GPU compatibility with ROCm must be heavily extended. There is no reason why cards like the 6600 XT, 6700 XT and, more importantly, the 7900 XTX aren't supported. ROCm needs to exist on Windows as well, because right now you have to resort to DirectML to build commercial applications for Windows when using AMD cards.
@midnari · 1 year ago
I believe it's unofficially supported on the 6000 series.
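For what it's worth, the unofficial route for RDNA 2 (RX 6000) consumer cards that users commonly report is an environment-variable override that makes the ROCm runtime treat the GPU as the officially supported gfx1030 target. This is a community workaround, not something AMD supports, and it may not work for every card or ROCm release:

```shell
# Unofficial, unsupported workaround often reported for RX 6000 series cards:
# report the GPU to the ROCm runtime as the supported gfx1030 target.
export HSA_OVERRIDE_GFX_VERSION=10.3.0

# train.py is a placeholder for any existing PyTorch script.
python3 train.py
```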
@lekalotte2825 · 1 year ago
Stop using Windows. It's that simple.
@georgioszampoukis1966 · 1 year ago
@lekalotte2825 I don't know if you (along with the other people who say this) are capable of understanding, but firstly, as I said, when building a commercial application that will mostly run on Windows (90% of people use Windows, 2% Ubuntu), there is no ROCm support; and secondly, and most importantly, even on Ubuntu THERE IS NO SUPPORT FOR RDNA GPUS. I hope this makes it simple to understand :)
@Blujay188 · 1 year ago
@midnari How can I test this?
@paulojose7568 · 8 months ago
@georgioszampoukis1966 Stop using Windows.
@flo0778 · 8 months ago
I'll have to buy a new GPU for ML very soon. Is this real or not?
@ragnarlothbrok367 · 8 months ago
Nothing works; still no support for the 7800 XT as of 2023-11-27. Only some OpenCL libs, and it runs at half performance at best.
@OAlexisSamaO · 1 year ago
Please, AMD: needing a computer science degree and having to slay a dragon just to run language models locally is not fun! More support for other GPUs would be really appreciated. Also, ROCm for Windows when?
@francishallare204 · 9 months ago
Sell the AMD card, buy Nvidia.
@spencerfunk6697 · 3 months ago
I wanna fight this man
@NamitKewat · 1 year ago
Hey AMD, when will the following happen: ROCm + PyTorch on Windows with AMD's gaming GPUs? Look at Nvidia's stack: everything works on Windows using their gaming GPUs. Your performance charts are meaningless for a regular user, because that user doesn't have an Instinct GPU in their laptop/PC. At present, it looks like Intel will get there before AMD.
@JoKeR-hl1np · 1 year ago
It seems that they don't care at all; my next GPU will be from Nvidia for sure.
@NathanTice · 1 year ago
I am writing this to encourage the authors to make improvements. AMD should fire this guy and his entire team and start over, though it is unlikely that any freshly formed team could get it to work either. I suggest they rebuild the whole system and make something that works: not just for the devs, not just on their charts and graphs, but for normal people, on their own hardware, in their own homes. AMD does not have a working product of their own, and instead they lie. HIP is a band-aid that really doesn't hold up outside of the lab. I'd suggest nobody else buy these products until they work.

While it would be great if self-congratulatory drivel such as this were sufficient help to actually make things work, alas it is not. There's no evidence that any of this stuff works beyond this guy's team. Go and search for it: there are NO working examples. It's a horrid mess. For example, yes, there is hipify_python in the torch project, but there is no how-to demonstrating it working or how to use it. They haven't bothered to document it in their own code, much less show that it works. Just because it works for the development team doesn't mean much else.

Their documentation leaves a lot to be desired, like basic function. It appears to be merely a pretense of function, a false front, a shell game; there's really nothing here for ordinary people. And note that their platform, while some cases might translate, doesn't stand on its own: their own documents say to start with working CUDA code, which adds confusion. It isn't its own fully functioning system you can use at home. Their support system is woefully lacking; it's embarrassing. In this day and age, they should offer a bounty on working demos.
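On the hipify point: ROCm does ship command-line translators, and a minimal porting attempt for a standalone CUDA source looks roughly like the sketch below. The file names are placeholders; hipify-perl (the simple text-based translator) and hipcc come with the ROCm toolkit, and hipify-clang is the more robust compiler-based alternative.

```shell
# Translate CUDA API calls to HIP in a standalone source file.
# hipify-perl does textual substitution; hipify-clang parses the code.
hipify-perl my_kernel.cu > my_kernel.hip.cpp

# Compile the translated source with the HIP compiler driver.
hipcc my_kernel.hip.cpp -o my_kernel
```

Whether the translated code actually runs well on a given consumer card is a separate question, which is essentially the complaint above.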
@DanielHomeImprovement · 11 months ago
Thanks so much for the detailed post, mate. I was considering purchasing an AMD card for ML, given that, according to this guy, we didn't even need to change the PyTorch code. Big, big disappointment.
@raianmr2843 · 11 months ago
They're advertising waaay too early. I'd wait at least five more years before even considering AMD for ML and compute tasks.
@JoKeR-hl1np · 1 year ago
No Windows support, so Windows users with AMD GPUs will still suffer from very bad performance and a lack of support in Stable Diffusion. They make me regret buying an AMD GPU instead of an Nvidia GPU for the first time 😡
@bbnCRLB · 8 months ago
Install a Linux partition.
@JoKeR-hl1np · 8 months ago
@bbnCRLB I already have Linux, but even so, AMD GPU performance is worse than Nvidia's because of the lack of support from AMD 😡