
AMD GPUs are screaming fast at Stable Diffusion! How to install Automatic1111 on Windows with AMD

FE-Engineer
3.1K subscribers
42K views

Update March 2024 -- better way to do this
• March 2024 - Stable Di...
Alternatives for windows
Shark - • Install Stable Diffusi...
ComfyUI - • AMD GPU + Windows + Co...
Getting Stable Diffusion running on AMD GPUs used to be pretty complicated. It is so much easier now, and you can get amazing performance out of your AMD GPU!
Download the latest AMD drivers!
Follow this guide:
community.amd....
Install Git for Windows
Install Miniconda for Windows (add the directory to PATH!)
Open the Miniconda command prompt
conda create --name Automatic1111_olive python=3.10.6
conda activate Automatic1111_olive
git clone github.com/lsh...
cd stable-diffusion-webui-directml
git submodule update --init --recursive
webui.bat --onnx --backend directml
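For reference, the steps above can be collected into a single batch file. This is only a sketch: the clone URL is truncated in the description, so the full URL here is an assumption based on the folder name in the cd step (and the maintainer named later in the comments), and the file name is illustrative.

```shell
# Sketch only: collects the install steps above into a Windows batch file.
# ASSUMPTION: the truncated repo URL resolves to lshqqytiger/stable-diffusion-webui-directml
# (matching the folder name used in the "cd" step). File name is made up.
cat > install_a1111_directml.bat <<'EOF'
@echo off
rem Run from the Miniconda command prompt
call conda create --name Automatic1111_olive python=3.10.6 -y
call conda activate Automatic1111_olive
git clone https://github.com/lshqqytiger/stable-diffusion-webui-directml
cd stable-diffusion-webui-directml
git submodule update --init --recursive
rem First launch installs dependencies, then starts the web UI
call webui.bat --onnx --backend directml
EOF
echo "wrote install_a1111_directml.bat"
```

Save it next to where you want the install, adjust the environment name if you like, and run it from the Miniconda prompt.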
If you get an error about "socket_options"
venv\Scripts\activate
pip install httpx==0.24.1
Great models to use:
prompthero/openjourney
Lykon/DreamShaper
If looking for models on Hugging Face...
they need to be text-to-image
Libraries: check ONNX
Download the model from the ONNX tab
Then go to the Olive tab; inside Olive, use "Optimize ONNX model"
When optimizing, the ONNX model ID is the same one you used to download
Change the input and output folder names to match the location the model was downloaded to.
Optimization takes a while!
Come back and I will have some other videos about tips and tricks for getting good results!

Published: 26 Sep 2024

Comments: 565
@FE-Engineer · 9 months ago
Converting civitai models to ONNX -> ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-cDrirEtmEqY.html
@nomanqureshi1357 · 8 months ago
thank you i was just looking for it 😍
@_JustCallMeRex_ · 8 months ago
Hello. I would like to ask something regarding the installation process, at the point where it begins creating the venv folder in Stable Diffusion. I have an AMD graphics card, specifically an RX 580. I accidentally updated Stable Diffusion by adding the git pull command to the webui text file, and it broke Stable Diffusion because apparently it had installed torch version 2.0.1. Now, I tried deleting everything and starting out fresh by following your guide, but for some reason it keeps on installing torch version 2.0.1. How do I prevent this from happening? Is there any way to specify that it install torch 2.0.0 again? Thank you.
@OneTimePainter · 9 months ago
Finally a tutorial that makes sense and doesn't reference 3 other unnamed videos. Thank you!
@FE-Engineer · 9 months ago
Glad you liked it. I try to boil things down and go start to finish completing a task.
@Mewmew-y4m · 8 months ago
I know what youtuber you're referring to HAHAHA
@CahabaCryptid · 9 months ago
This new process is significantly easier to get SD running on AMD GPUs than it was even 6 months ago. Thanks for the video!
@FE-Engineer · 9 months ago
You are welcome! And I agree. It is a lot easier than before. And with ROCm on Linux you get to do everything. Hopefully they will finish getting ROCm onto windows.
@dumiicris2694 · 4 months ago
@@FE-Engineer What is the VRAM requirement on AMD? Still as high as before, or comparable with Nvidia now?
@bhaveshsonar7558 · 2 months ago
@@dumiicris2694 VRAM doesn't work like that
@dumiicris2694 · 2 months ago
@@bhaveshsonar7558 What are you saying, that it occupies fewer bytes? You're talking about the speed, that's why you're saying that. Yeah, VRAM has a bus of 192 or 256 or 512, but it still works the same: instead of the 64 bits of which you need 2 bytes, the video card needs the whole line. But my man, it works the same; that's the technology with clocks, so it needs more bytes of RAM, because that's the way VRAM works. RAM needs a different driver to be used as VRAM, so it has to be bigger so it does not split the line because of the slower speed.
@2ndGear · 6 months ago
All these other tutorials had me installing Python from GitHub for my AMD GPU. Did not realize there was a tutorial on AMD's own site for A1111! Well, time to start over and do it your way. Radeon 6600 XT and all I get for speed is 2 it/s while you're getting 20+. I have to start over, thanks for your tutorials!
@LadyIno · 9 months ago
I'm so gonna try this when I'm home. Just recently I tried running stable diffusion on my xtx (took me half an evening to set it up) and was immediately frustrated how slow everything was. It took around 10 minutes to create 4 batches. I'm a total beginner when it comes to ai art, but your guide is very well explained. I think I can copy your homework 😅 ty for the video!
@LadyIno · 9 months ago
Quick update: This worked perfectly! I can create 4 batches in less than half a minute. Sir, you are a genius. Thanks so much ❤
@FE-Engineer · 9 months ago
🙃 I’m glad it helped and worked without issue. As I state in the video. There are a lot of things like inpainting that do not in fact work appropriately. Right now unfortunately to get “everything” you really need to run it in Linux. But full windows support with ROCm should be coming soon ish. So hopefully when you get to the point of wanting the other pieces hopefully ROCm will work on windows and switching over should be easy! Have fun! And thank you for watching and the kind words!
@mgwach · 9 months ago
Hey FE-Engineer!! Thank you so much for this tutorial. Glad to see people helping out the AMD crowd. :) I do have a question though.... how come when you run the webui.bat initial setup command you don't get the "Torch is not able to use GPU; add --skip-torch-cuda-test" error? I get that every time I try to install it.
@FE-Engineer · 9 months ago
Because before I even get that error, I add directml to the requirements file for pip to install -- I do this specifically because I know this error is coming.
@Jyoumon · 9 months ago
@@FE-Engineer Mind telling how you do that? I'm extremely new to this stuff.
@xt_raven8842 · 9 months ago
I'm getting the same torch error, how can we fix it? @@FE-Engineer
@ИскандерКубышкин · 8 months ago
@@FE-Engineer Really, why don't you talk about it? And how can we do this?
@hobgob321 · 8 months ago
Hey, did you figure it out? I have the same issue. I tried editing webui-user.bat/sh but I still get the same error @@ИскандерКубышкин
@Lumpsack · 10 months ago
Thanks so much for this. I followed the guide, got the same error, and fixed it with the text in the description -- top man, this has saved me from having (yet another) fight with Linux :) Also, top tip on being patient, not my strong suit; thankfully for her my wife's at work, so I had to just pester the kids instead! Now, I too am on the 7900 XTX and not getting quite the same speeds, around 17 it/s, but still a big jump up, so thank you and I look forward to more of your vids. Incidentally, the nice thing here too is not seeing GPU RAM perma-maxxed!
@FE-Engineer · 10 months ago
Yea with previous runs a few months or like a year ago. The ram was like always maxed out and would just randomly “out of vram” which drove me crazy having to constantly kill and restart if I made one mistake with a button. Glad it helped! And 17it/s is still really fast overall. That’s still 100 steps in under 10 seconds easily. And probably only about 7 seconds.
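As a quick sanity check of the timing claim above, time for a run is just steps divided by the iteration rate:

```shell
# Back-of-envelope check: seconds for N steps = steps / (iterations per second)
steps=100
for rate in 15 17 20; do
  awk -v s="$steps" -v r="$rate" \
    'BEGIN { printf "%d it/s -> %.1f s for %d steps\n", r, s / r, s }'
done
# 15 it/s -> 6.7 s, 17 it/s -> 5.9 s, 20 it/s -> 5.0 s
```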
@FE-Engineer · 10 months ago
For getting up to 20 iterations per second. Just a thought you might consider undervolting your gpu slightly. Like a -10 or something. I think mine is at -10.
@Lumpsack · 10 months ago
@@FE-Engineer Thats cool, I'll take the slightly slower speed, but thanks - I get the difference now.
@NicoPlayGames96 · 10 months ago
Dude, thank you so much, you helped me a lot. I'm from Germany and this video was still very understandable, and thanks to you I can now have fun with Stable Diffusion :)
@FE-Engineer · 10 months ago
You are very welcome! I’m glad it helped! Next tutorial is for running with Rocm on Ubuntu.
@Sod1es · 3 months ago
the ONNX and Olive tabs aren't showing
@iskiiwizz536 · 4 months ago
I get the 'launch.py: error: unrecognized arguments: --onnx --backend directml' error at 9:23 even if I put in the two lines of code
@FE-Engineer · 4 months ago
Code has been updated. Lots of changes
@LeLeader00 · 10 months ago
Very good video, I was having trouble installing SD on my AMD PC, thank you
@FE-Engineer · 10 months ago
I honestly got tired of trying to get it up and running and coming back to it being entirely broken and figuring out how to get it back up. I figured others might appreciate skipping the junk (hopefully) and having a straight forward guide to just get it up and running! I’m super glad it helped and hopefully was easy and got you up and running quickly!
@AdmiralPipito · 10 months ago
RX 580 works: 20 sampling steps, 512x512 in 15-20 sec. You can add your args to work a little faster; 1024x512 or 1024x640 takes 40-50 sec
@FE-Engineer · 10 months ago
That’s awesome! Glad it is working! :)
@BhillyJhordyRamadhan · 10 months ago
I have a Nitro+ RX 580 SE, but my PC always restarts if I try to generate an image
@AmandaPeach · 10 months ago
@@BhillyJhordyRamadhan Get your AMD drivers, go to optimizations, and try to lower your frequency and power bars to -15% and try again. I had to "underclock" and "undervolt" my GPU so it wouldn't restart my PC and spin my GPU fans like an airplane fan
@BhillyJhordyRamadhan · 10 months ago
@@AmandaPeach it works, thanks bro
@caseydwayne · 8 months ago
I'm jealous. 2x 8gb RX 570s and I've just torched an entire week trying to get this running ... Tried Windows, Docker, Linux. Closest I've managed is 5-6s/it with pure CPU on Linux.
@SyntheticSoundAI · 10 months ago
If anyone wants a simple batch file to automatically start SD without typing it all in, here ya go. Just replace the cd directory with the location of your stable diffusion webui.
@echo off
call conda activate sd_olive
cd C:\Users\YOURUSER\sd-test\stable-diffusion-webui-directml
webui.bat --onnx --backend directml
@langinable · 10 months ago
Thank you! This helped.
@FE-Engineer · 10 months ago
;)
@andrexbeta2575 · 5 days ago
I swear to god! I just followed so many guides to get my RX 6700 XT running SD, and NOTHING!!!! I always crash against the same FUCKING WALL!! The torch-not-able-to-use-GPU shit. Tested so many things to fix that problem, but it just never disappeared. Tried so many commands in user-batch, version-requirements.txt, in cmd, Anaconda, different versions of Git and Python, NOTHING!!!!
@bog2132 · 2 days ago
AMD Windows users are just in the absolute worst position to get AI working. You have to do so many work-arounds to get it to kind of work, and the performance still isn't great. I recently tried Amuse AI which was made with AMD in mind, it works well, but the thousands of models that people have already trained for stable diffusion use a different format, and there doesn't seem to be a site where people have converted all of these to Onnx. So yeah, we are still left behind.
@andrexbeta2575 · 2 days ago
@@bog2132 I swear, the more I experiment with my GPU beyond videogames, the more I regret getting myself an AMD one and not an Nvidia....
@CreepyManiacs · 9 months ago
I didn't have the ONNX and Olive tabs, I just added --onnx in the commandline args and it seems to work XD
@FE-Engineer · 9 months ago
Strange. Well if it is working correctly then that is all you can ask for.
@SkronkJappleson · 10 months ago
Thanks, I got it going a lot faster on my RX 6600 because of this
@FE-Engineer · 10 months ago
That’s awesome! Glad to hear it helped!
@thaido1750 · 10 months ago
how many it/s does your RX 6600 have?
@SkronkJappleson · 10 months ago
@@thaido1750 after using it a bit I decided to just use my other machine with rtx 3060. I could get 2.5 it/s with the 6600 (a little more if i overclocked) and then you have to use their crappier sampling method as well. for comparison, rtx 3060 gets around 7 it/s without trying to overclock with xformers installed
@evetevet7874 · 9 months ago
@@FE-Engineer help, I get the "Torch is not able to use GPU" error plss
@xIndustrialShadoWx · 10 months ago
Thanks man!! Questions: How do you update the repository safely keeping all your models and extensions? Also, how do you reset the entire environment if things go tits up?
@FE-Engineer · 10 months ago
Just git pull to update. The folders for models and stuff should not have any problems. For resetting the entire environment: deleting the venv folder will blow away the virtual environment stuff; then you just need to reinstall all of those tools to get back to a hopefully working environment. In really bad cases you can of course move or copy your models and delete everything, but that would be if you were really having problems that you could not get working properly.
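The update/reset advice above can be sketched with placeholder folders (not a real install) to show that deleting only venv leaves models and extensions alone:

```shell
# Simulated layout standing in for a real install of this guide's folder.
# In a real install you would also run: cd "$ROOT" && git pull   (update;
# models/ and extensions/ are untouched by a pull)
ROOT=stable-diffusion-webui-directml
mkdir -p "$ROOT/venv" "$ROOT/models/Stable-diffusion" "$ROOT/extensions"

# Reset: blow away only the virtual environment; webui.bat rebuilds it on next launch
rm -rf "$ROOT/venv"

ls "$ROOT"   # models and extensions remain; venv is gone
```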
@optimisery · 7 months ago
Great tutorial, thank you very much! One thing worth mentioning is that a conda virtual env (like any Python venv) is not really a "virtual machine", but rather a bunch of env variables that are set/activated for the current shell, so that when you're running anything under this context, binaries/libs are searched for within the context. Nothing is really "virtualized".
@rwarren58 · 6 months ago
I am a rank beginner. I would appreciate an explanation of what you mean by "virtualized". Thanks if you reply. It's a month old thread.
@uffegeorgsen372 · 10 months ago
Thanks, friend. I was about to throw out my AMD card, but then your video came along. Everything worked; I sincerely thank you!
@FE-Engineer · 10 months ago
That is fantastic! I am glad it worked! :)
@kenbismarck4999 · 8 months ago
Hello, great comprehensive tutorial video, nicely done man :) Timestamping the vid would be awesome. Say, i.e.: 2:00 mins after the intro, beginning of the main part of the vid :) Best regards
@FE-Engineer · 8 months ago
Oh, you mean timestamping in the video description? YouTube pretty automagically separates the videos fairly well into sections. Which is crazy convenient.
@shakewait7612 · 10 months ago
Excited for new content! Well done! How to relaunch Stable Diffusion URL without reinstalling all over again? Also about the sampling methods - where are the other samplers like DPM++ 2M Karras?
@FE-Engineer · 10 months ago
Unfortunately right now, I don't know how many improvements will be made with this repo. It is still actively worked on, but as you can see, some of the samplers are simply missing. I also saw the person who built and maintains this repository is also helping out with SD.Next. I tried SD.Next...and did not find it working as well as I would like, but it is a bit simpler in some respects.
@shakewait7612 · 10 months ago
Like you I have a 7900 XTX and I REALLY enjoy the speed boost, thanks! Just wish there were more working features inside the UI. You had commented somewhere about best of both worlds. Excited to see what's in store @@FE-Engineer
@FE-Engineer · 10 months ago
Same here. As a quick update. I tried installing Rocm on windows subsystem for Linux. Cool. It worked. But you essentially can’t get the graphics card passed through or at least…not really. So then I was like…well what about Ubuntu desktop. Then we at least have a working gui basically. Ran into a lot of problems there. Blew away dual boot Ubuntu desktop. Accidentally wiped an entire hard drive that I was using…and now I’m doing it straight through Ubuntu server. That’s why I have mostly been quiet for a few days. Working through some issues so that hopefully I can get a good clean tutorial up showing something worth seeing!
@el_khanman · 10 months ago
@@FE-Engineer please let us know if you have any success with dual booting. I got it working nicely on my first try, then accidentally broke it, and have not been able to get it working again (even after countless fresh reinstalls).
@leandrovargas615 · 10 months ago
Thank you bro for the fix!!! A big hug!!!
@FE-Engineer · 10 months ago
Glad it helped!
@shanold7681 · 9 months ago
Thanks for the video, got it working for me. A little sad I'm only getting 15 it/s with my 5800X and 7900 XTX :( But still, 15 is way better than 2!!
@FE-Engineer · 9 months ago
Yea. My 7900xtx can get up to 18/19 or so. But most of the time depending on what else is running on my computer etc I tend to get somewhere around 15-17 and quickly degrading if I am building larger images bigger than 512x512. But even if you compare this to nvidia GPU’s this is still very fast. Running in Linux will give a lot more features.
@shanold7681 · 9 months ago
@@FE-Engineer Interesting, I'll have to spin up Linux on my system and give it a shot. Sadly there is a lack of ONNX models, it seems. Or maybe it's a limitation of the Windows version.
@animarkzero · 4 days ago
What is faster? This or ZLUDA with ROCm....??🤔🤔
@TrippyRiddimKid · 9 months ago
Trying to get this running on a 5600xt but no matter what I do I get "Torch is not able to use GPU". I could skip the torch test but from what I can tell that will just end up using my CPU. I know the 5xxx series can do it as Ive seen others mention it working. Any help?
@FE-Engineer · 9 months ago
Yep. Read the video description at the top. It will provide the help you need….
@AdemArmut-g5p · 9 months ago
Nice video. But I keep getting the error "Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check" and then it uses CPU only. I have a RX6800, found a lot of ppl with same issue but no one has a solution. Do you have any idea how to fix it?
@FE-Engineer · 9 months ago
Not currently. This just happened within the last few days. I’m working on figuring out what’s happening and how to fix it.
@htenoh5386 · 8 months ago
Any luck with finding a fix? Getting this too... @@FE-Engineer
@cmdr_stretchedguy · 2 months ago
Only if you have an RX 7900 XT; ROCm support is limited for the rest of the line, which runs at half the speed or less. For example, my RTX 3050 produces 800x1200 images in 20-30 seconds per image (native CUDA) versus my RX 7600, which does the same in 220-250 seconds per image (via directml). In gaming the RX 7600 has almost twice the performance of the 3050, but Stable Diffusion and many AI tools still rely on CUDA.
@diamondlion47 · 10 months ago
Good vid man, gotta show support for open source non ngreedia ai. Nice punk btw.
@FE-Engineer · 10 months ago
Haha thank you. I worked for a crypto company and the designers made punks for everyone who worked there. It’s on some chain, I don’t remember which one though to be honest. And yea. Nvidia cards are good. No doubts there. Their prices are just too high for me to stomach personally. :-/
@pnaluigi6344 · 10 months ago
You are a hero.
@FE-Engineer · 10 months ago
Hahaha thank you very much. I hope it helped!
@HeinleinShinobu · 10 months ago
your tutorial just broke my stable diffusion
@FE-Engineer · 10 months ago
I am sorry to hear that.
@HeinleinShinobu · 10 months ago
@@FE-Engineer Did a fresh git clone and got "there is no gpu for onnxruntime to do optimization" in the Olive optimization part. Using an RX 6600 XT GPU
@FE-Engineer · 10 months ago
Sounds like it can’t find your GPU. Maybe try installing or reinstalling GPU drivers?
@JamesAville · 9 months ago
Thanks for your time. Sadly, as a new RX 7700 XT owner, I'm going to sell it back; no tutorial worked out for me. Got so many errors not shown in any video or blog during installation.
@FE-Engineer · 9 months ago
Read the first few lines of the video description… they changed the code a few days ago and everything broke. There's an updated video showing how to get it to work with the current code.
@toketokepass · 6 months ago
I get "runtime error: found no nvidia driver on your system" in the console and gui. I also dont have the ONNX tab. *Sigh*
@FE-Engineer · 6 months ago
Check out new video. Just finished recording. Should be up in less than 24 hours.
@richkell1653 · 9 months ago
Hi, followed everything, got it running, and downloaded the model you use in the vid; however I am getting this error: models\ONNX-Olive\DreamShaper\text_encoder\model.onnx failed. File doesn't exist. My text_encoder folder is on my Z: drive, and in it are a config.json and a model.safetensors file. Any ideas? Btw, thanks for your work helping us poor AMDers out :)
@richkell1653 · 9 months ago
Managed to optimize another model and it works perfectly! Jumped from 2-3 it/s to 12.36 it/s!!! You SIR do ROCK!!!
@FE-Engineer · 9 months ago
If it is saying file not found it means it is looking specifically for a file and it is not there. Why it thinks there should be a file there is harder to figure out. Might just try optimizing again and during optimization it should put a file there
@zankares · 10 months ago
Tyvm, so helpful
@FE-Engineer · 10 months ago
Happy that it helped!
@jinxPad · 9 months ago
Great guide, I've been looking at getting back into some SD fun. One question regarding downloading the model from the ONNX tab: does it have to be from Hugging Face, or can you download from other sources like Civitai?
@FE-Engineer · 9 months ago
Yep you can optimize civitai models. I have a video about that.
@duckybcky7732 · 8 months ago
Every time I get to "collecting torch==2.0.1" my computer freezes, like I can't move my cursor and the clock on the computer is stuck. Is that normal?
@FE-Engineer · 8 months ago
No. That is not normal or at least does not happen to me. Sounds like resource constraints maybe?
@NínGhéin · 8 months ago
It still tells me that "torch is unable to use GPU" despite the fact that this was designed to use an AMD GPU and I have an AMD GPU.
@FE-Engineer · 8 months ago
Yep. Code changes over time. Read the video description. --use-directml
@athrunsblade846 · 10 months ago
What is the reason for having far less sampling methods on the AMD version? Or is there a way to install more? Thanks for the help :)
@FE-Engineer · 10 months ago
I have new videos coming out that should help with this. On the specific question you asked: I'm not sure how much support this directml fork of automatic1111 is receiving these days. I know the person who built it is also helping out with the SD.Next project, hence why I said I'm not entirely sure how much more support this fork is really getting. I hope this information helps. Also, I have new videos coming out about running natively with ROCm for AMD cards.
@Namelles_One · 9 months ago
Any chance to list models that can run with these? I tried Stable Diffusion XL and always get an "assertion error", so a list of models that can be used would be very helpful; with a slower connection it's just a waste of time to download and try blindly. Thank you!
@FE-Engineer · 9 months ago
I’ll have a new video coming out basically outlining which programs can do what with AMD cards because honestly it is all over the board.
@FE-Engineer · 9 months ago
As a quick note. I have not been able to get stable diffusion xl working on this one in windows.
@petrinafilip96 · 10 months ago
Whats considered fast? I do inpainting with batches of 4 pics (so I pick the best one) and it usually takes 3-4 minutes for one batch with RX 6800
@FE-Engineer · 10 months ago
I would say getting 8-10+ iterations per second is quite fast. Are you using olive optimized models? Are you increasing resolution when you do this? How many steps are you doing? I would expect 6800xt to perform a bit better to be honest.
@bluevaro505 · 7 months ago
Well, I followed the steps and my first runtime error was: RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check. So I added it and ran webui.bat --onnx --backend directml --skip-torch-cuda-test. Then it gave me the error: launch.py: error: unrecognized arguments: --onnx --backend directml. At a loss as to what to do.
@FE-Engineer · 7 months ago
--use-directml. Remove onnx and definitely remove skipping the torch cuda test.
@mustafaselimavci4713 · 7 months ago
I got an error, help me plz, I don't know what to do. I followed your every single step: raise RuntimeError( RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
@FE-Engineer · 7 months ago
Check the video description
@mmeade9402 · 7 months ago
I get a different error message. when running the webui.bat --onnx --backend directml command it runs through and I end up with RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
@mmeade9402 · 7 months ago
This is all just too much. Im vaguely computer literate, but I'm certainly no programmer. Until somebody makes this stuff more user friendly for somebody that just wants to download the software in windows, click install and start messing with it Im going to throw in the towel. Ive gone through 4 different A1111 forks today, and they all toss errors at me while following the destructions. Im sure my 7900xtx is probably making things more complicated. But thats ridiculous.
@FE-Engineer · 7 months ago
You can use shark from nod.ai. It is pretty much one button install. And it just kinda works. It’s just super slow. It compiles shaders to use. So it’s quite fast to generate an image. But then if you change models it recompiles shaders. If you change image size. Recompiles shaders.
@_gr1nchh · 5 months ago
Any update on this? I just got a 6600 last night (as a test card, I was planning on going with a 2070 super instead for a cheaper price) but I like this card and all of AMD's tools more than nVidia's. If I can get decent results out of this card I'll just keep it. Wondering if there's been any major updates regarding SD on AMD.
@mgbspeedy · 5 months ago
Had to add skip cuda test and it worked. But when I try to create an image, it still fails and says there is no NVIDIA GPU. Doesn't seem to recognize my AMD. Is an AMD RX 580 8GB too old to be recognized by Stable Diffusion?
@FE-Engineer · 5 months ago
The code has changed pretty significantly since I made this video. Zluda does not work for the RX 580 because it is not supported by the HIP SDK. But I believe the directML fork should work. You might need the argument --use-directml
@FE-Engineer · 5 months ago
Remove anything about onnx.
@mgbspeedy · 5 months ago
Thanks for the reply. I’ll give it a shot.
@Rain_Zima · 6 months ago
Just a heads up from the future past here: you still need to use Python 3.10.6, and you need to install the version of Anaconda that supports it. Some people get a "cannot find cmake in PATH" error; usually "conda install cmake" fixes this.
@lenoirx · 8 months ago
"Torch is not able to use GPU" any help? Im using an RX 5600 XT
@FE-Engineer · 8 months ago
Code changed. The command line argument is now --use-directml instead of the backend directml piece
@Code_String · 10 months ago
How does this compare to A1111 with ROCm on Linux? I tried to run the Olive optimization on my G15AE's RX6800m but it never picked that up. Was wondering if it's worth going through after getting a simple Ubuntu setup going.
@FE-Engineer · 10 months ago
7900xtx just recently got Rocm support. I’m going to try it out and see how they compare. I’m trying to get to that today if I can get enough time.
@bodyswapai · 10 months ago
How was it? I am thinking of buying the card but there aren't fair comparisons on the internet; all of them are XTX on DirectML, whereas I am looking for comparisons with Linux ROCm. @@FE-Engineer
@petrpospisil9193 · 10 months ago
Thanks a lot for this tutorial, works like a charm! I would like to ask you, do you suggest running command arguments like --medvram? I am using RX 6600 XT with 8GBs and although I had not enough time to test it, I was not able to generate above 512x512 without any cmd args. Also do you happen to know if we are able to convert local checkpoint models instead of relying on uploaded models on hugging face? Thanks a lot bro!
@FE-Engineer · 10 months ago
So about command line arguments. A year ago. Memory was very inefficient. After generating like a single image you would run out of memory so they were basically 100% required. Memory optimizations have improved significantly. So you might be able to get away with not running them at all. But if you find yourself getting out of memory errors I would suggest turning them on and trying it out for a while. Performance will take a bit of a hit but that might be preferable to stopping and restarting the ui if it happens a lot. Also. Working on a video for ckpt and safetensor conversions.
@Kierak · 9 months ago
@@FE-Engineer Did you finish your "ckpt and safetensor conversions" video? I really need it :3
@Drunkslav_Yugoslavich · 8 months ago
Is there any way to make the main folder not on C? conda create --prefix /path/to/directory makes a directory in the needed path, but when I do git clone it just downloads everything to my user folder on C :/
@FE-Engineer · 8 months ago
Go to the directory that you want to clone the repo in. Then do git clone there.
@Drunkslav_Yugoslavich · 8 months ago
@@FE-Engineer I can do that only through cmd, not conda. Sorry, I'm not really into any kind of programming, so it's kinda hard to me. I cloned it through cmd and done everything you showed in the vid next, but it just gives me "Torch is not able to use GPU" and gives me the command to ignore CUDA
@TokkSickk · 7 months ago
@@Drunkslav_Yugoslavich do --use-directml not --backend directml
@Drunkslav_Yugoslavich · 7 months ago
@@TokkSickk Doesn't work in the command line for webui-user.bat: "launch.py: error: unrecognized arguments: not --backend-directml"
@TokkSickk · 7 months ago
Huh? The current working directml flag is --use-directml, not the backend one. @@Drunkslav_Yugoslavich
@OriolLlv · 6 months ago
I'm getting an error after executing webui.bat --onnx --backend directml: fatal: No names found, cannot describe anything. Any idea how to fix it?
@FE-Engineer · 6 months ago
Read the video description
@OriolLlv · 6 months ago
Which part? I followed all the steps. @@FE-Engineer
@Kii230 · 4 months ago
@@OriolLlv Having the same issue. It's because lshqqytiger refactored onnx, so --onnx no longer works. Idk how to fix
@vexillen1877
@vexillen1877 10 месяцев назад
I got 3.5 it/s on a 6700 XT, which is 2x faster than the default. This is without using any run commands.
@Justin141-w3k
@Justin141-w3k 8 месяцев назад
How the heck do you get it to work with your AMD GPU
@FnD4212
@FnD4212 7 месяцев назад
I got "RuntimeError: Torch is not able to use GPU;" after 8:56 step
@FE-Engineer
@FE-Engineer 7 месяцев назад
Read the video description
@YanTashikan
@YanTashikan 4 месяца назад
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
@YanTashikan
@YanTashikan 4 месяца назад
did as it asked webui.bat --onnx --backend directml --skip-torch-cuda-test
@YanTashikan
@YanTashikan 4 месяца назад
And got this:

(Automatic1111_olive) C:\Users\User\stable-diffusion-webui-directml>webui.bat --onnx --backend directml --skip-torch-cuda-test
venv "C:\Users\User\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
Python 3.10.6 | packaged by conda-forge | (main, Oct 24 2022, 16:02:16) [MSC v.1916 64 bit (AMD64)]
Version: v1.9.3-amd-12-gf4b8a018
Commit hash: f4b8a018cc47289502587eb05826dec9b1e5127e
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
usage: launch.py [-h] [--update-all-extensions] [--skip-python-version-check] etc
@Andee...
@Andee... 10 месяцев назад
Works so far! However hires fix isn't working at all. Just does nothing. Any idea what that could be? I've made sure to put an upscale model in the correct folder.
@FE-Engineer
@FE-Engineer 10 месяцев назад
Honestly, with this directml onnx + olive setup, a lot of things don't seem to work appropriately. I'm currently looking at a bunch of alternatives, like using normal A1111 with ROCm, and also using SD.Next still with directml and onnx. So far I don't see many things that are nearly as fast though. Still working on it.
@Mr.Kat3
@Mr.Kat3 9 месяцев назад
So from my understanding, unless I'm missing something, so far I can't get any of my old embeddings (textual inversions) to work? I'm assuming they don't work in this version, which is a huge downside for me. Any info you have on this?
@FE-Engineer
@FE-Engineer 9 месяцев назад
Run ROCm in Linux if you want to be able to do everything. No optimizations or anything in Linux. Just pure regular automatic1111 and everything works.
@mrmorephun
@mrmorephun 9 месяцев назад
I have a (noobish) question... when I close the program and want to restart Stable Diffusion later, what command should I use?
@FE-Engineer
@FE-Engineer 9 месяцев назад
Check my other videos. I have one specifically about this. I think it is about 3 minutes total. It’s very short.
@Mr.Every1
@Mr.Every1 10 месяцев назад
I get the following error message when I try to optimize. What can I do?

AssertionError: No valid accelerator specified for target system. Please specify the accelerators in the target system or provide valid execution providers. Given execution providers: ['DmlExecutionProvider']. Current accelerators: ['gpu']. Supported execution providers: {'cpu': ['CPUExecutionProvider', 'OpenVINOExecutionProvider'], 'gpu': ['DmlExecutionProvider', 'CUDAExecutionProvider', 'ROCMExecutionProvider', 'TensorrtExecutionProvider', 'CPUExecutionProvider', 'OpenVINOExecutionProvider'], 'npu': ['QNNExecutionProvider', 'CPUExecutionProvider']}.
@FE-Engineer
@FE-Engineer 10 месяцев назад
I have never seen that error. Try reinstalling. Not really sure because that is not an error anyone else has mentioned.
@16thSD
@16thSD 9 месяцев назад
I got the error "FileNotFoundError: [Errno 2] No such file or directory: 'footprints\\safety_checker_gpu-dml_footprints.json' Time taken: 1 min. 56.9 sec." Not sure what I did wrong here...
@FE-Engineer
@FE-Engineer 9 месяцев назад
I have no idea either. The safety checker is usually used when optimizing if I remember correctly. But I have not seen this error.
@Bordinio
@Bordinio 8 месяцев назад
So the whole optimization limits the sampling methods, right? Karras etc. are gone.
@FE-Engineer
@FE-Engineer 8 месяцев назад
Sort of. ONNX doesn't have the ability to use those other samplers, so it's more of an ONNX format problem than the optimization itself, really. You can run without ONNX mode and then you will have them, but the performance hit is pretty big.
@Bordinio
@Bordinio 8 месяцев назад
@@FE-Engineer aye, thx for the reply!
@bysamuelneves
@bysamuelneves 9 месяцев назад
Here says: RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
@FE-Engineer
@FE-Engineer 9 месяцев назад
Yes. I updated the video description yesterday to let people know there was something wrong, ideally so I did not get comments saying "doesn't work" or "this is broken". I also went to the trouble of linking videos to alternatives that DO work currently. But for now you will have to wait for me to finish editing the video describing how to get around these errors.
@kidcoal33
@kidcoal33 9 месяцев назад
Comparing the download steps: with the command you can see in the video at 9:06, it installs torch-directml among other stuff (torchvision etc.). I followed everything as described but this particular package is missing, and I assume it has something to do with it. Is there a way to force it to install this package too?
@bysamuelneves
@bysamuelneves 9 месяцев назад
@@FE-Engineer Bro, I have a Radeon Vega 5 from AMD. I've spent more than a week having trouble trying to use SD. What can I do?
@Benji.v2
@Benji.v2 6 месяцев назад
hey there! it says cannot use Gpu.. help me pls
@FE-Engineer
@FE-Engineer 6 месяцев назад
Read the video description
@JahonCross
@JahonCross 4 месяца назад
Is this like a beginner guide to SD? I have an amd gpu and cpu
@hellcat3981
@hellcat3981 5 месяцев назад
"Feel free to go and try to make childreen" hahahaha
@FE-Engineer
@FE-Engineer 5 месяцев назад
I mean. If you have 10-15 minutes…might as well put that time to use 😂
@hellcat3981
@hellcat3981 5 месяцев назад
@@FE-Engineer Yeah I totally agree 👍 And thanks for the extremely helpful videos!
@FE-Engineer
@FE-Engineer 5 месяцев назад
You are welcome. I feel so bad. I need to post more videos more often. Life has been hectic my son is in and out of the hospital. But I really need to get more videos up more often. :-/. Thanks for the support. I really do appreciate it! 🙃
@hellcat3981
@hellcat3981 5 месяцев назад
@@FE-Engineer Ah shit, I'm sorry about that. I wish him the best, hope he gets well soon! Your son is way more important than creating videos; don't worry about YouTube and spend your time with your son!
@bysamuelneves
@bysamuelneves 9 месяцев назад
Remember: the Miniconda version must match the version of Python installed (both must be 3.10.6).
@FE-Engineer
@FE-Engineer 8 месяцев назад
I'm not sure that statement is accurate, but I'm not really big into conda, so it's potentially true. I've just never had a reason to care what version of Anaconda I had; I mostly just needed to make environments inside Anaconda with specific versions of Python to get things to work appropriately.
@Blue_Razor_
@Blue_Razor_ 10 месяцев назад
Downloading models using the ONNX tab is super slow, and stops about halfway through. Is there a way I can download the file off of huggingface and just copy and paste it into the ONNX-Olive folder? I tried it with a dreamshaper model I already had downloaded but it didn't recognize it.
@FE-Engineer
@FE-Engineer 10 месяцев назад
I have not had those problems. I’ve found those tabs to be really finicky and easily get messed up. Sorry I can’t be much more help. Keep trying though.
@astraleren
@astraleren 10 месяцев назад
thank you a lot! question though. can i also install the models from civitai? will they also be optimized by onnx? i will still try it out but yeah, just incase!
@FE-Engineer
@FE-Engineer 10 месяцев назад
You can definitely try. I had a lot of troubles with optimizing models without errors etc. ultimately I found that generally if I just very carefully downloaded from hugging face and optimized in an exact way it seemed to mostly work. But you can definitely try!
@Kudoxh
@Kudoxh 9 месяцев назад
So do I always need to download models from huggingface AND into the Onnx folder? Does it also work if I simply download a model and place it into models/stable-diffusion? I'm kinda new to this, sorry if it seems like a dumb question. Edit: another question: to start the webui, is it required to start it via anaconda using "...\webui.bat --onnx --backend directml", or can I simply start it by clicking on the webui-user batch file? And if so, I probably need to add --onnx --backend directml to the arguments section...?
@FE-Engineer
@FE-Engineer 9 месяцев назад
Anaconda is required if that is how you set it up (that is how I did it in my video, because it significantly reduces problems and provides consistency; plus, if you do not use it and make any kind of mistake, it is time consuming to try to fix). You can download models from just about anywhere. Not every model works 100% of the time, due to the different ways that people configure and encode some models. See the pinned comment on this video to get a better idea of how to convert models from civitai, for example.
@jakeblargh
@jakeblargh 9 месяцев назад
How do I optimize safetensors models I've downloaded from CivitAI using this new WebUI?
@FE-Engineer
@FE-Engineer 9 месяцев назад
How to convert civitai models to ONNX! AMD GPU's on windows can use tons of SD models! ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-cDrirEtmEqY.html
@LeLeader00
@LeLeader00 10 месяцев назад
What does it mean 😢 OSError: Cannot load model DreamShaper: model is not cached locally and an error occured while trying to fetch metadata from the Hub. Please check out the root cause in the stacktrace above.
@FE-Engineer
@FE-Engineer 10 месяцев назад
It seems to not be able to load the model. Did you download it. Then optimize it?
@Korodarn
@Korodarn 8 месяцев назад
I have a 3080 with 10GB of VRAM, and I've been wanting a 24GB card. In the view of those here, is this whole process worth it as an upgrade from a 3080 to a 7900 XTX? I'm considering saving and going for the 4090, but it's over twice the cost, especially going by real prices of available units, including used parts off marketplace, etc. I'm a Linux user by default, but I do play VR games in Windows 11 because VR is a little weak on Linux, and with UEVR making a lot of Unreal Engine games open in VR on Windows, I'm thinking I'll be spending some more time that way. So I'd ideally want to be able to use ComfyUI, Automatic1111, text-generation-webui and SillyTavern equally well in both places, like I can today dual-booting EndeavourOS and Windows 11.
@FE-Engineer
@FE-Engineer 8 месяцев назад
Is EndeavourOS Linux based? Sorry, I'm not familiar with it. Having a 3080, I would say upgrading to a 7900 XTX will not be worth it for you unless you plan on doing AI stuff on Linux, or you wait for ROCm to be finished on Windows. Once we get ROCm on Windows, yes, it will be worth it if your goal is to have 24GB of VRAM. But at the moment, no, switching to AMD to run AI on Windows is not going to be worth it. Not yet at least.
@Justin141-w3k
@Justin141-w3k 8 месяцев назад
9:07 I get the Torch is not able to use GPU error here.
@FE-Engineer
@FE-Engineer 8 месяцев назад
Check the newer video. Details in the video description. Like first line…
@Justin141-w3k
@Justin141-w3k 8 месяцев назад
Isn't SHARK garbage? @@FE-Engineer
@FE-Engineer
@FE-Engineer 8 месяцев назад
Shark is… not awesome, in my opinion. But for the code changes and fixes for automatic1111 directml, there's a long comment about it, and a link to the video showing how to fix the errors that come up, in the video description.
@phelix88
@phelix88 10 месяцев назад
Anyone know if training isnt supposed to work on this version? I get an error immediately after it finishes preparing the dataset...
@FE-Engineer
@FE-Engineer 10 месяцев назад
I don't know for sure. On the directml version there are a lot of things that do not work appropriately. You can also swap over to Linux, where AMD has the ROCm drivers that work, or wait another 2-3 months or so until AMD hopefully finishes porting ROCm over to Windows.
@PCproffesorx
@PCproffesorx 8 месяцев назад
I have an Nvidia GPU but have still looked into ONNX. My main problem with it is that it doesn't have LoRA support yet; you have to merge the LoRAs into your model first. If LoRAs are ever properly supported with the ONNX format I would switch immediately.
@FE-Engineer
@FE-Engineer 8 месяцев назад
Interesting take. My understanding of the underlying differences between the formats is pretty limited, so it is definitely curious to me to find out that ONNX, while lacking some almost rudimentary functionality, is that appealing. I'll have to dig in a bit more when I have some time.
@KamiMountainMan
@KamiMountainMan 8 месяцев назад
I installed on a laptop that has both AMD integrated and AMD dedicated GPUs. It automatically picks the integrated one which is much slower. Do you know by any chance how to set the right GPU for it?
@FE-Engineer
@FE-Engineer 8 месяцев назад
On my desktop I have both integrated and discrete. For me it always uses the right one. I don't know of any way to alter how it picks; I assume it just chooses the one with the most VRAM. But off the top of my head, no, I do not know. Sorry. :-/
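If anyone wants to experiment: A1111 builds generally expose a --device-id argument for picking an adapter by index, but whether this fork honors it for iGPU-vs-dGPU selection is an assumption I have not verified, so treat this purely as something to try:

```shell
:: HYPOTHETICAL: try selecting the discrete adapter by index.
:: Device numbering varies per machine; 0 is usually the first adapter enumerated,
:: so try 1 if 0 turns out to be the integrated GPU.
webui.bat --use-directml --device-id 1
```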
@Sujal-ow7cj
@Sujal-ow7cj 3 месяца назад
Will it work on the 6000 series?
@4MERSAT
@4MERSAT 10 месяцев назад
Why can't I change the image size in the Optimize tab? I can only select 512.
@FE-Engineer
@FE-Engineer 10 месяцев назад
You do not need to change it in that tab. The models you are optimizing are most likely trained on 512x512 anyway.
@TanMan07
@TanMan07 9 месяцев назад
So I was able to get this going... but now how would I run it again without having to redo the steps? New to all of this. I would like to have a shortcut on my desktop I could just click to run all this.
@FE-Engineer
@FE-Engineer 9 месяцев назад
Check my other videos for the one talking about activating conda and running sd in one script.
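The rough shape of that one-script launcher (the environment name assumes the setup from this video; the repo path here is a placeholder, so point it at wherever you cloned):

```shell
:: run_sd.bat - activate the conda env, then launch the webui in one go
call conda activate Automatic1111_olive
cd /d C:\path\to\stable-diffusion-webui-directml
call webui.bat --onnx --backend directml
```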
@FranciscoSalazar-qi4mw
@FranciscoSalazar-qi4mw 7 месяцев назад
I get an error where --onnx is not recognized
@FE-Engineer
@FE-Engineer 7 месяцев назад
Remove --onnx. The code changed again.
@ponyplower5963
@ponyplower5963 9 месяцев назад
Appreciate the video! Is there anyway to use models not on huggingface? Maybe use a model that was installed via a MEGA link or CivitAI?
@FE-Engineer
@FE-Engineer 9 месяцев назад
I’ve tested and run civitai models that I know work.
@FE-Engineer
@FE-Engineer 9 месяцев назад
Look here. How to convert civitai models to ONNX! AMD GPU's on windows can use tons of SD models! ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-cDrirEtmEqY.html
@Placeholder2-ku2wj
@Placeholder2-ku2wj 10 месяцев назад
running native through ROCm is still better. all these shamanistic rituals with model conversion are just hilarious workarounds
@FE-Engineer
@FE-Engineer 10 месяцев назад
Yes. I agree. That’s why I have guides showing how to do that. However a lot of people don’t want to run Linux or dual boot machines. And ROCm is not available on windows yet. So here we are…
@Slewed
@Slewed 9 месяцев назад
when I try to optimize the model it says ERROR:onnxruntime.transformers.optimizer:There is no gpu for onnxruntime to do optimization.
@FE-Engineer
@FE-Engineer 9 месяцев назад
There are multiple comments below about exactly this. That's normal.
@Slewed
@Slewed 9 месяцев назад
It doesn't work it says error on the website before it finishes optimizing the model @@FE-Engineer
@TheTornado73
@TheTornado73 9 месяцев назад
Open the Miniconda command prompt as administrator; otherwise it will give an error.
@FE-Engineer
@FE-Engineer 9 месяцев назад
Perhaps. It might also depend on whether the user running it is already an administrator.
@MrLight85
@MrLight85 6 месяцев назад
What AMD GPU do you use in the video?
@FE-Engineer
@FE-Engineer 6 месяцев назад
7900 xtx.
@leeyong414
@leeyong414 9 месяцев назад
hi, so after the "socket_options" error, I followed the venv\Scripts\activate and pasted the code but still get the same error. what am i doing wrong?
@FE-Engineer
@FE-Engineer 9 месяцев назад
Wrong version of httpx is installed.
@FE-Engineer
@FE-Engineer 9 месяцев назад
Change the version to, I think, 0.24.1; you can change it in the requirements.txt file. But the directions in the video do work, so you must have something else going on: you are not in a conda environment correctly, or you have the wrong version of Python. Lots of ways for things to go sideways. You have to follow the directions closely.
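For reference, the fix from the video description, run from inside the stable-diffusion-webui-directml folder (0.24.1 is the httpx version that worked at the time of the video):

```shell
:: activate the webui's own virtual environment, then pin httpx
venv\Scripts\activate
pip install httpx==0.24.1
```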
@peppernickelly
@peppernickelly 10 месяцев назад
Now if someone could do it with two AMD GPUs, that would really get some attention.
@FE-Engineer
@FE-Engineer 10 месяцев назад
Wouldn’t that be fantastic! :)
@peppernickelly
@peppernickelly 10 месяцев назад
@@FE-Engineer I see a lot of people asking for a generator that uses two or more GPUs. I just need an instance per GPU that I have. I think I found a way, but I have to code it myself; it'll probably take weeks, but that's okay.
@팟-i1r
@팟-i1r 9 месяцев назад
When I click (Optimize model using Olive) I get this error "AttributeError: 'NoneType' object has no attribute 'lowvram'"
@FE-Engineer
@FE-Engineer 9 месяцев назад
Yep. Something broke on it in the last few days. Trying to figure out what’s broken
@팟-i1r
@팟-i1r 9 месяцев назад
@@FE-Engineer Thanks ❤. I bypassed the other error regarding "this gpu can't use torch" or smth by adding --skip-torch-cuda-test to my .bat file. However, I am completely stumped on this new lowvram error; I tried adding either "--lowvram" or "--medvram" to my .bat file but it didn't help. Btw, there is a newer article on the AMD website: you still need an optimized model, but you no longer run the onnx backend thing. Now it's the same interface and features as Nvidia, DPM++ and everything, but since I can't get the optimized model to work, it's using my CPU instead.
@FE-Engineer
@FE-Engineer 9 месяцев назад
new video showing how to get it working!
@DoomDeer
@DoomDeer 6 месяцев назад
It would be kick4$$ if you could do a comparison on the XTX between windows with Microsoft Olive Toolset, and SD running on Linux with only ROCm! Want to know what runs better on my 7900XT.
@FE-Engineer
@FE-Engineer 6 месяцев назад
Onnx/olive > Linux full ROCm > Windows ROCm ZLUDA > Windows directml without onnx. But onnx has a lot of limitations. From my testing, that is what I have seen. For a Windows user I would recommend using ZLUDA, due to the onnx issues. For Linux I would just use full ROCm.
@DoomDeer
@DoomDeer 6 месяцев назад
@@FE-Engineer I'm buying an SSD to use MS Olive on Windows (my current SSD is 512GB, rip), but meanwhile I am installing a Linux distro to use full ROCm. Do you have a guide for installing SD on there? I'm gonna use the automatic1111 guide to install it on Arch Linux.
@7sonderling
@7sonderling 7 месяцев назад
Thanks, I followed everything step by step, but it's all gone to shit for me... the fix is not working? The error messages are "fatal: No names found, cannot describe anything" and "RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test ...", exactly the same as in Pinokio... and that's exactly what I don't want to add, because then only the CPU would be used. Still not working on AMD... I give up...
@FE-Engineer
@FE-Engineer 7 месяцев назад
I have a new video coming out with a much better way of doing this. Annoying setup but works much better!
@7sonderling
@7sonderling 6 месяцев назад
@@FE-Engineer great! i would love to try it out!
@deruzym84
@deruzym84 8 месяцев назад
It uses my CPU instead of my GPU. I'm running Windows 10 with my RX 6700 XT. How do I fix this? I got around 24.5 it/s with my Ryzen 5 5600, which was devastating.
@FE-Engineer
@FE-Engineer 8 месяцев назад
I am going to assume you meant seconds per iteration, as 25 it/s is really fast. Basically, watch my newer video about installing SD with the fixes for automatic1111, to install and use directML with ONNX. Or you can install ROCm on Linux and go that route. And then there are Shark and ComfyUI. Depends on what you wanna do, really.
@deruzym84
@deruzym84 8 месяцев назад
@@FE-Engineer I have watched your videos about using directml with ONNX. It gives me a "model not found" error even though I placed 2 of my own safetensors model files there.
@FE-Engineer
@FE-Engineer 8 месяцев назад
If you are running ONNX on Windows, you have to convert safetensors models to ONNX; you can not use a safetensors model until you convert it first. I have videos showing how to do this. I keep these videos separate from each other so that people can watch the ones that really apply and not be stuck watching stuff they do not care about. Sorry it took me a while to respond.
@deruzym84
@deruzym84 8 месяцев назад
@@FE-Engineer Oh, that explains my problem with the safetensors file. Tq
@deviwaazaa
@deviwaazaa 9 месяцев назад
Hey there, what if I already have some models downloaded? How would I "optimize" these?
@FE-Engineer
@FE-Engineer 9 месяцев назад
Check out my video about converting civitai models. It explains exactly what you are looking for.
@Robert306gti
@Robert306gti 8 месяцев назад
Swede here with a problem. I just installed the latest Miniconda because I thought that was what I'm supposed to do, but got an error at the end that it wanted 3.10.6, and I can't find an installer for that. The only one I find is for Python. Am I doing something wrong?
@FE-Engineer
@FE-Engineer 8 месяцев назад
I run it on python 3.10.6. I’m fairly sure that the 3.10.6 is regarding python version.
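A quick way to confirm which interpreter the environment actually resolved to, using the same commands as the setup in the video:

```shell
:: create and activate the env pinned to the version the webui expects
conda create --name Automatic1111_olive python=3.10.6
conda activate Automatic1111_olive
:: should report Python 3.10.6
python --version
```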
@FE-Engineer
@FE-Engineer 8 месяцев назад
And for the record. I stopped using mini conda mostly.
@Istock5
@Istock5 9 месяцев назад
Nice guide! How or what file do I run to open stable diffusion back up?
@FE-Engineer
@FE-Engineer 9 месяцев назад
So you can run through the same mini conda stuff as before, or you can watch my video about making a quick script that will launch it for you. Then you can just run that single batch file from mini conda and it will do it all.
@Istock5
@Istock5 9 месяцев назад
Thanks!
@void-qy4ov
@void-qy4ov 10 месяцев назад
Great tutorial, thanks! Finally it is working! Can you please show how to use models from civitai? And is it possible to use SDXL with this method?
@FE-Engineer
@FE-Engineer 10 месяцев назад
I tried and was unable to get sdxl working despite trying several different things. Also the refiner simply did not even seem to run at all.
@FE-Engineer
@FE-Engineer 10 месяцев назад
And absolutely. I’ll try to have a video up about using a model from civitai in the next few days
@thelaughingmanofficial
@thelaughingmanofficial 10 месяцев назад
I have to launch the WebUI from Explorer, otherwise it doesn't install. If I try from Miniconda I get an error about "couldn't launch python" when using the --onnx --backend directml options. Rather frustrating.
@FE-Engineer
@FE-Engineer 10 месяцев назад
Sounds like Python problems: multiple versions of Python, potentially, or not checking the "add to PATH" option when installing. Hard to say for sure though. :-/ Sorry, that is irritating.
@thelaughingmanofficial
@thelaughingmanofficial 10 месяцев назад
@@FE-Engineer I only have one version of python installed and it's 3.10.6 because that's the only version it seems to work with
@KasperJuul87
@KasperJuul87 9 месяцев назад
First of all, thanks for a great video. I'm stuck in the model optimization; it's been about an hour now with my computer freezing completely. Is this to be expected?
@FE-Engineer
@FE-Engineer 9 месяцев назад
Sure. If it has been going an hour with no changes, I would think something is likely wrong. In your UI, are those little orange boxes still spinning and rotating?
@KasperJuul87
@KasperJuul87 9 месяцев назад
@@FE-Engineer when i unfreezes they are spinning. So something must be going on.
@ZecVitaly
@ZecVitaly 9 месяцев назад
My PC keeps shutting off when im optimizing the model using olive @ 15:06. Any fix for this?
@FE-Engineer
@FE-Engineer 9 месяцев назад
You might be running out of actual ram. How much ram is on your machine?
@ZecVitaly
@ZecVitaly 9 месяцев назад
32GBs DDR4@@FE-Engineer
@yodashi5
@yodashi5 10 месяцев назад
It worked, thanks. Now there is another problem: I don't know how to open it again. Every time I use webui-user.bat I don't have the Olive window. Could you help?
@FE-Engineer
@FE-Engineer 10 месяцев назад
Take a look at this. Automatically activate conda and run your SD from one bat file! Super easy! ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-vKIqd5FDLn0.html
@wwk279
@wwk279 10 месяцев назад
Do ControlNet and extensions such as ADetailer, Roop, ReActor... work well on this AMD SD webui version? Do you get crashes sometimes when generating pictures?
@FE-Engineer
@FE-Engineer 10 месяцев назад
Wow, thanks for the tip. I'll have to try them out. I only get crashes when I put in silly values; if I try to resize by 4x, it will crash on me. But if I am generating 512x512 I can do it endlessly for hours without crashes or out-of-memory errors. So generally no crashes except when I do something kind of silly.
@wwk279
@wwk279 10 месяцев назад
I think it's not the right time for me to upgrade to the RX 7900 XT yet. Your video was great! Keep up the good work. I'm looking forward to your next videos.
@FE-Engineer
@FE-Engineer 10 месяцев назад
Oof. I spent like 5 hours fiddling with ControlNet. Got the right extension, downloaded the models, put them in the right places, and got previews to work. But at the end of the day it just ignores them for me. Are you using ControlNet 1.1? I tried open pose, canny, and numerous others, but it never actually applied them when generating images. And I played with virtually every slider and setting: weight, preference, inpainting, txt to img, img to img, etc.
@tomaslindholm9780
@tomaslindholm9780 9 месяцев назад
Seems like "Torch is not able to use GPU" is another common, unresolved issue. It relates to the CUDA version.
@FE-Engineer
@FE-Engineer 9 месяцев назад
Perhaps. I’ve been out and about with family and saw this crop up in the last few days. I have to dig in and see if I can figure out what’s going on with it.
@FE-Engineer
@FE-Engineer 9 месяцев назад
new video shows how to get it working
@tomaslindholm9780
@tomaslindholm9780 9 месяцев назад
Sooo much appreciated! I dug in right now and initially confirmed my Miniconda environment was set up with Python 3.10.13. Guess I updated it by accident before the failed attempt to build SD. @@FE-Engineer
@FE-Engineer
@FE-Engineer 9 месяцев назад
I ditched miniconda in favor of reduced complexity this time. I like anaconda. But for this. I wanted to just use default stuff and cut out as much spaghetti as possible.