
How to fix Automatic1111 DirectML on AMD 12/2023! Fix broken stable diffusion setup for ONNX/Olive 

FE-Engineer
Subscribers: 2.9K
Views: 32K

Update March 2024 -- better way to do this
• March 2024 - Stable Di...
Currently, if you try to install Automatic1111 using the DirectML fork for AMD GPUs, you will get several errors. This shows how to get around the broken pieces and be able to use Automatic1111 again.
Install Git for Windows:
gitforwindows.org/
Install Python 3.10.6 for Windows:
www.python.org/downloads/rele...
be sure to add it to PATH!
Clone Automatic1111 DirectML:
copy the URL for the .git repo
github.com/lshqqytiger/stable...
run Automatic1111 to create the virtual environment
run the webui-user.bat file -- it will give an error
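The clone-and-first-run steps above, as a Windows Command Prompt sketch (the repository URL is truncated in this description; lshqqytiger/stable-diffusion-webui-directml is assumed here, and your install folder may differ):

```bat
:: Clone the DirectML fork of Automatic1111 (repo name assumed from the truncated URL above)
git clone https://github.com/lshqqytiger/stable-diffusion-webui-directml.git
cd stable-diffusion-webui-directml

:: First run creates the venv virtual environment -- expect it to fail with an error at this stage
webui-user.bat
```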
fix errors:
venv\Scripts\activate
pip install -r requirements.txt
pip install httpx==0.24.1
edit the webui-user.bat file inside the automatic1111 folder, add these command line arguments, and save:
--use-directml --onnx
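Put together, the edited webui-user.bat would look roughly like this (a sketch based on the stock launcher layout; only the COMMANDLINE_ARGS line is the change described here):

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
:: DirectML backend plus ONNX/Olive support, as described above
set COMMANDLINE_ARGS=--use-directml --onnx

call webui.bat
```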
Inside the automatic1111 folder,
find the modules\sd_models.py file and edit it:
comment out lines 632-635 by putting a # in front of each line, and save the file
close Automatic1111
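The sd_models.py edit can also be scripted. The sketch below prefixes "# " on lines 632-635 as described above; the exact line numbers depend on which commit of the fork you pulled, so verify them in an editor first (comment_out_lines is a helper name invented here, not part of the webui):

```python
# Sketch: comment out a 1-indexed range of lines in a file, per the step above.
from pathlib import Path

def comment_out_lines(path, start, end):
    """Prefix '# ' on lines start..end (1-indexed, inclusive), preserving line endings."""
    lines = Path(path).read_text(encoding="utf-8").splitlines(keepends=True)
    for i in range(start - 1, min(end, len(lines))):
        if not lines[i].lstrip().startswith("#"):  # don't double-comment a line
            lines[i] = "# " + lines[i]
    Path(path).write_text("".join(lines), encoding="utf-8")

# Run from the automatic1111 folder; guarded so the sketch is safe to run anywhere
if Path("modules/sd_models.py").exists():
    comment_out_lines("modules/sd_models.py", 632, 635)
```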
Now you can run Automatic1111 by double-clicking the webui-user.bat file from Windows, or make a shortcut to it if you prefer.
Automatic1111 should now work the way it used to and should allow optimizing ONNX models.

Published: 25 Dec 2023

Comments: 694
@scronk3627 7 months ago
Thanks for this! I ended up not having to comment out the lines in the last step, the optimization worked without it
@FE-Engineer 7 months ago
You are very welcome! And that is awesome. I’m seeing mixed comments about it. Some people still run into it. Others seem to not run into it. Probably differences of what code people have pulled. But I’m glad it worked for you and you didn’t have to put in that hacky fix. Thank you for watching!
@PhilsHarmony 7 months ago
Thanks so much for this video, much appreciated! Finally a tutorial that actually got me past the "Torch is not able to use GPU" error. For programmers that might all be easy and self-explanatory; for everyone else it's a real hassle to stand in front of these errors that tell us nothing if we don't speak code. What I cannot wrap my mind around is why a multi-billion dollar company like AMD doesn't attach a fix like this at the bottom of their Stable Diffusion tutorial. They must be aware there are issues for many users during install. Anyways, we luckily got helpers like FE-Engineer.
@FE-Engineer 7 months ago
You are very welcome! Thank you for the kind words and support on RU-vid! I am hoping to be able to one day have a working relationship with AMD to be able to help folks even better with AI things as software and changes occur in the fast moving world of AI. Maybe one day? :)
@MrRyusuzaku 6 months ago
Tbh even programmers might not be able to get it in one go, especially if Python is not their thing. I'm one of them, though I had a tiny clue; this video helps a lot
@kampkrieger 6 months ago
@@MrRyusuzaku even if Python is their thing, you don't just know how this is supposed to work. I get the error that it can not find venv/lib/site-packages/pip-22.2.1-dist-info/metadata; I have no folder site-packages and I don't know what it is or where it comes from
@chris99171 7 months ago
Thank you @FE-Engineer for taking the time to make this tutorial. It helped!
@FE-Engineer 7 months ago
Glad that it helped! Thank you for watching and supporting my work. It means the world to me!
@EscaExcel 7 months ago
Thanks, this was really helpful it was hard to find a tutorial that actually gets rid of the torch problem.
@FE-Engineer 7 months ago
Glad this helped and worked! I agree. It’s difficult to find good information and things that actually work.
@ml-qq5ek 5 months ago
Just found out about olive/onnx, Thanks for the easy to follow guide, unfortunately it doesn't work anymore. Will be looking forward to see the updated guide.
@dangerousdavid8535 7 months ago
You're a life saver i couldnt get the onnx optimization to work but now its all good thanks!
@FE-Engineer 7 months ago
Yea. I suddenly started getting a lot of comments about things being broken. So as soon as I really could dig in and figure out how to at least get people up and running I tried to get something to help people get stuff at least with a shot of working for now.
@lurkmoar4 7 months ago
Thanks for the tutorial, it's the best one I've seen so far and everything works great
@FE-Engineer 7 months ago
You are welcome. The code changed a few days ago and most peoples stuff broke. And depending on what you had it could be fixed several ways. But this seemed the most bulletproof to make a video saying do this and it should work.
@patdrige 7 months ago
you Sir are the MVP. You not only showed how to install but also showed how to troubleshoot errors step by step. Thanks
@FE-Engineer 7 months ago
You are welcome! I’m glad it helped. Thank you for watching!
@patdrige 7 months ago
@@FE-Engineer do you have a guide or plan to have a guide for text2text AI for AMD ?
@joncrepeau3510 7 months ago
This is the only way with windows and an amd gpu. Other tutorials get stable diffusion running, but it is only on the cpu. I was seriously about to give up hope until i watched this. Thank you
@FE-Engineer 7 months ago
Glad it worked for you and you were able to get up and running! Thanks for watching!
@xCROWNxB00GEY 7 months ago
you are honestly my hero. I am still getting a lot of weird errors but everything is working.
@FE-Engineer 7 months ago
Yea. I mean. Fair warning. This literally disables some logic for lowvram flag. Like for real. Stuff could break. But maybe some things potentially breaking seems better than “well it straight won’t work” 😂
@xCROWNxB00GEY 7 months ago
I do prefer it running with constant warnings instead of errors which prevent me from running it. Do you still use it this way or are you using an alternative? I just started with AI images and could use any input. But because I have a 7900 XTX I feel like there are no options. @@FE-Engineer
@lenoirx 6 months ago
Thanks! After 3 days of trying workarounds, this guide finally worked out!
@FE-Engineer 6 months ago
Yea the changes they made really kind of were irritating and while they are documented. A lot of people didn’t really see how to fix it easily.
@le_crispy 6 months ago
I never comment on videos, but you fixed my issue of stable diffusion of not using my GPU. I love you.
@FE-Engineer 6 months ago
I’m glad it helped and fixed your problems! Thank you so much for watching!
@user-ni7gv2ty2o 7 months ago
Thank you! After 2 days of struggling the problem is gone!
@FE-Engineer 7 months ago
I’m glad it helped! Thank you for watching!
@yannbarral7242 7 months ago
Super helpful, thanks a lot!! The --use-directml in COMMAND ARGS was what I was missing for so long. You helped a lot here. If it can help others with random errors during installation and 'Exit with code 1' , what worked for me was turning off the antivirus for an hour.
@FE-Engineer 7 months ago
Interesting about the antivirus. Which antivirus do you use? Glad this helped. Most folks could probably just swap their command line arguments to --use-directml and it would probably work. Unfortunately when I make a video, in order to avoid a mountain of "doesn't work" comments, I try to balance between what will fix it for most folks and I try hard to include information that should fix it entirely for 99.99% of folks. And of course, people have different code from different points in time, different systems, different python versions etc. So I try hard to make sure that if nothing else, if you blow away and start over, this should work and fix your problems. Hence why even when a video could be like 1 minute with 1 small change, it can easily become 10+ minutes with the handful of "and if you happen to see this…" pieces. :-/ it is a difficult balancing act.
@FE-Engineer 7 months ago
Thank you for the kind words, I am glad this helped you. Thank you for watching!
@zengrath 6 months ago
Dude, you have no idea how long i been trying to get automatic on windows with my 7900xtx and conclusion always has been use linux from everywhere I go. but I seen AMD's post about how it works with windows with olive yet it wouldn't work for me and tried for hours. Your video finally got it working for me. The key part for me was not using the skip cuda command, nothing anywhere i've seen had showed me how to proper fix this until your video. I funny enough didn't have some of errors you did after that but maybe they updated some things since this video or i already installed some of those things already, not sure. thank you so much. I been using Shark and it's such a pain to use, every model change, every resolution change, requires recompiling, every lora and so on, it's a nightmare and it doesn't appear to have as many options as automatic. I hear that we still can't do lora training and all but hopefully that comes later.
@FE-Engineer 6 months ago
Yea. Honestly. I love that shark kinda just works. But I can not stand using it. It takes forever. If you want to just load a model keep an image size and just generate image after image it’s ok. But if you wanna jump around, change models, change images sizes. Then shark is crazy slow. You are very welcome! I’m glad you got it working, thank you so much for watching!
@zengrath 6 months ago
@@FE-Engineer I actually switched to comfyUI also thanks to your other video and while it may be a little slower, it's still good enough for 7900xtx and inpainting, img to img, lora's, and all that works which didn't on the automatic one. So much better for me then automatic on windows so far. but hoping it improves even more, i noticed some plugins not working when following a tutorial but at least basics work.
@rikaa7056 7 months ago
thank you man, all the other tutorials on youtube were useless. CPU was at 99%; now you fixed it and my GPU (RX 6600 XT) is doing the heavy lifting
@FE-Engineer 7 months ago
Nice! Glad it helped! Thank you for watching!
@tomaslindholm9780 7 months ago
You were quick in some parts, but for the "entire" server restart (terminate batch job Y/N) just hit Ctrl C. Thank you so much for this fix guide. Hero!
@FE-Engineer 7 months ago
😂😂 I was not going to make a video. But I decided to start from scratch and figure out all the trouble spots and I was like…mmmm…I’ll get too many comments about people having weird troubles and it’s hard to explain some of it over text. And yea. I try not to go too fast but I also try to avoid pointlessly lingering. I tend to record and get a bit too in depth and off topic and in editing I usually cut most of that out. Just the way I naturally talk versus the cleanest way to really do a how to. It’s a process. Plus I really am trying to get it down to more of a reflex and more natural for me to be able to do these without going too far off and also not going too fast. :-/
@tomaslindholm9780 7 months ago
Well, as a former system engineer I understand you must have a great deal of confidence to do what you did, considering the promising title of your video. Brave and good! Thank you for sharing your skill to the rest of us kamikaze engineers. (BTW, its inside a VM, just make it or break it seems like a good approach) @@FE-Engineer
@Thomas_Leo 6 months ago
Thank you so much! This was the only video that helped me. Liked and subscribed. 👌
@FE-Engineer 6 months ago
I’m glad this helped! Thank you so much for your support!
@orestogams 7 months ago
Thank you so much, could not get this maze to work otherwise!
@FE-Engineer 7 months ago
You are welcome! Glad it helped! Thanks for watching and supporting my work!
@user-kj5ux9ms6q 6 months ago
Thank you so much. You have helped so many people with this video!
@FE-Engineer 6 months ago
I’m glad it helped you!! Thanks so much for watching!
@MasterCog999 7 months ago
This guide worked great, thank you!
@FE-Engineer 7 months ago
You are welcome! Thank you for watching!
@Djangots 1 month ago
Many thanks ! Your guide was very helpful with just the first 10 minutes
@pack9694 6 months ago
thank you for helping me fix the olive issue you are amazing
@FE-Engineer 6 months ago
I’m glad this helped! Thank you so much for watching!
@jordan.ellis.hunter 6 months ago
This helped a lot to get it running. Thanks!
@FE-Engineer 6 months ago
You are very welcome! Thank you so much for watching. Glad it helped!
@nourel-deenel-gebaly3722 5 months ago
Thanks a lot for the tutorial, it worked but without the onnx stuff unfortunately, patiently waiting for your new video on this matter.
@FE-Engineer 5 months ago
It’s so much better too!
@FE-Engineer 5 months ago
Sorry about the wait though. Sick daughter. Sick son. Surgery for son. Hospitalization for son. It’s…busy. Plus work and life and all that. Still I do apologize whole heartedly for the wait.
@nourel-deenel-gebaly3722 5 months ago
@@FE-Engineer no need to apologize you're literally amazing, hope all goes well for you, although i'll still be using this old and slow method since the new video is for higher cards and I have more of a potato than a gpu 😅, but hopefully I upgrade soon and benefit from this ❤️
@DarkwaveAudio 7 months ago
Thanks man you helped a lot. much appreciated for your time and effort.
@FE-Engineer 7 months ago
You are welcome! Thanks so much for watching!
@Daxter250 7 months ago
that was... the best AND ONLY tutorial i found that worked. my 5700xt had no problems with stable diffusion half a year ago and then suddenly puff, some bs about tensor cores which i dont even have. all those wannabes on the internet simply said to delete venv and it will sort itself out. NO IT DOESNT. this tutorial here does! thanks for the work you put in! btw. with those onnx and olive models i even turned the speed from fucking seconds per iteration to 2 iterations per second O.o, while also increasing the image size!
@DGCEO_ 7 months ago
I also have a 5700xt, just curious what it/s you are getting?
@Daxter250 7 months ago
@@DGCEO_ 2 it/s as written in the last sentence. image is 512x512.
@FE-Engineer 7 months ago
I’m glad this helped! Thank you so much for the kind words! :) and thank you for watching!
@NA-oe5jj 6 months ago
you solved the exact problems i had. thanks for the true best tutorial.
@FE-Engineer 6 months ago
You are welcome, I am glad it helped! Thanks for watching
@NA-oe5jj 6 months ago
@@FE-Engineer woke up today to it no longer working. why computers be like this. :D when i attempt to use webui-users it says installing requirements then *** could not load settings. then tries to launch anyway and starts to complain about Xformers and Cuda. i think this settings load is the issue. ima fiddle at lunch and then after work tonight, i will do a complete reinstall again using your handy guide.
@metaphysgaming7406 6 months ago
Thanks so much for this video, much appreciated!
@FE-Engineer 6 months ago
You are welcome I hope it helped! Thanks for watching!
@LeitordoRedditOficial 5 months ago
If you get the error "RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check", then add "--use-directml --reinstall-torch" to the COMMANDLINE_ARGS in the webui-user.bat file through notepad; this way SD will run off your GPU instead of CPU. After using it one time, remove --reinstall-torch (remember, without the quotes). Please share in more videos to help more people.
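The comment above boils down to a one-time change to the COMMANDLINE_ARGS line in webui-user.bat, roughly:

```bat
:: First launch only -- forces the DirectML torch build to be (re)installed
set COMMANDLINE_ARGS=--use-directml --reinstall-torch

:: After one successful launch, change it back to:
:: set COMMANDLINE_ARGS=--use-directml
```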
@TPkarov 3 months ago
Thanks friend, you're a friend!
@LeitordoRedditOficial 3 months ago
@@TPkarov you're welcome, friend. To be honest with you, it's really best to generate 512x512 images. I have an RX 6800 XT and many times when I try anything bigger, it errors out at 99% and I waited all that time for nothing hahaha. But if it's an AMD 7000-series card it might work with bigger images.
@miosznowak8738 6 months ago
Thats the only solution I found which actually works, thanks :))
@FE-Engineer 6 months ago
I’m glad it helped and got it running :). Thanks so much for watching!
@amGerard0 6 months ago
This is great! Thanks for the excellent video, I went from ~4s/it to ~2it/s on a 5700XT! so *much* faster!
@FE-Engineer 6 months ago
Yay! I’m glad it helped! Thanks so much for watching!
@sanchitwadehra 6 months ago
my 6600xt went from 1.75 it/sec to 2 it/sec did you do something else could you please give me some recommendations on how you increased it so much
@amGerard0 6 months ago
@@sanchitwadehra Make sure you have no other versions of Python, only 3.10.6. When I had other versions it just didn't work; maybe if you have another version it's slowing it down? Other than that I'm not sure. I only use: set COMMANDLINE_ARGS=--use-directml --onnx
If you're using medvram or something, remove it and try again. Depending on the model it can be slower: a really big model can affect it, and certain sampling methods are faster than others too. Likewise, if you are trying to generate images bigger than 512x512 (i.e. 768x512) then it will struggle. Try another model and see if it's just that, then try every sampling method available (about 5 worked for me; the others were a total artifact-ridden mess).
@sanchitwadehra 6 months ago
@@amGerard0 maybe it's the python version problem as my pc has latest python version and i installed a1111 using a conda environment with python 3.10.6 and i also have comfyui on my pc in a different conda environment with python 3.10.12 maybe i will try doing the whole process again by deleting everything from my pc thx for sharing
@Azure1Zero4 7 months ago
Thanks a lot. Something to note is if you don't want onnx mode enabled just exclude it from the arguments.
@FE-Engineer 7 months ago
This is true. Removing ONNX allows the other samplers to be used. But for AMD users. The performance hit is a big one.
@Azure1Zero4 7 months ago
That's true. When I try running ONNX-converted models it won't let me adjust the size of the image for some reason, and they don't seem to be producing results nearly as good as non-converted. @@FE-Engineer
@Azure1Zero4 7 months ago
I think I might have figured out my issue. I think I'm maxing out my RAM and it's crashing the CMD prompt mid-optimizing. Do you think you could do me a favor and tell me about how much system RAM you use when going through the optimization process? Going to upgrade and need to know how much. @@FE-Engineer
@Azure1Zero4 6 months ago
In case anyone needs to know, I required 32GB of RAM to optimize models. So if you don't have that much, you're going to need to upgrade or download an already optimized model. Something I had to learn the hard way. Hope this helps someone.
@adognamedcat13 5 months ago
I was wondering if you could help me with an interesting issue. After following the steps, it kept telling me that --onnx was an unknown argument. I heard somewhere that with the newest update onnx didn't need to be included as an argument, so I deleted it from the webui-user.bat args line. To my surprise the webui booted as normal, though there was no sign of Olive or, predictably, ONNX. Now I'm getting around 1.5 it/s and I have the same exact card as you. On the plus side I have DPM++ 2M Karras now, and it does *technically* work, but the speeds are ridiculously slow. Thanks for any/all help and thanks a million for making this series, you're the man! Update: to clarify, the error I get if I try to launch it the way you described is ' launch.py: error: unrecognized arguments: - '
@Vasolix 5 months ago
I have same error how to fix that ?
@FE-Engineer 5 months ago
Remove --onnx. They changed the code. It is no longer necessary.
@williammendes119 5 months ago
@@FE-Engineer but when sd start we dont have Olive tab
@whothefislate 5 months ago
@@FE-Engineer but how do you get the onnx and olive tabs then?
@tomlinson4134 5 months ago
@@FE-Engineer I have the exact same issue. Do you know a fix?
@dr.bernhardlohn9104 6 months ago
So cool, many, many thanks!
@FE-Engineer 6 months ago
Glad it helped! Thank you for watching!
@evilivy4044 7 months ago
Great tutorial, thank you. How do you go about using "regular" models with the --onnx argument? Do I need to convert them, or should I look for and use only ONNX models?
@FE-Engineer 7 months ago
Have to convert them basically. Occasionally you can find some models in ONNX format but it is not really super common…
@lucianoanaquin4527 7 months ago
Thanks for the amazing tutorial bro! I only have one question, watching other videos I noticed that they have more sampler options, what do I have to do to have them too?
@FE-Engineer 7 months ago
The other samplers don’t work in this version with onnx and directml. So options are. Run ROCm on Linux. Or wait for ROCm on windows when we can just use the normal automatic1111 without needing directML and onnx.
@nickraeyzej578 5 months ago
This worked great in 12/2023. The latest automatic conversion changes simply do not work and end up corrupted at random. Even when it does work, it makes automatic conversions for every single switch you make to the image resolution. Is there a way to git clone the project version from when this method was perfectly fine, back when we had the ONNX/Olive conversion tab and one conversion per safetensor covered all resolutions on its own?
@ktoyaaaaaa 4 months ago
Thank you! it worked
@FE-Engineer 4 months ago
:):) glad you got it working! Thank you for watching!
@davados1 5 months ago
Thank you for the tutorial. So I got the webui to load up but I don't have ONNX and Olive tab at the top just not there oddly. Would you know why has webui changed and removed it?
@on.the.contrary 5 months ago
hi, I did just as the video and I got this problem "launch.py: error: unrecognized arguments: --onnx". Anyone got and fixed this?
@CANDLEFIELDS 5 months ago
Been reading all the comments for the past half hour... somewhere above FE-Engineer says that it is not needed and you should delete it. I quote: "Remove --onnx. They changed code. It is no longer necessary."
@nangelov 5 months ago
@@CANDLEFIELDS if I remove --onnx, I no longer have the onnx and olive tabs and can't optimize the models
@ca4999 5 months ago
@@nangelovSame problem sadly.
@nangelov 5 months ago
@@ca4999 I surrendered and decided to buy a used 3090. There are plenty available in Europe for about 600 euros and it is like 30 times faster, if not more.
@ca4999 5 months ago
@@nangelov The sad thing is, I somehow got it to work after 5 hours just to realize that the hires fix doesn't currently work with ONNX. Should've gone the Linux route from the beginning. That's a very solid price for a 3090, congrats ^^ Just out of curiosity, because I'm also located in Europe: where exactly did you buy it?
@GabiVegas-dj 5 months ago
Thanks man
@Meatbix75 7 months ago
thanks for the tutorial. It certainly got SD working for me, which is excellent. However the Olive optimisation doesn't seem to have any effect. I could run the optimisation even without modifying sd_models but it made no difference to performance; I'm getting around 3.3 it/s with either the standard or optimised checkpoint. I've gone ahead and modified sd_models but to no effect. GPU is an RX 6700 10GB, CPU is an i5 12400F, 32GB RAM.
@FE-Engineer 7 months ago
Hard to say. I’ve found a lot of issues with the optimization. It’s tricky to even get it to work a lot of the time. But if you aren’t seeing any performance increase with it running then my guess is that the model is optimized. If you grab other models you might end up seeing the performance boost. It just probably is that the one you have is already optimized. You are welcome, thank you so much for watching. Sorry I don’t have a better answer to this.
@Grendel430 6 months ago
Thank you!
@FE-Engineer 6 months ago
No problem! Thanks for watching!
@Doomedjustice 6 months ago
Hello! Thank you very much for the tutorial, it really helped. I wanted to ask is there any way to use generic sampling methods that are usual for Automatic1111?
@FE-Engineer 6 months ago
You have to drop ONNX. But you will take a big performance hit. Or use ROCm on Linux.
@nienienie7567 3 months ago
Hey man! Great tutorial! Got any ideas for VRAM usage optimization on AMD? I'm using a modified BAT like below:
set PYTHON=
set GIT=
set VENV_DIR=
set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:128
set COMMANDLINE_ARGS=--use-directml --medvram --always-batch-cond-uncond --precision full --no-half --opt-split-attention --opt-sub-quad-attention --sub-quad-q-chunk-size 512 --sub-quad-kv-chunk-size 512 --sub-quad-chunk-threshold 80 --disable-nan-check --use-cpu interrogate gfpgan codeformer --upcast-sampling --autolaunch --api
set SAFETENSORS_FAST_GPU=1
It helps a lot but I still wanna squeeze out more. I'm using an RX 7600 8GB VRAM, 32GB RAM
@arcadiandecay1654 6 months ago
This has been a lifesaver, thanks! One thing I did notice after I got this working (perfectly, actually) is that there are some sampling methods missing, like DPM++ SDE Karras. Do you know if that's that something that could be manually installed? I tried doing a git clone of the k-diffusion repo and doing a git pull but that didn't get them to show up.
@FE-Engineer 6 months ago
Yea. They don’t work with ONNX. :-/
@arcadiandecay1654 6 months ago
Oof lol. Thanks! Well, I'm going to count my blessings, since I was floundering before finding this tutorial. I have Linux on a couple other disks and one of them is Ubuntu, so I'm going to install it on that, too.
@sanchitwadehra 6 months ago
wow thanks dhanyavad
@FE-Engineer 6 months ago
You are very welcome! Thanks so much for watching!
@michaelbuzbee5123 7 months ago
I was having trouble with my A1111 being slow so searching around I found your fix video and decided to do just a clean install. I already downloaded a bunch of models though, how does one run them through onnx? And I am assuming I can no longer just add the models to the stable diffusion folders anymore? I think my PC specs are the same as yours.
@FE-Engineer 7 months ago
So you need to optimize them for Olive and ONNX. I have a pretty short video about this. You should be able to just optimize them from your normal models folder. Once optimized they will be in onnx or olive-cache I think are the folder names. But yes you can use them. Just not SDXL models. I have yet to get SDXL to work correctly with directML and ONNX. :-/
@aadilpatel6591 7 months ago
Great guide. Thanks. What are the chances that we will be able to use reactor (face swap) or animatediff with this repo?
@FE-Engineer 7 months ago
You are welcome! Thank you for watching! My guess is not very good…most of the extensions don’t play well with ONNX and directml. Plus my guess is that no one is really working on trying to get them to work with ONNX and directml really. :-/ You can always try. I just have had very little luck with very many extensions that like “do things”.
@aadilpatel6591 7 months ago
@@FE-Engineer will they be usable once ROCm is ready for windows?
@amrkhaled5806 6 months ago
Great Video. Finally, it works after three days of watching tutorials and searching the internet. I have a small issue though. When generating images it uses my iGPU instead of my AMD GPU, I've tried adding this argument --device-id 1 to the webui-user file, now it uses my AMD GPU however I've noticed in the task manager that it spikes to 100% for a second then it returns back to 0% then back to 100% and so on after that the AMD software pops up with a report an issue button and the image comes out grey. What causes this problem and how do I fix it? P.S. I have an AMD Radeon 530 GPU
@FE-Engineer 6 months ago
Might try some of the settings like medvram. Is it just the GPU that is spiking hard? It sounds like it is actually overloading the GPU and then the GPU is basically crashing. I have not encountered this personally. So it is hard for me to say for sure. But try some of the other vram settings and also potentially ram setting to see if that helps.
@mgwach 7 months ago
Thanks!! Got everything up and running. Question though.... do you know if LoRAs are supposed to work with Olive yet?
@FE-Engineer 7 months ago
No idea. My guess would be no. And to be clear. I am 99% sure ONNX does not care but automatic1111 with directML is probably not setup to support it most likely.
@mgwach 7 months ago
@@FE-Engineer Gotcha. Okay, thanks for the response. :) Yeah it seems that whenever I select a LoRA it's not recognizing it at all and none of the prompts make any difference for it.
@NXMT07 5 months ago
Thanks for the tutorial, it really did work with my RX 580, albeit very slowly. Can you please make a tutorial on how to use huggingface diffusers with automatic1111? I've tried to find the safetensors file and even converted the diffusers into one, but to no avail.
@FE-Engineer 5 months ago
Last I knew. Most of the additional pieces of automatic 1111 will not work with ONNX. They might work with only directml. But it has a big performance penalty. Overall for AMD. Your best bet right now is ROCm on Linux. Slightly slower than onnx and olive but all the functionality works correctly. Also nice that you don’t have to fiddle with converting to onnx and the headache that comes with all of that and what does and does not work etc. :-/
@NXMT07 5 months ago
@@FE-Engineer well I heard that ZLUDA is enabling CUDA on AMD GPUs, so ONNX shouldn't be a problem after a period of development on Windows. I have managed to play around with it and can confirm it does indeed work with CUDA-related programs; haven't got it to work with Automatic1111 though. Still, my trouble with the huggingface diffusers remains unsolved. I think it is an entirely new problem
@user-cw8pm3ox1q 6 months ago
Thanks so much for the Video! I wonder why do I need Internet connection when converting "normal" models(with safetensors file extension name). Due to my poor Network, python always raises "ReadTimeout" error whenever I click the "Convert & Optimize checkpoint using Olive" button. Do I need to download something else to convert a model? I think I only need my own GPU to compute.
@FE-Engineer
@FE-Engineer 6 months ago
That is interesting. I did not know it needed to get anything from the internet. I am not sure to be honest. Are you running it on like an old spinner hard drive? Is it possible that the read timeout is from your disk drive?
@chrisc4299
@chrisc4299 7 months ago
Hello, thank you very much for the video. I have a question: how can I use a VAE with the optimized models? Do you have to transform them? I appreciate your help, since placing the VAE in the regular folder does not apply to the generation.
@FE-Engineer
@FE-Engineer 7 months ago
You will need to run ROCm in Linux to get full functionality like that.
@nangelov
@nangelov 5 months ago
Sorry to bother you. I've done everything so far, except that when I start the webui, the interface loads but there are no ONNX or Olive tabs. Everything is slow on the RX 6800 XT (1.3 s/it). If I enable ONNX in the settings, I get a missing positional arguments error and I can't generate anything. Someone mentioned rolling back to an older UI version, but I don't see how to do that; there are no different versions for this fork.
@tmiss17
@tmiss17 7 months ago
Thanks!!
@FE-Engineer
@FE-Engineer 7 months ago
You are very welcome! Thanks for watching!
@mjtech1937
@mjtech1937 6 months ago
This is a great tutorial. The it/s speeds I'm getting with my AMD 7900 XTX are sick, faster than Midjourney. The only question I have is: has anyone got inpainting working? Otherwise this is an amazing solution for AMD users.
@FE-Engineer
@FE-Engineer 6 months ago
It works without issues if you use ROCm on Linux; overall speed takes maybe a 10% hit for me. Unfortunately this setup uses DirectML and ONNX with a lot of optimizations in place, and those same technologies are somewhat less developed as far as extensions and things just working. So basically, until ROCm is on Windows, you kind of have to pick your poison: dual-boot and run Linux, or one of the various ways to do it on Windows, all of which have some serious drawbacks.
@waltherchemnitz
@waltherchemnitz 2 months ago
What do you do if, when you run venv, you get the message "cannot be loaded because running scripts is disabled on this system"? I'm running the terminal as Administrator, but it won't let me run venv.
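That message usually is not an admin-rights problem; it is PowerShell's script execution policy blocking venv\Scripts\Activate.ps1. A sketch of two common workarounds, assuming a default Windows PowerShell setup:

```powershell
# Option 1: allow locally created scripts for the current user (run once in PowerShell):
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser

# Option 2: sidestep PowerShell entirely and activate the venv from cmd.exe instead:
#   venv\Scripts\activate.bat
```

After either change, re-run venv\Scripts\activate and the pip commands from the video.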
@magnusandersen8898
@magnusandersen8898 5 months ago
I've followed all your steps up until the 8:00 minute mark, where after running the webui-user.bat file I get an error saying "launch.py: error: unrecognized arguments: --onnx". Any ideas how to fix this?
@FE-Engineer
@FE-Engineer 5 months ago
Remove --onnx
@RobertJene
@RobertJene 5 months ago
10:20 use Ctrl+G to jump to a specific line in notepad
@BOIWHATmusic
@BOIWHATmusic 6 months ago
I'm stuck on installing the requirements line; it's taking a really long time. Is this normal?
@FE-Engineer
@FE-Engineer 6 months ago
Depends on internet connection and some other things. But yes. It is not exactly fast.
@DigitalID234
@DigitalID234 7 months ago
work and thanks
@FE-Engineer
@FE-Engineer 7 months ago
You are welcome! Thanks for watching! :)
@Krautrocker
@Krautrocker 7 months ago
Soooo, I initially installed Automatic1111 using your first video on the matter, which was troubleshooting the official guide. Before I tear that down and reinstall the whole jazz, what exactly is different? Does this fix lift the limitations (like high-res stuff not working) or is it 'just' about running it more stably?
@FE-Engineer
@FE-Engineer 7 months ago
No, there was an update recently -- for many folks it broke. In the video description I tried to be clear saying if your setup works fine, don't bother with any of this. This is just to get things working for folks who got a new github update to the code and everything entirely broke and they were not able to use it at all.
@nextgodlevel4056
@nextgodlevel4056 7 months ago
Great tutorial, but I have a question: when I try to optimize some other stable diffusion models, they optimize correctly, but the output images are not very clear; they always come out foggy. Also I can't generate images larger than 512x512. The other way I do this is by upscaling the 512x512 images within stable diffusion, and that gives very good output as well. My GPU: 6750 XT
@FE-Engineer
@FE-Engineer 7 months ago
If the image looks foggy like that. It likely means you need to run a vae with the model. I don’t remember offhand if I was ever able to get a vae to work properly with auto1111 on windows though. Sorry.
@mojlo4ko998
@mojlo4ko998 7 months ago
legend
@FE-Engineer
@FE-Engineer 7 months ago
😂 thank you! I hope this helped!
@macnamararj
@macnamararj 7 months ago
Thanks for the tutorial! I saw a decrease in the generation time, but it's still showing around 3.0 it/s on both optimized and non-optimized models. Anything I can do to improve the generation? And how can I add the new sampling methods like DPM++ 2M SDE Karras?
@FE-Engineer
@FE-Engineer 7 months ago
Those other samplers don’t work in this with ONNX. I forced them on. It broke. :-/ Hmm that is strange that you don’t see a change in speed on optimized vs unoptimized. Makes me think something is fishy.
@macnamararj
@macnamararj 7 months ago
@@FE-Engineer I was breaking my head trying to make the sampling work, but no success. I've tried a fresh install, and the speed is still around 3 it/s. There is a huge difference in speed using --onnx though: it used to take 1 min to generate a 512x512 image, now it takes around 6 s. So I think it's a big win! Again, thanks for the video.
@markdenooyer
@markdenooyer 7 months ago
Has anyone gotten past the 77 token limit ONNX DirectML on the prompt? I really miss my super-long prompts. :(
@FE-Engineer
@FE-Engineer 7 months ago
Not with this version on windows yet. :-/
@wilcoengelsman8159
@wilcoengelsman8159 5 months ago
Thank you for the guide; it is, however, already slightly outdated. I did manage to get everything working using this tutorial, though. When I use Olive/ONNX instead of just DirectML, my image has a lot more noise, even on the same sampler. Is there something I can do about that? Also, generation larger than 512x512 crashes the ONNX implementation.
@FE-Engineer
@FE-Engineer 5 months ago
So you don't need to use --onnx anymore in the command arguments when launching. ONNX has a lot of peculiarities, and most things other than generating an image do not work properly with it, sadly.
@mrhobo7103
@mrhobo7103 7 months ago
Great tutorial. Mine stopped working a few days ago and I couldn't find a fix anywhere. Although, for some reason, generating an image now makes my PC slow to a crawl, and it didn't do that before it broke. The image generation itself is still fast though. 6600 XT
@FE-Engineer
@FE-Engineer 7 months ago
Image generation makes your pc slow down? Interesting. Did you previously use any unusual flags?
@FE-Engineer
@FE-Engineer 7 months ago
I would not be surprised about this during like model optimization. But image generation it does surprise me a bit…
@macnamararj
@macnamararj 7 months ago
@@FE-Engineer Same here, it slows down too; this didn't happen with the non-ONNX/Olive setup.
@user-db9pl9oh4b
@user-db9pl9oh4b 6 months ago
Thanks for the video. May I ask what's your GPU and how's the performance? Cheers!
@FE-Engineer
@FE-Engineer 6 months ago
7900 XTX :)
ONNX/Olive: 22 it/s
ROCm: 18 it/s
DirectML (non-ONNX): 6 it/s
@user-db9pl9oh4b
@user-db9pl9oh4b 6 months ago
@@FE-Engineer I'm interested to know if you really need an Nvidia GPU or AMD. Perhaps a good video to make in the future where you compare the two GPU makers? Thanks!
@Guillermo-th4dh
@Guillermo-th4dh 7 months ago
Hello sensei, I'll tell you that the entire tutorial is 10 out of 10... I just wanted to ask: I have a problem when I want to optimize SDXL models, and even other custom ones from "civitai"; they give me an error. What could it be? Thanks
@FE-Engineer
@FE-Engineer 7 months ago
I have not been able to get SDXL working on this setup with automatic1111 directml. I tried a few months ago and could not get it working. I have not honestly tried with it recently. I also run ROCm on Linux and that setup just basically works for everything. So I did that to get SDXL and bypass all the complexity of what you can and can not do with directML and ONNX.
@user-mg7fv9cx8b
@user-mg7fv9cx8b 7 months ago
Thanks for your video. You are my hero :-) I thought I would never get SD running on my AMD, until I saw your video... I also tried to use another checkpoint, stable-diffusion-inpainting. I was able to download the model; the log says: Model saved: C:\..\sd-test\stable-diffusion-webui-directml\models\ONNX-Olive\stable-diffusion-inpainting. When I try to use that model I get RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Conv node. When I try to optimize the model I get "...\sd_olive_ui.py", line 358, in optimize assert conversion_footprint and optimizer_footprint AssertionError. Is it somehow possible to use an inpainting model on AMD? Or what am I doing wrong?
@FE-Engineer
@FE-Engineer 7 months ago
So you can definitely do it with ROCm on Linux. In windows I haven’t been able to get inpainting working properly.
3 months ago
None of the methods work for me; I have this error: AttributeError: module 'onnxruntime' has no attribute 'SessionOptions'
@lake3708
@lake3708 5 months ago
An excellent guide, but I have a question: there's a .safetensors checkpoint that has a config attached in the .yaml format. After optimization, the program stops seeing the config and generates noise. Do you have any idea how to fix this problem?
@FE-Engineer
@FE-Engineer 5 months ago
Ohh. Not sure on that one. But I have a new video coming out with a much better way of doing this!
@Hozokauh
@Hozokauh 6 months ago
At 7:00 you got it to skip the torch/CUDA test error finally. For me, however, that did not resolve the issue. I went back and followed the steps twice over, same result: still getting the torch CUDA test failure. Any ideas?
@FE-Engineer
@FE-Engineer 6 months ago
Add --use-directml to your startup script
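For anyone unsure where that goes: the flags live on the COMMANDLINE_ARGS line of webui-user.bat. A sketch of the edited file, based on the stock Automatic1111 template (drop --onnx on newer pulls where launch.py no longer recognizes it):

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
rem Flags used in this video's setup; remove --onnx if launch.py rejects it
set COMMANDLINE_ARGS=--use-directml --onnx

call webui.bat
```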
@FE-Engineer
@FE-Engineer 6 months ago
I did not skip the torch and CUDA test. From my experience, if you are having problems and skip it, it will never work, because that test is designed to simply check whether it thinks it can run on the GPU.
@Hozokauh
@Hozokauh 6 months ago
@@FE-Engineer thank you for the timely feedback! You are the best. Will try this out!
@Wujek_Foliarz
@Wujek_Foliarz 5 months ago
stderr: ERROR: Could not install packages due to an OSError: [WinError 5] Access is denied: 'C:\\Users\\igorp\\Desktop\\crap\\stable-diffusion-webui-directml\\venv\\Lib\\site-packages\\onnxruntime\\capi\\onnxruntime_providers_shared.dll' Check the permissions.
@ALKSYM
@ALKSYM 5 months ago
Add "--reinstall-torch" to the args and launch webui-user.bat; after the UI has launched, delete the "--reinstall-torch" arg. Hope it helps.
@carterstechnology8105
@carterstechnology8105 6 months ago
Also curious how to optimize my iterations per second. Currently running 3.08 it/s (AMD Ryzen 9 7940HS w/ Radeon 780M graphics, 4001 MHz, 8 cores, 16 logical processors, 64 GB RAM)
@FE-Engineer
@FE-Engineer 6 months ago
Using your GPU is the first step to optimizing. Arguments like --no-half are required for some people or some models, but will usually hurt performance. Remember that even the top-of-the-line AMD 7900 XTX gets about 20 it/s currently, so 3 is not necessarily bad, and depending on the resolution of the images it might be very good.
@kaireisser
@kaireisser 6 months ago
Does anyone else have the problem that inpaint does not insert its work into the image, and that no faceswap extension works (the input faces just get ignored)? Since it seems that I am the only one having that problem, could it be because of the AMD/DirectML setup?
@FE-Engineer
@FE-Engineer 6 months ago
Inpainting with ONNX does not work properly at all.
@gkoogz9877
@gkoogz9877 7 months ago
Great video. Any tips to use more than 77 tokens with this method? It's a critical limitation.
@FE-Engineer
@FE-Engineer 7 months ago
You can use it without ONNX but performance takes a big hit. Or run it with ROCm on Linux. Or wait for ROCm on windows whenever that will be.
@Maizito
@Maizito 5 months ago
I finally managed to run SD with your tutorial. I have an RX 7000-series card; it didn't let me run with --onnx, and I saw in the comments that that command is no longer necessary, so I removed it from webui-user.bat. SD opens, but it runs very slowly, between 1.5 and 2.5 it/s. Any solution to make it go faster?
@W00PIE
@W00PIE 5 months ago
That's exactly my problem at the moment with a 7900XTX. Really disappointing. Did you find a solution?
@Maizito
@Maizito 5 months ago
@@W00PIE No, I haven't found a solution yet :(
@livb4139
@livb4139 5 months ago
Can you make a vid on how to run Ollama on the RX 7900 XTX?
@matthieu3967
@matthieu3967 6 months ago
Thanks for the video, but do you know how to add sampling methods?
@FE-Engineer
@FE-Engineer 6 months ago
Don’t use ONNX or go ROCm in Linux. You can’t use the other samplers with ONNX.
@nikolass4925
@nikolass4925 7 months ago
I am getting an AssertionError when optimizing SD 1.5 with Olive, apparently because the safety checker file is missing. If I uncheck the safety checker box, I get an assertion error after 30 secs: "unable to load weights from checkpoint file". Anybody else run into this issue?
@FE-Engineer
@FE-Engineer 7 months ago
I have generally learned not to uncheck safety checker. Whenever I got problems unchecking it always resulted in an error. Either it would work with safety checker. Or if it did not work then unchecking safety checker never ended up working. If your safety checker is missing you might want to start fresh. Because as far as I know. That file should be there.
@TheBrainAir
@TheBrainAir 3 months ago
I did all the steps and get: AttributeError: module 'torch' has no attribute 'dml'
@gyrich
@gyrich 5 months ago
Thanks for this. I can actually run SD on my AMD PC, but it doesn't seem to be using the GPU (RX 6600 8 GB) at all. I can render individual images in ~30-60 secs. None of the solutions I've found online make it use the GPU. Do you know how I can get SD to use the GPU so I can generate more/faster?
@FE-Engineer
@FE-Engineer 5 months ago
Yes. Stay tuned. I have a new video coming out because the code has changed a decent amount and there is a better way now!
@catnapwat
@catnapwat 7 months ago
Thank you for this! Is there any way to get DPM++ 2M SDE Karras working? It doesn't seem to be available on the directml version of A1111
@FE-Engineer
@FE-Engineer 7 months ago
No. I literally forced them on. It breaks. So there are other software things that need to get fixed before those will work in onnx I believe.
@catnapwat
@catnapwat 7 months ago
@@FE-Engineer thanks, understood. Do you also know if there's a way to get ComfyUI to run at the same pace as A1111? 6700XT here and I'm seeing 4.5it/sec with A1111 but only 1.1s/it with Comfy
@Justin141-w3k
@Justin141-w3k 7 months ago
This is the only tutorial that has worked.
@Justin141-w3k
@Justin141-w3k 7 months ago
New issues. I managed to generate an image of a car though.
@FE-Engineer
@FE-Engineer 7 months ago
You are seeing new issues?
@Justin141-w3k
@Justin141-w3k 7 months ago
@@FE-Engineer Regarding the only valid links being Hugging Face.
@Justin141-w3k
@Justin141-w3k 7 months ago
@@FE-Engineer After optimizing I receive this error: InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from C:\AI\stable-diffusion-webui-directml\models\ONNX-Olive\stable-diffusion-v1-5\unet\model.onnx failed:Protobuf parsing failed.
@user-zt3vr3hs7u
@user-zt3vr3hs7u 5 months ago
Unfortunately it doesn't work for my RX 5700. I've reached the "Olive tab" step; it fails to optimize models. For future tutorials it would be better to pin specific git commits, because it seems the developers like to make breaking changes for AMD users.
@terraqueojj
@terraqueojj 6 months ago
Good evening, thanks for the video, but the problems with ControlNet and image dimensions continue. Do you know if there is any update for this in the pipeline?
@FE-Engineer
@FE-Engineer 6 months ago
I do not. Although I am somewhat unsure how much more support this fork of Automatic1111 will ultimately get. I think it's just a bit of a waiting game for ROCm on Windows.
@rivariola
@rivariola 1 month ago
Hello sir, I keep getting an error that is driving me nuts: DLL load failed while importing onnxruntime_pybind11. Do you know what it means?
@user-uz5cg9bu4r
@user-uz5cg9bu4r 5 months ago
Hey, I don't get the ONNX and Olive tabs shown in my Automatic1111. Do I need to install them manually? I see that ONNX is running but Olive isn't at all. I followed every step, but idk man
@FE-Engineer
@FE-Engineer 5 months ago
The ONNX argument is no longer necessary. They changed code. Things are kind of wacky. I need to go and figure out what all has changed.
@astarwolfe1411
@astarwolfe1411 7 months ago
I'm not sure if this is a batch issue or a computer issue, but after getting the error and the "press any key to continue..." prompt, when I press any key the window closes immediately and doesn't let me type anything in.
@FE-Engineer
@FE-Engineer 7 months ago
I saw someone else mention something similar. I’ll go in and take a look here when I get some time. I’m not sure if something maybe changed?
@semirvin
@semirvin 6 months ago
I found a solution for that. When you double-click and run webui-user.bat, the console will immediately close. Try running it from a cmd prompt, exactly like in this video; after that the console won't close.
@Slavius84
@Slavius84 1 month ago
I have an RX 570, but it's so slow; I don't know what to do about it. A 512x512 image takes almost 5-6 minutes to generate. Does that mean I need to buy a new video card?
@pyrageis9928
@pyrageis9928 5 months ago
I get an error stating "AttributeError: module diffusers.schedulers has no attribute scheduling_lcm. Did you mean: 'scheduling_ddim'?" Edit: I just had to delete the venv folder
@NewHaven321
@NewHaven321 6 months ago
I get the following error when running the webui-user.bat file: "launch.py: error: unrecognized arguments: --onnx". I can still run it if I remove the --onnx parameter, but then I have no Olive or ONNX tab in the interface. Appreciate any input here.
@MattStormage
@MattStormage 6 months ago
Same here
@IJN-Yamato
@IJN-Yamato 6 months ago
Hi! An update has been released and --onnx is now automatically installed and does not require an argument in webui-user
@Omen09
@Omen09 6 months ago
@@IJN-Yamato But it doesn't show ONNX in the GUI
@GenericYoutuber1234
@GenericYoutuber1234 6 months ago
@@Omen09 You can fix this now with git checkout d500e58a65d99bfaa9c7bb0da6c3eb5704fadf25. This will go back to the old version before the update. You may need to delete your requirements file (with the change that adds torch-directml) before doing it. Then run webui-user.bat after changing it to include the command line parameters --use-directml --onnx. This will give you the ONNX tab like before, where you can follow the video from around the 8 minute mark.
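The steps above, collected as a command sequence (the folder name assumes a default clone of the DirectML fork; the commit hash is the one given in this comment):

```shell
cd stable-diffusion-webui-directml
# Roll back to the pre-update revision that still accepts --onnx:
git checkout d500e58a65d99bfaa9c7bb0da6c3eb5704fadf25
# Then set the launch flags in webui-user.bat:
#   set COMMANDLINE_ARGS=--use-directml --onnx
```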
@FranciscoSalazar-qi4mw
@FranciscoSalazar-qi4mw 5 months ago
I have the same error; have you been able to fix it? As they say in the comments, it is supposed to be automatic, but the ONNX tab does not appear
@obiforcemaster
@obiforcemaster 3 months ago
This no longer works, unfortunately. The --onnx command line argument was removed.
@CESAR_CWB
@CESAR_CWB 7 months ago
thx bro
@FE-Engineer
@FE-Engineer 7 months ago
Glad this helped! I was getting a lot of comments from people about it being broken so figured I would find an interim step to get it working for people.
@Azure1Zero4
@Azure1Zero4 6 months ago
Does anyone know why, when using an Olive-optimized model, LoRAs do not seem to be influencing the images much?
@FE-Engineer
@FE-Engineer 6 months ago
My understanding is that LoRAs do not work with ONNX currently.
@Azure1Zero4
@Azure1Zero4 6 months ago
@@FE-Engineer That's what it seems like. Thanks for the confirmation.
@Verlaine_FGC
@Verlaine_FGC 5 months ago
I keep getting this error: "launch.py: error: unrecognized arguments: --onnx"
@FE-Engineer
@FE-Engineer 5 months ago
--onnx is no longer needed. They changed the code. Just omit that from the command arguments.
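For anyone unsure what to edit: the flag just comes out of the COMMANDLINE_ARGS line in webui-user.bat. A minimal sketch on a throwaway copy (the sample file name and flag set here are assumptions, not taken from a real install):

```shell
# Work on a throwaway copy so nothing real is touched:
printf 'set COMMANDLINE_ARGS=--use-directml --onnx\n' > webui-user-sample.txt

# Drop the no-longer-recognized --onnx flag but keep --use-directml:
sed -i 's/ --onnx//' webui-user-sample.txt

cat webui-user-sample.txt
# prints: set COMMANDLINE_ARGS=--use-directml
```

The same one-line edit, made by hand in Notepad, is all the real file needs.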