
How to run Stable Diffusion on AMD Graphics Cards | AI on AMD Graphics 

RisingPhoenix
537 subscribers
19K views
Published: 4 Oct 2024

Comments: 168
@stellarluna659 9 months ago
If you get the error "RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check", add "--use-directml --reinstall-torch" to the COMMANDLINE_ARGS in the webui-user.bat file (edit it with Notepad). That way SD will run on your GPU instead of your CPU.
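For anyone unsure where that goes: the flags are added to the set COMMANDLINE_ARGS= line of webui-user.bat. A minimal sketch of the whole file, assuming the lshqqytiger stable-diffusion-webui-directml fork (where the --use-directml flag exists); the other lines are the file's defaults:

```bat
@echo off
rem webui-user.bat -- sketch of the fix described above (DirectML fork assumed)
rem --use-directml    : run Torch through DirectML so the AMD GPU is used
rem --reinstall-torch : force a one-time reinstall of the matching torch build
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--use-directml --reinstall-torch

call webui.bat
```

After the first successful launch, --reinstall-torch can be removed so torch is not reinstalled on every start.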
@artymusoke1352 9 months ago
Took me a whole day of trying different tutorials to find this one. Thank you, it worked.
@visiblydisturbed1688 8 months ago
I think I'm doing something wrong, but I'm so incredibly incompetent that I can't figure out exactly what. I now have the error "launch.py: error: unrecognized arguments: --use-directml".
@xangre 8 months ago
@@weichaog5585 Be sure you paste this line: --use-directml --reinstall-torch ...without the quotation marks.
@KingOfGameworld 8 months ago
Thank you for explaining how to fix something that THREE DIFFERENT TUTORIALS I've watched failed to explain. I stayed up til 4am last night trying to figure out why SD was using my CPU instead of my RX 6700xt.
@Spudicus-jv3kz 8 months ago
You are a god, thank you.
@twostepghouls 10 months ago
My dude, I cannot thank you enough. Every other tutorial I followed resulted in frustration and errors. Yours is perfect.
@moncyn1 11 months ago
I like that it's an actual human voice and not a synthesizer, unlike other videos about DirectML SD.
@DaddyNameless 10 months ago
I have a 6600 XT and your video didn't work as expected for me; only my CPU was being utilized. I switched to Python 3.10.11 and used these launch options: "--medvram --backend directml --no-half --precision full --opt-sub-quad-attention --opt-split-attention-v1 --disable-nan-check --theme dark --autolaunch". That seemed to fix my issue. *Pin this as it may help others.*
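A sketch of where launch options like these live: they go on the set COMMANDLINE_ARGS= line of webui-user.bat, not into the console. Note that --backend directml is an SD.Next flag; on the lshqqytiger webui-directml fork the equivalent is --use-directml, so that flag is swapped in here (an assumption about which fork you are running):

```bat
@echo off
rem webui-user.bat with the launch options quoted in the comment above.
rem --backend directml is replaced by --use-directml for the webui-directml fork;
rem drop any flag your fork reports as "unrecognized arguments".
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--medvram --use-directml --no-half --precision full --opt-sub-quad-attention --opt-split-attention-v1 --disable-nan-check --theme dark --autolaunch

call webui.bat
```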
@Sum5thefirst 10 months ago
Yes, I've been having this problem too. Where do I edit those launch options?
@Sum5thefirst 10 months ago
If I edit them into the batch file, it says unknown arguments: --opt-split-attention-v1 --backend directml??
@kahl452 11 months ago
I can't believe this worked. I'm running a 6750 and holy crap, it worked! Thanks, man!
@thaido1750 11 months ago
Excuse me, but how many it/s does it get, if I may ask?
@kahl452 11 months ago
@@thaido1750 If you mean VRAM, it has 12 gigabytes.
@Mootai1 10 months ago
Hi! How and where can you see your it/s, please? I can't find this information. @@thaido1750
@mchawleyii 9 months ago
My 6750 XT is running out of VRAM with --medvram but not with --lowvram. Any similar issues there?
@rwarren58 6 months ago
How much RAM do you have?
@seymoria3972 10 months ago
This is the best tutorial so far. I didn't get even a single error, different models are working, and I can use LoRAs and embeddings. Thank you so much. And by the way, I have an RX 570, so if someone is wondering "is it gonna work?": yep, it is.
@Snlth48 10 months ago
I'm using an RX 580 and it says insufficient VRAM.
@NorHouda-h6x 7 months ago
I have an RX 580 8GB and it says "Failed to automatically patch torch with ZLUDA. Could not find ZLUDA from PATH."
@PanKrewetka 10 months ago
Hope to see more about AI on AMD GPUs.
@uki555 9 months ago
This is the best tutorial I've ever watched, thank you so much. Your video is so underrated, brother. I wish you all the best.
@delos2279 10 months ago
Great tutorial, both for setting it up and using it. Working great on 7800 XT, no issues so far.
@夜々宮 10 months ago
How? I'm using the same setup, but VRAM consistently reaches 16GB, the processing speed is barely 3 it/s for a 512x512 image, and it usually runs out of VRAM. I need some help.
@delos2279 10 months ago
@@夜々宮 I had VRAM issues with the Inpaint and Inpaint Sketch features, or if I tried a batch size over 1. So first make sure batch size is 1. Then try this:
1. Open webui-user (.bat file) in a text editor (make a backup of the original).
2. Find the "set COMMANDLINE_ARGS=" line and add: --lowvram --precision full --no-half --autolaunch
The full line should be: set COMMANDLINE_ARGS=--lowvram --precision full --no-half --autolaunch
Save the file and try running it. Or you can just try with "--lowvram" alone. If that doesn't help, I don't know, since it works for me.
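The reply above, written out as a complete webui-user.bat (a sketch; back up the original file first):

```bat
@echo off
rem Low-VRAM launch options from the reply above.
rem --lowvram    : offload most of the model to system RAM (slow, but fits 8GB cards)
rem --precision full and --no-half : avoid half-precision ops that fail on some AMD GPUs
rem --autolaunch : open the web UI in the default browser automatically
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--lowvram --precision full --no-half --autolaunch

call webui.bat
```

If generation gets too slow, try --medvram in place of --lowvram; it offloads less and is noticeably faster.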
@ToddGoo 8 months ago
I am using a 7800 XT too. I tried several and none of them worked for me. I will try this one later. Thanks for your info.
@greatturki6229 10 months ago
You are a fu**ing legend. The GOAT. I wish everybody made tutorials like you. Thank you so much.
@ShacoChampagne 11 months ago
Thanks, it worked fine on my RX 6700 XT.
@Silvane911 9 months ago
Confirmed working great on my laptop AMD 6850M
@jgtully 9 months ago
This repository now sets up for an NVIDIA card and gives errors about lack of CUDA compatibility during installation. Trying to skip past that and run it results in a "RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'" error when trying to generate anything.
@pigsymcpigface 9 months ago
Excellent tutorial! Thank you!
@nsf3smm833 9 months ago
This is the most underwatched video of all time. My AMD 6700 XT is now working hard at creating images. Thank you, sir!
@Colreza 9 months ago
Mine is using the CPU instead of the GPU; do you have any info on how to fix it? I have the same GPU.
@gerry._.y 9 months ago
This runs surprisingly well, but as you mentioned, when I checked my CPU/GPU usage it was actually using my CPU to generate images. How do I switch to the GPU?
@hitmanehsan 8 months ago
Hello, I got this error on my 6800 XT:
  File "I:\AI SD AMD\stable-diffusion-webui-directml\launch.py", line 48, in main()
  File "I:\AI SD AMD\stable-diffusion-webui-directml\launch.py", line 39, in main prepare_environment()
  File "I:\AI SD AMD\stable-diffusion-webui-directml\modules\launch_utils.py", line 560, in prepare_environment raise RuntimeError(
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
@sandmannneil1 8 months ago
I get this error; I followed your tutorial step by step, and other ones too, but keep getting it: "RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check. Press any key to continue . . ."
@sandmannneil1 8 months ago
Never mind, found it in the comments. The best tutorial by far; I tried a lot of them. Well done.
@unclekracker2684 9 months ago
How do I fix the "Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check" error? Even when I used a guide like ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-vq-QQGV-NOg.html to add "--precision full --no-half --skip-torch-cuda-test" to the command args, it generated images very slowly, as if it was using my CPU and not my GPU. I have a 7800 XT; can I have some help?
@cancel8559 9 months ago
I'm having the same issue with the same GPU 😭
@Colreza 9 months ago
Any updates, bro? Having the same problem xd
@jeremyvolland8508 9 months ago
I'm using a Radeon 6700 and keep getting "RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check". If I do that, though, it will use the CPU instead of the GPU. Any suggestions?
@Colreza 9 months ago
I have the same one; did you find something to fix it?
@gv-art15 2 months ago
I have the same problem. Were you able to fix it?
@DeepakSingh-vn6ls 9 months ago
RuntimeError: Couldn't install torch.
@Limmo1337 9 months ago
Nice vid, but how do I increase my VRAM usage? I have a 7900 XTX and only use about 2.4GB of my 24GB of memory when I render an image.
@WillPl4Y 8 months ago
I followed every step, but when I try to install it (double-clicking webui-user) there is an error that says "RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test". If I do that, I can install Stable Diffusion, but it will use my CPU to generate. Specs: R5 5500, RX 6650 XT, 16GB dual-channel RAM.
@ulterdamin5318 1 month ago
I guess it's the 6000 series and above, because I have a 5700 XT and it did not work either. Same message.
@HNS-007 11 months ago
you saved my life dude!
@RisingPhoenix96 11 months ago
Glad I could help.
@cadewhite8670 9 months ago
I have Git, TortoiseGit, and Python 3.10.6, and used the DirectML version of Stable Diffusion, but I still get this error on my 6700 XT:
Traceback (most recent call last):
  File "E:\ai AMD\stable-diffusion-webui-directml\launch.py", line 48, in main()
  File "E:\ai AMD\stable-diffusion-webui-directml\launch.py", line 39, in main prepare_environment()
  File "E:\ai AMD\stable-diffusion-webui-directml\modules\launch_utils.py", line 560, in prepare_environment raise RuntimeError(
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
@mikeraft3694 9 months ago
Looks like the latest Git update directs the install to use CUDA. Means no AMD.
@mikeraft3694 9 months ago
Finally, this helped me: set COMMANDLINE_ARGS=--use-directml --reinstall-torch
@luisveliz6411 9 months ago
@@mikeraft3694 You, my friend, are a hero. Thanks!
@cadewhite8670 9 months ago
@@mikeraft3694 I don't know where I would put this. Could you be more specific? Thank you for your help.
@NalomYT 8 months ago
@@mikeraft3694 Thanks, man!
@NorHouda-h6x 7 months ago
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
@IRG_Production 9 months ago
Hello, I did everything following you, but I got an error at the end: "stderr: the system cannot find the path specified."
@gaviera4657 10 months ago
I followed all the steps and, like with the other tutorials, it didn't work. The video shows how to install it, but at the end of the installation it always gives an error: "raise Exception(f"Invalid device_id argument supplied {device_id}. device_id must be in range [0, {num_devices}).") Exception: Invalid device_id argument supplied 0. device_id must be in range [0, 0)". I'm exhausted from trying so many times.
@XBIssues 7 months ago
Any ideas for this error during the install? "Failed to automatically patch torch with ZLUDA. Could not find ZLUDA from PATH."
@Eduard_Kolesnikov 10 months ago
Thank you for the help, bro.
@romina_minka 8 months ago
I have an RX 6950 XT 16GB and I still get this error: "RuntimeError: Could not allocate tensor with 12615680 bytes. There is not enough GPU video memory available!" :(
@RosaMunwalker 11 months ago
Thanks for the video! Is there a way to use only the CPU instead of the GPU for Stable Diffusion?
@RisingPhoenix96 11 months ago
Try adding this to the webui-user config file: --precision full --no-half
@ultralaggerREV1 10 months ago
I wouldn't recommend using the CPU; it's going to take way longer.
@benhough 11 months ago
Thanks for the tutorial. I tried to do this with a 7900xtx, but it doesn't seem to work on the GPU at all. I have tried with and without xformers. It only uses the CPU and takes forever. Any suggestions?
@thelaughingmanofficial 10 months ago
You messed up the installation somewhere, then, because it's running just fine on my 7900 XTX.
@benhough 10 months ago
@@thelaughingmanofficial I redid the installation many times; it did not want to install. I didn't feel like refunding the GPU, so I switched to Linux and now everything works perfectly. Thanks for your help though.
@Sum5thefirst 10 months ago
@@benhough Damn, I'm having the same problem you were, but I don't want to go to Linux 😭😭
@loganbearden 9 months ago
I've followed everything, but it says Python was not found. How do I fix that? And yes, I did check the box to add it to PATH.
@seraphin01 11 months ago
The part I still don't understand is how come my 16GB 7800 XT with 16GB of RAM gets the "out of memory" error if I try to use hires fix, while my 2080 8GB does it just fine and renders images faster as well. The speed I can understand, due to CUDA optimization, but the VRAM I just don't get.
@fly1063 11 months ago
Hi, how did you not get this error with 8GB VRAM? On my 3070 it was impossible to do hires straight after the image generation. I am now on a 7800 XT and still have the same problem.
@sungsukim692 8 months ago
Error after running webui-user.bat:
Traceback (most recent call last):
  File "D:\stable-diffusion-webui-directml\launch.py", line 48, in main()
  File "D:\stable-diffusion-webui-directml\launch.py", line 39, in main prepare_environment()
  File "D:\stable-diffusion-webui-directml\modules\launch_utils.py", line 560, in prepare_environment raise RuntimeError(
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
@Daniel-jh7pl 9 months ago
I did it like in your guide on AMD, but Stable Diffusion uses my CPU, not the GPU. What could be wrong?
@xenormxdraws 10 months ago
I'm getting the error "git did not exit cleanly (exit code 128)" when I attempt to clone. It always stops at 37%. Any help?
@OtakuDYT 11 months ago
I would say just install Stability Matrix and install the Stable Diffusion package with DirectML enabled, and done.
@RisingPhoenix96 11 months ago
Interesting idea.
@andytangaming2705 11 months ago
Hmm, can you elaborate a bit more? Noob here, haha 😅😅
@shadyb834 10 months ago
Should I use medvram or lowvram for an RX 6600 XT (8GB)?
@Guiff 10 months ago
If someone has problems because they bought a new AMD GPU after previously having an NVIDIA one: uninstall Python and reinstall it.
@DJESHGAMING 11 months ago
Hey, it stopped working for me as of today. It seems to not use my GPU anymore, which has 24GB. It gives an error when trying to generate: AttributeError: 'DiffusionEngine' object has no attribute 'is_onnx'. It also tries to use something with 14GB of memory, maybe my system RAM. Any idea how to solve this?
@RisingPhoenix96 11 months ago
I've never encountered this error, but it seems like a few people have. I've looked around and found a temporary fix: github.com/lshqqytiger/stable-diffusion-webui-directml/issues/296#issuecomment-1751820370
Go to the Stable Diffusion folder and right-click in there to open the context menu. Select "Open Git Bash here" and a command window will appear. In the window, type "git checkout f935688" (without quotes) and then press Enter. This will change the active branch of the Stable Diffusion repository; the contents of the folder will, apparently, contain code that fixes the error you've mentioned. After that, edit the webui-user batch file and remove the "git pull" text to prevent Git from trying to pull the latest changes each time you run the file. Save the file after removing the text. Hopefully, you should be good to go after this.
This is only a temporary fix. After a while, you should switch back to the "master" branch to receive the latest updates. You basically have to undo all of the above, as follows:
1. Go to the Stable Diffusion folder and right-click.
2. Select "Open Git Bash here".
3. Type "git checkout master" without quotes and press Enter to switch to the master branch.
4. Edit the webui-user batch file and add "git pull" at the top so Git pulls the latest changes from the master branch.
5. Run the webui-user batch file.
I hope that makes sense.
@humansvd3269 11 months ago
When I run the BAT file for the web UI, I keep getting a socket error.
@ThomasMeier-c9v 6 months ago
"Fatal: No names found, cannot describe anything." Did I forget something?
@SpaceandUniverse72 8 months ago
It doesn't work for me; it stops here and gives me this error. Does anyone have a solution?
E:\stable-diffusion-webui-directml>git pull
Already up to date.
venv "E:\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.7.0
Commit hash: d500e58a65d99bfaa9c7bb0da6c3eb5704fadf25
Traceback (most recent call last):
  File "E:\stable-diffusion-webui-directml\launch.py", line 48, in main()
  File "E:\stable-diffusion-webui-directml\launch.py", line 39, in main prepare_environment()
  File "E:\stable-diffusion-webui-directml\modules\launch_utils.py", line 560, in prepare_environment raise RuntimeError(
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
Press any key to continue . . .
My configuration: i7-10700K, AMD 6800 16GB, 32GB DDR4-3600 RAM, AMD driver 23.12.1, Windows 10 Pro 64-bit
@Not_Hans 9 months ago
Doesn't work. It gives a CUDA error, tells you to bypass the CUDA check, and will only use the CPU instead of the GPU.
@ThomasMeier-c9v 6 months ago
Doesn't work: RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
@user-mfsc-2024 9 months ago
All AMD cards, or just 7000 series cards?
@Blue_Razor_ 10 months ago
Do you have the issue where, after it applies the Doggettx optimization, it seems to use roughly 2GB more VRAM for seemingly no reason?
@culledmusic6395 7 months ago
Can it run on an AMD Z1 Extreme?
@glassmarble996 10 months ago
Hey, do Dreambooth and Kohya LoRA training work with AMD?
@_mult 10 months ago
Does this work using a Ryzen 5600G APU?
@computerjoy 9 months ago
What is the minimum spec to run this on a PC?
@Snlth48 10 months ago
Is there any way I can use this with Fooocus?
@weichaog5585 8 months ago
I have a 7800 XT. First it asked me to add --skip-torch-cuda-test to the arguments to skip a test, and then it said RuntimeError and could not clone Stable Diffusion, with error code 128.
@lucarollin6033 8 months ago
Same problem and same GPU. I solved it, but I'm not at home now. I'll write to you as soon as possible to tell you which commands to add to the batch file.
@weichaog5585 8 months ago
@@lucarollin6033 Thank you very much, appreciate it.
@lespretend 9 months ago
I keep getting "Could not allocate tensor with 134217728 bytes. There is not enough GPU video memory available!" on a 6600, even with lowvram or medvram at 512x512 with no upscaling. What is going on?
@visiblydisturbed1688 8 months ago
I can't even get it to recognize my GPU
@mysamirapenta5659 2 months ago
Same here, any ideas?
@nextgodlevel 10 months ago
I am not getting all the styles that you have.
@kopidoo 10 months ago
It is working, but damn, so slooooooow :( Just moved to a 7800 XT from a 3060 Ti and the speed went from 2-3 it/s to 6-7 sec/it... UPDATE: with these command line arguments it gets a boost: --upcast-sampling --medvram --no-half --precision=full --opt-sub-quad-attention --opt-split-attention-v1 --disable-nan-check
@xenormxdraws 9 months ago
Hey man, I failed to clone the repository because it keeps throwing an error at me when it reaches 37%. I'd really appreciate it if you could share the one you cloned with me via Google Drive or something. Thanks.
@onigirimen 11 months ago
Your GPU is capable of using ROCm; don't bother with DirectML, it'll eat up your VRAM+RAM since DirectML's memory optimization is so bad.
@kedicchi 10 months ago
How do we use ROCm on Windows?
@onigirimen 10 months ago
@@kedicchi At this time, using Windows directly is a no, but you can use WSL2 on Windows.
@kedicchi 10 months ago
@@onigirimen I tried it but couldn't find a video for that.
@TheLoneQuester 11 months ago
After I generate an image, GPU memory remains at 100% (8GB) until I restart my PC. Any further attempts to generate images result in a memory error. Do you have any tips?
@RisingPhoenix96 11 months ago
Try setting the "--lowvram" flag in the config file.
@mathwiz1260 10 months ago
I have an AMD card with 8GB VRAM. I installed DirectML and I can see the UI; it loads models with no issues, but when I click generate image it goes into "Unspecified error"! Absolutely no idea why it doesn't work; any idea why is much appreciated. AMD is really lagging behind NVIDIA on this; it runs smoothly on the other side. The AUTOMATIC1111 version works fine but only uses the CPU, at a rate of 20 s/it... too slow.
@parryhotter3138 9 months ago
I have an 8GB Vega 64 card and I get 2-7 it/s in AUTOMATIC1111... so I really don't know what you're doing there. You have to convert the models to ONNX; if you don't do that, you'll get an error, very low speed, or weird results. But yes, it's a pain in the ass with AMD cards older than the 6000 series, because you're not able to run ROCm, so there is no inpainting available, and you're not able to run many extensions like ControlNet or Dreambooth... but just creating images works really fast.
@mathwiz1260 9 months ago
@@parryhotter3138 Thanks, I found a way... actually DirectML works fine, but only for samplers that are not Karras... no idea why, lol. Euler and Euler a work fine and generate at a good speed of 3-4 s/it... I get the unspecified error on DPM++ 2M Karras and the other Karras samplers, so I don't use them. ControlNet is working fine with Euler.
@parryhotter3138 9 months ago
@@mathwiz1260 That's true, you can't use the "newer" samplers. I'm going with DDPM, DPM, or, like you, Euler. Under Linux with ROCm and AUTOMATIC1111 or ComfyUI you're able to use all samplers, but you'll need at least a 6xxx card for that. I'm going for the 7800 XT soon... NVIDIA is just way too expensive for me; I don't want to go for a 4070 with 12GB, which is way more expensive with less VRAM... so I really hope developers will support AMD better in the future. AMD has made its turn with ROCm.
@xenotron1138 10 months ago
I'm running SD on an RX 6600 with no issues other than the amount of time it takes to render. If I add another 6600, would it speed up my generations? Considering the price point and the fact that I'm already invested in one 8GB card, it seems like a good option. I'm not really looking to play games on this machine, just SD and some video editing.
@jeremyvolland8508 9 months ago
Are you sure you are actually using the GPU and not the CPU for Stable Diffusion?
@xenotron1138 9 months ago
@@jeremyvolland8508 100% sure.
@aaaadorime2017 11 months ago
I can't run it on a 6700 XT.
@RaiserFPS 11 months ago
Hello sir, why is it saying "RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'"?
@RisingPhoenix96 11 months ago
Hi. Try adding the following to the "set COMMANDLINE_ARGS=" section of the config file seen here (38:28): --precision full --no-half
@RaiserFPS 11 months ago
@@RisingPhoenix96 By adding that command the error is fixed, but now it is using 100% of my CPU and not the GPU; GPU usage is 2%. My GPU is an RX 6700 XT.
@RisingPhoenix96 11 months ago
@@RaiserFPS OK, I'll see if I can reproduce this error on my end and I'll get back to you as soon as I can.
@RaiserFPS 11 months ago
@@RisingPhoenix96 Thanks for helping, sir.
@RaiserFPS 11 months ago
@@RisingPhoenix96 Have you found any solution?
@jacobyrassilon 9 months ago
I somehow got a newer version of python installed and cannot get it to uninstall no matter what. Anyone have any clues on how to install 3.10.6?
@lespretend 9 months ago
You ran the Python installer, then clicked "Uninstall Python" when it launched? I had to delete my old Python installation this way to install this one.
@jacobyrassilon 9 months ago
@@lespretend Yes, I tried that several times: installed the newer version again, ran the uninstaller in Python, then tried the older version, and I still get the message that I am running a newer version. Damned frustrating.
@KPopTato 8 months ago
I had the same issue; run the Python installer, choose "Modify" or "Repair" first, and then choose "Uninstall".
@ryanlevin-pj6ux 8 months ago
Doesn't fucking work; same error as everything else, nothing works. My next GPU is gonna be NVIDIA so I don't have to deal with this bullshit. If someone has any ideas, mention them please.
@timgeurts 8 months ago
moooooood
@general123ist 11 months ago
Will older laptop 4GB GPUs support this?
@RisingPhoenix96 11 months ago
If the GPU in question supports DirectX 12, specifically the DirectML library, then yes, it should work. If you manage to get it working, make sure you set the "--lowvram" argument in the config so you don't get (or at least reduce) VRAM errors.
@andre_tech 11 months ago
Hi. I have an RX 7600 with 8GB VRAM. Stable Diffusion keeps saying I have insufficient VRAM (RuntimeError: Could not allocate tensor with 4915840 bytes. There is not enough GPU video memory available! Time taken: 11.1 sec.), even when it is the first image I'm making in the session. I tried the "--medvram-sdxl" argument.
@andre_tech 11 months ago
Ah, I saw the "--lowvram" argument and now it's working =)
@TheLoneQuester 11 months ago
@@andre_tech For me, I can generate an image, but then all the VRAM is used and it doesn't free up unless I restart. Do you know how to fix that?
@andre_tech 11 months ago
@@TheLoneQuester I don't know how to fix this, but I close the cmd window and reopen it right away without closing the tab in the browser, so I don't lose my prompts there.
@andre_tech 11 months ago
@@TheLoneQuester Then close the new Stable Diffusion tab every time you restart the CMD; the old one with the prompt continues to work. That's the only way I know to make faster use of the tool.
@Eleganttf2 11 months ago
No, no, lowvram is literally only for 2GB VRAM users and it will really SLOW your generation time by a lot. Are you trying to generate SDXL?
@UngaBunga-zo7gu 11 months ago
I have a RX 5700, will it work?
@kappa173 11 months ago
Works for me.
@ersin6761 10 months ago
@@kappa173 Can you share your settings?
@ukrainian333 9 months ago
This video: "Ultrasonic Sega Google Mega Drive 3D Ultimate HDR Fullscreen 14-bit 8K Exclusive Technology 32:9 format IPS AMOLED 60FPS". Me: a Chinese guy squinting at the fucking smallest piece of paper, LOL. Thanks for this video, btw.
@awttygaming2510 6 months ago
And you don't even send links, like tf.
@ghosthunters.network 5 months ago
Getting errors, can't get it to run. Command-line errors: "model failed to load" and "AttributeError: object has no attribute 'lowvram'". I had to put this into my webui-user.bat command file: set COMMANDLINE_ARGS=--skip-torch-cuda-test --lowvram. The CUDA test was the original command-line error; then came the lowvram error and the model failing to load. Sometimes I can't even connect. WTF. Anyone know what the hell is going on? I've been at this for too long! HELP! Thanks.