Update: Check this video, How to Install Forge UI & FLUX Models: The Ultimate Guide: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-BFSDsMz_uE0.html
Here are some useful resources for Stable Diffusion:
Download Stable Diffusion WebUI Forge: github.com/lllyasviel/stable-diffusion-webui-forge
Download Juggernaut XL version 9: civitai.com/models/133005/juggernaut-xl?modelVersionId=348913
More info on FreeU: github.com/ChenyangSi/FreeU
Download more ControlNet SDXL models: huggingface.co/lllyasviel/sd_control_collection/tree/main
Extensions used: github.com/ahgsql/StyleSelectorXL and github.com/thomasasfk/sd-webui-aspect-ratio-helper
If you have any questions you can post them in the Pixaroma Community Group facebook.com/groups/pixaromacrafts/ or the Pixaroma Discord Server discord.gg/a8ZM7Qtsqq
I have been looking for a tutorial like this for months. You have a real talent for this tutorial style and I HIGHLY encourage you to keep making these videos. Information is packed and logically flowing from one point to the next. Subscribed!
Does it crash more or less often? I've been using A1111 for a while now, but it feels like it's been crashing more and more. Especially with SDXL models.
Since I switched to Forge it hasn't crashed at all; it only crashed when I used ControlNet with an image size not divisible by 64 @@edouarddubois9402
The width and height of the image. Sometimes I got that error when the size was not divisible by 64, but mostly when I used some extensions @@edouarddubois9402
It should be noted, for those who stumbled upon this like I did without knowing any better, that this method only works for NVIDIA graphics cards. WebUI uses CUDA, which is a proprietary API specific to NVIDIA, meaning if you don't have their drivers, you can't natively run WebUI. Luckily there are forks that do work for AMD Radeon cards, but you'll have to jump through a few more hoops than what is shown here to install them, and it probably won't run quite as fast as it does on NVIDIA cards.
6:21 Wow, I didn't know about that; I thought the only way to change it was to edit it manually in some file I don't remember now. Still, I would like it to have different defaults for each checkpoint; is that possible?
Try this to see if it still works; they keep updating Forge so it still has bugs ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-89YRfqArm-Y.htmlsi=kGI45gnzc7iYeFHX
HOLY COW! I've been using A1111 (and now Forge) for a year, so by now I know most of these "hacks", but I wish a video so clear and so thorough existed when I was starting my journey. I even picked up a new nugget here and there. Bravo! Subscribed. And Saved.
@@pixaroma It has it now, but there are bugs that cause LoRAs not to generate; it just displays "Error". It's amazing when it works, but annoying how often Forge UI just has problems. Eventually I just learned how to use ComfyUI and enjoyed having something that actually just works.
@@sociallyresponsiblexenomor7608 yeah, that's why I switched to Comfy too and created that ComfyUI series; I learn new things each day and got used to the nodes
I have a pretty crappy GPU and Automatic1111 didn't work very fast for me, about 3 minutes for one image. I sometimes experiment with Stable Diffusion for fun, and this tutorial was genuinely so helpful!! Forge UI works so much faster, only about 20-30 seconds per image
I have an NVIDIA graphics card but I get this error when I run Forge: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check. How can I solve this?
In the last month there have been a lot of updates to Forge UI; today I am working on an update video covering what is new. Go to this discussion page to see what they changed; in the comments some people had the same problem as yours, and it seems to have something to do with Forge. At the bottom of the page you can see comments and click to load more, and with Ctrl+F you can search for "torch is" to find comments containing those words: github.com/lllyasviel/stable-diffusion-webui-forge/discussions/981 Also check the comments on this page for how some people used different settings in the arguments: github.com/AUTOMATIC1111/stable-diffusion-webui/issues/1742
Please help, I got this error when I started run.bat:
C:\Users\moor\stable-diffusion-webui-forge>python launch.py
Traceback (most recent call last):
  File "C:\Users\moor\stable-diffusion-webui-forge\launch.py", line 1, in
    from modules import launch_utils
ImportError: cannot import name 'launch_utils' from 'modules' (C:\Users\moor\AppData\Roaming\Python\Python310\site-packages\modules.py)
Not sure what is causing the error, but what you can try is to create a folder on another drive and start fresh; maybe that can make it run. I don't know coding, but it looks like it cannot import a file; maybe something didn't download, or it is a bug. That is why I say to try a fresh install in another folder.
Not sure if all of those work, but did you install them from the Extensions tab? Go to the Extensions tab, click Available, then click the Load From button to load them all, and search for an extension. For example, I searched for "ratio helper" and it installed just fine when I clicked Install and restarted Forge.
When I put my model "control_v11p_sd15_openpose.pth" in ControlNet and try to generate the image, I get the error message "TypeError: 'NoneType' object is not iterable". My setup is OpenPose with the Openpose_full preprocessor; can you help me please?
I see you are using v1.5; do you get the same on SDXL? I got that error recently, but not with ControlNet; it was when I used an extension at a different image size. Does it work at 1024x1024px? I got that error at different sizes, but it worked at 1024x1024px. I haven't used v1.5 since SDXL appeared.
So I tried with an SDXL ControlNet and I get the same error with certain sizes. For example, it works at 1024x1024px or 1024x576px, but I get that error if I use 1200x672, 912x512, or 1024x816.
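If the pattern reported here holds (sizes divisible by 64 work, others error out), a tiny shell sketch can pre-round your target size before typing it into Forge. The 64 constraint is an observation from these comments, not documented behavior:

```shell
#!/bin/sh
# Round an image dimension DOWN to the nearest multiple of 64,
# since ControlNet here seems to error out on other sizes.
round64() {
  echo $(( $1 / 64 * 64 ))
}

round64 1200   # 1152
round64 672    # 640
round64 912    # 896
```

So a failing 1200x672 request becomes 1152x640, which at least satisfies the divisible-by-64 rule.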
@@pixaroma thank you for your answer. I tried all the sizes, 1024, 512, etc., and it does not work. My problem is that generating an image without ControlNet is fine, but when I use ControlNet it is impossible to generate a picture.
You can read more about it here; I didn't play with them much in Forge, mostly only the Canny ControlNet. Also keep in mind the version you are using; there are different forks of Forge now, the main one is used for beta testing and many things might not work! github.com/lllyasviel/stable-diffusion-webui-forge/discussions/178
I'm new to this Stable Diffusion GUI. Experienced people, can you please answer: is this Forge WebUI better than Fooocus MRE? If yes, in what ways is it better? Thanks!
Maybe look here: gist.github.com/ShMcK/d14d90abea1437fdc9cfe8ecda864b06 and aws.amazon.com/blogs/machine-learning/use-stable-diffusion-xl-with-amazon-sagemaker-jumpstart-in-amazon-sagemaker-studio/ As I don't use AWS, I can't help
Next to run.bat and environment.bat there should be a file called update.bat; I have had it there since installation, and you should have it too. Just be careful with updates so you keep a good stable version; check this video ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-RZJJ_ZrHOc0.htmlsi=rF-9wCmzResJiW3L
Yeah, I am using a voice from voiceair.ai. My voice is OK, but my spoken English is not so good and the accent is too strong; I am better at writing. That is why I use that voice: it produces clear English anyone can understand, and it sounds good for an AI voice.
Either your video card is not good enough or Forge doesn't recognize it. I am a designer, not a coder, but you can try adding the following arguments in webui-user.bat to see if it works. It needs at least 6GB of VRAM and prefers NVIDIA cards, but try it anyway: set COMMANDLINE_ARGS=--skip-torch-cuda-test --precision full --no-half
I see someone already posted that in the bugs area; you can watch it to see if it gets any response if nothing else works: github.com/lllyasviel/stable-diffusion-webui-forge/issues
This is a great tutorial, but using it on Ubuntu makes me feel a bit sad, because most extensions simply don't work or won't install. Maybe it's because of the GPU (RTX 2060, 6GB), but when I had Windows on the same machine, it had more extensions preinstalled and in use. For example, I don't have FreeU or ControlNet SDXL. As far as I remember, it worked much better than it does now. Did they make some new updates that made it work worse? (Last used 3 months ago)
They stopped updating the official version; there are some forks still around, but I'm not sure how many updates they get. You can try the last stable version or change to dev2: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-RZJJ_ZrHOc0.html I have the same GPU on an older PC, but I wasn't able to run it with ControlNet; it crashed. ComfyUI works OK, but I didn't try complex stuff yet.
I just updated to the latest Forge version, the one that can work with Flux, but I'm using only SDXL on my 8GB card. Every time I do inpainting or img2img, the result has lower saturation than the original. Is it just me? Assigning a VAE does not solve it 😢
There are a lot of bugs in the new version, so it will take a while for them all to get fixed. This one looks like a similar problem: github.com/lllyasviel/stable-diffusion-webui-forge/issues/1189 And if you look at the list of open issues, there are around 600: github.com/lllyasviel/stable-diffusion-webui-forge/issues
Forge has a basic prompt-from-image feature, but it is not so accurate. In the img2img tab, under Generate, there is a paperclip icon; the first time it will download a model, but after that it should work faster, and it gives a basic description of the image you uploaded in img2img.
You can do that with img2img, not txt2img, because if I have the same seed and a different prompt, it will look different; the style just adds more words to the prompt
I explain that in this video starting at minute 3:43 ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-q5MgWzZdq9s.htmlsi=WOTQNXWSwiwWBkU6&t=223 In your Forge folder you have a webui folder, and there is a webui-user.bat file where you can put the A1111 paths; you just have to switch the backslashes to forward slashes.
Mine is off too. It can be activated with some command in the bat file; I tried it, but it made my generation slower, not faster, so I left it deactivated. It appears as a suggestion in the cmd window when you start, and there is also a command I think; I don't remember it now, I just know that activating it was slower for me.
You solved the problem. I was actually worried about the output speed, so I don't have to worry about CUDA, just the internet connection; it can be sometimes fast and sometimes slow, which affects the output speed. Thanks for the tutorial above.
It's free. You need a good NVIDIA card, preferably RTX (the more VRAM, the faster it is), and Windows. For other operating systems and cards you can also try Automatic1111; it is just a little slower.
If I start Stable Diffusion it says: OSError: [WinError 193] %1 is not a valid Win32 application. Error loading "D:\ai\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\lib\nvfuser_codegen.dll" or one of its dependencies. HELP! xD
Maybe you can report the issue to the people who created Forge UI; I don't have much coding experience, so if anyone knows the answer, it must be them: github.com/lllyasviel/stable-diffusion-webui-forge/issues Look at the existing issues; there is also a button for opening a new one.
First of all, I must say thank you. I started with your latest video, about Flux, and I am stuck here. Forge UI is fantastic! My only question is whether I can find log files of the prompts; it would be great to keep them.
Well, each prompt and its settings are saved in the PNG you generated, so if you drag a PNG you like into the PNG Info tab you can see the prompt and settings. For something more complex you probably need a script or an extension; on a quick search, an extension like this might do something similar. I didn't test it, but maybe it gives you some ideas: github.com/ThereforeGames/unprompted
@@CsokaErno I use XnView MP as my default image viewer. It has a meta-info tab on the right; no need to import images anywhere, you can just copy, alt-tab, and paste the prompt and properties into your browser. Besides that, it's a really handy piece of software compared to the vanilla Windows image viewers.
I wonder if any of the Stable Diffusion UI makers (Forge, Automatic, ComfyUI, etc.) have considered a method for capturing recommended model settings, like you point out at 3:29, since hunting down a model's recommended settings is a work slow-down. Perhaps you could configure a model or KSampler template that acts as a quick preset for a given model. It would be kind of cool to have the option of triggering the preset on checkpoint load (though it should be optional; not everyone would want that in all cases). If this already exists, someone let us know.
There is a preset-saving extension, so you can just save settings and give them a name similar to the model you use so you know what they're for, but many extensions have bugs now because of the updates
I just installed a fresh copy today and didn't run into any problems, so maybe try again. I'm not sure what the problem is; if you have NVIDIA it usually works OK
@@pixaroma I solved it. Actually, I was trying to install some extension, and maybe some error happened. Later I panicked, then I thought it through, deleted some extensions, and that resolved it.
You should have a folder for LoRA; look at this video for how I download them and where I put them ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-q5MgWzZdq9s.htmlsi=nKX2enJ7KPEAoGIF
Not sure if it works on Linux. I saw it searches for the NVIDIA driver, so it needs that card, but I'm not sure about Linux; I tested only on Windows. Maybe someone else can help.
@@pixaroma I am starting to think the vast majority of users of all these SD webui platforms are on Windows. But really my issues are with python versioning and venvs. I'm fully confident Forge will work as soon as I can sort out the python version stuff.
How do I get the same image when changing something small, like the shirt color? The whole image changes because the prompt changed. img2img doesn't do it either; my human could end up looking like a cute cat LOL
Little question: did anyone have a problem with LyCORIS models on Forge? I'm using Forge through Stability Matrix, and no matter how I load them, from my computer or Matrix's model loader, they just don't show up in the Lora tab. And when I load one into the Lora folder, it doesn't work correctly.
How do I add the "ip-adapter_face_id_plus" preprocessor for IP-Adapter? It's not in Forge. "ip-adapter_face_id_plus" works better than "InsightFace+CLIP-H (IPAdapter)"
Check this video ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-q5MgWzZdq9s.htmlsi=f2mLLk3-B3bUh0Ri You can edit the bat file, but be careful with the path; there is a difference between slash and backslash that you also need to change
@@ZeroCool22 I think there were some problems with ADetailer and some extensions. For ControlNet, for example, it only works for me if the image width and height are divisible by 64. But just use it for the things that work, and work faster, and use A1111 or another UI for the things that don't :)
I never did it myself, but someone on Reddit commented with this: use the command prompt in the SD directory and type git revert or git reset --hard. You can find the previous version hashes using git log, or there is a list somewhere on GitHub. So for Forge you probably have to go to the forge folder and then to the webui folder, type cmd in the address bar and press Enter, and then you can see all those commits with their hash strings. I am not sure about the next part; you either use git revert with that commit hash, or something like that.
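As a sketch of what that Reddit advice amounts to, demonstrated in a throwaway repo rather than a real Forge folder (back up before running git reset --hard for real; it discards local changes):

```shell
#!/bin/sh
set -e
# Demo of rolling back to a previous commit with git reset --hard,
# done in a temporary repo so nothing real is touched.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "good version"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "broken update"
# In a real repo you would pick the hash from `git log --oneline`;
# here we simply take the previous commit.
good=$(git rev-parse HEAD~1)
git reset --hard -q "$good"
git log --oneline -1
```

git revert is the safer alternative: it creates a new commit that undoes the bad one instead of discarding history.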
Very helpful, I will be watching all the videos in this playlist, thanks! BTW, what do you use for your voice? It's great. (If it's not a trade secret, that is)
Mostly optimization in how it handles memory, so it generates images faster than A1111, and it has some extra things. But it stopped being updated officially, so I have now switched to ComfyUI
Thank you for this tutorial! ❤ Do I need Automatic1111 Stable Diffusion installed to be able to install Forge? I have the oldest version of Automatic1111 installed, and I haven't used or upgraded it, as I couldn't keep up with every new update and the other troubleshooting issues, since I have zero knowledge of programming languages 😢
My built-in controlnet's IP-Adapter is missing its models, and thus, doesn't work. Any ideas? I wanted to install them manually, but the library is different, and so are the files.
Forge still has some problems with control net, check this discussion maybe it helps github.com/lllyasviel/stable-diffusion-webui-forge/discussions/178#discussioncomment-8572388
Hi, my installed SD Forge doesn't have the update.bat file. Is there any way to update SD Forge without the file? Maybe by adding arguments to look for an update?
I explain it in this video; it's a file I created that you can download and put in the right folder ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-UyBnkojQdtU.html
the real tutorial we need is how to run ForgeUI using colab, there are currently no colab notebooks provided for it.... some of us have colab pro subscriptions and want to run this on the big boy GPUs
I haven't used Colab for months because I upgraded my PC; I just share tutorials on how I use it and the knowledge I have so far. Sorry I can't help more
Hello, really great tutorial. I have a question: I want to use the 4xVALAR upscaler but have no idea where to put it. Could you please tell me, if you have an idea, exactly which folder it should go in?
Go to your webui\models folder and create a folder there named ESRGAN, so you have the path webui\models\ESRGAN, and put the upscaler model in that ESRGAN folder. That worked for me; hope it works for you.
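The folder setup itself is just this, shown as a POSIX sketch in a temporary directory with a placeholder file name (on Windows you would do the same in Explorer, and your install root may differ):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)           # stand-in for your Forge install root
cd "$tmp"
# Forge picks up extra upscalers from webui/models/ESRGAN.
mkdir -p webui/models/ESRGAN
touch 4xVALAR.pth          # placeholder for the downloaded upscaler file
mv 4xVALAR.pth webui/models/ESRGAN/
ls webui/models/ESRGAN
```

After that, refresh or restart Forge so the upscaler shows up in the upscaler dropdown.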
Thanks for the ControlNet section; I got stumped on where to put the models. Edit: can anybody help me with LoRA models? I paste them in the Lora folder but Forge doesn't seem to detect them
I just tested it now with a file and it seems to be recognized. The folder is webui\models\Lora, and after you paste the file there, go to the Lora tab in the interface and refresh the page, or just restart Stable Diffusion so it can see it
@pixaroma Owww, I understand. I haven't seen the other videos yet, I didn't have time; I came here to find the correct folder to paste the file into. Do you have any tips for me? I have a Ryzen 5600G with 32GB RAM and a 3060 with 12GB. Which SD is best for me to install?
I haven't played with that function yet. Training always seems complicated; I tried it in A1111 too, but I don't always get good results. It needs good settings, good images, captions; too many things involved, it seems. And now I saw an announcement that Forge is not going to be updated anymore; it is used more for tests or something.
1. Is there any performance drop if I don't install it on C drive? 2. My C drive is SSD and D drive is HDD. Can I still install on D drive? Will I face any performance difference or issues?
I use forges deforum tab to create animations. I would like to know how to create the animations within a boundary. I projection map so I would like to know how to keep the animations within the map of my house. Would you know how to accomplish this? I have a png map file that I created but unsure what to use it with. TIA
Sorry, I haven't played with Deforum yet, so I can't help there. I like to create HQ images, and video and animation aren't quite there yet; I am waiting for an improvement before I jump into it
Usually the ones from Automatic are also on Forge, but I'm not sure if they all work; you can try and test them. I don't usually use outpaint because it doesn't always do a good job; for that I prefer Photoshop Generative Fill
From your video with the purse, and the drinks can in the desert, I understood that Inpaint Background took account of, say the lighting, in the masked-out subject when creating a completely different background, as compared with a simple remove/replace background ignoring the masked area. Have I misunderstood? Does Photoshop Generative Fill allow a completely different background prompt, or only an extension of the existing image within a larger canvas?
@@johnclapperton8211 When you do it with inpaint, it looks at the surrounding area to paint better, but it is not always perfect. In Photoshop, when you expand with the crop tool it fills automatically, but afterwards you can select the generated part and tell it with a prompt what you want in there
@@pixaroma thank you; I hope someone can answer. I don't have a machine with the required performance for a local installation, so that would be a great help. Why am I asking? It's just for the seamless pattern setting that exists in the models presented; this capability isn't offered right now in Fooocus, which is easily accessible with Colab.
Yes, only a few things are different; it is based on A1111, just works a little faster. I used it because it doesn't crash like A1111 and is a little faster and better optimized
You can download more models from the Civitai website, and depending on the model, some need different VAE settings or models. The Juggernaut X I use has an embedded VAE and my VAE setting is set to auto. Sorry, I am not at the computer this weekend to check exactly, but if you do exactly as in the video it should work
@@pixaroma thank you so much. I think I have to go deeper into learning, because with the same prompts I get spectacular results with MidJ and a disaster with Forge 🤷‍♂️
Flux models can do that most of the time without fixing, so if your video card can handle Flux you should try it. I have videos on Flux for both Forge and ComfyUI; Forge is still a work in progress. It works with Flux, but other things don't work yet; they are changing the interface and need time to fix everything
@@pixaroma Yes, you are right, but in this case I was talking only about SD. Flux can handle hands and eyes pretty well, but when two hands are close or touch each other, it makes mistakes. Anyway, I suppose it will be fixed very soon, like everything else in the AI realm :)
@@CsokaErno With SDXL I got OK results when I used ControlNet; there are some more nodes that can be used. I will see if I can gather enough info for an episode about that in the future
Great info. Also a quick tip: below the image there is a button to upscale using hires fix, just a quicker way to do it. 09:25 I think that option is new with Forge; it wasn't in A1111
Using RUN I got the following error: "Torch is not able to use GPU: add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check" Does this mean my GPU will not work for this? I have a GTX 1660 Ti with 6GB GDDR6. Not a new card, but it's getting me by. No good?
I am not sure; try adding what they recommend: add --skip-torch-cuda-test to COMMANDLINE_ARGS. So copy --skip-torch-cuda-test, open webui-user.bat with Notepad, and paste it after the equals sign where it says set COMMANDLINE_ARGS=, so it becomes set COMMANDLINE_ARGS= --skip-torch-cuda-test, and see if that works. For me it worked on 6GB of VRAM, but that was on an RTX 4090; it is still a new UI, so it might still have bugs.
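For reference, the relevant part of webui-user.bat would then look roughly like this (a sketch only; the rest of your file stays as it was, and the surrounding lines may differ in your install):

```bat
@echo off

rem Skip the CUDA check if Torch cannot detect your GPU
set COMMANDLINE_ARGS=--skip-torch-cuda-test

call webui.bat
```

Note this only disables the check; if the GPU genuinely cannot be used, generation will fall back to CPU and be much slower.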
I received the same error and tried the suggested command. I restarted run.bat and got a new error saying there is no NVIDIA driver on the system. Makes sense, because I have an AMD video card. Simply put, this is only for NVIDIA users.
Sorry, I don't know how to fix that; maybe it has something to do with the path where you saved it. Try putting it in a folder without spaces in the name. I'm not sure why you got that error; I tried on two computers and it worked for me
I don't think so; it is a different interface based on A1111, but made by different people, so it is not an update, it is a different UI. It also seems to work only with NVIDIA cards
@@pixaroma Thanks! Yeah, I do have an NVIDIA card, and it seems like everyone is using this one. My WebUI looks really outdated and has many things missing. Looks like I have to install it all over again? Should I uninstall the SD WebUI?