If anyone is watching this because you've received the line 'No module 'xformers'. Proceeding without it.' when running the webui-user.bat file, all you need to do is open the file in Notepad++ and modify your set COMMANDLINE_ARGS line to the line below (NOTE: do NOT put quotes around the flag; the launcher will not recognize it with them): set COMMANDLINE_ARGS=--xformers
LoRA does not add new layers to the original model. Instead, it introduces additional weights in a low-rank decomposition format and integrates them into the existing layers of the model.
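To make that concrete, here is a minimal NumPy sketch of a low-rank update merged into one existing weight matrix (the sizes d and r are made-up illustrative values, not taken from any particular model):

```python
import numpy as np

# Hypothetical sizes: a d x d projection weight, LoRA rank r (r << d).
d, r = 768, 4

W = np.random.randn(d, d)         # frozen pretrained weight, left untouched
A = np.random.randn(r, d) * 0.01  # trainable down-projection (r x d)
B = np.zeros((d, r))              # trainable up-projection (d x r), zero-init
                                  # so the adapter starts as a no-op

# The low-rank product B @ A is added into the existing layer's weight;
# no new layer is appended to the forward pass.
W_effective = W + B @ A

trainable = A.size + B.size       # 2 * d * r parameters to train
full = W.size                     # d * d parameters for a full fine-tune
print(f"trainable: {trainable} vs full fine-tune: {full}")
```

Because B is initialized to zero, W_effective equals W before any training, and only the small A and B factors are updated (6,144 parameters here versus 589,824 for the full matrix).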
It is a great point, but in many cases ethical issues need to be grounded in a group of at least two people, and they scale in complexity with group size, timeline, and social structure, all the way up to a country's survival and its value system, which spans multiple levels of complexity. Under capitalism, money has been the driving force and the default yardstick for identifying value, which might not lead to the best outcome, given how we have been trained to evaluate ethical standards.
Is there a way to run Automatic1111 or ComfyUI locally with ControlNet while abstracting the Stable Diffusion layer using an API to Hugging Face? The idea is to run the user interface (Automatic1111 or ComfyUI) and ControlNet locally on my machine while offloading the heavy lifting (the actual image generation by Stable Diffusion) to an API like Hugging Face. I just want to benefit from the flexibility and control offered by ControlNet while not being limited by local hardware for the image generation process.
I think usage and budget have a big effect on what you should use/have. 1. Even in 2024, there is still no direct Stable Diffusion equivalent that works great on AMD GPUs. This is why, with SDXL, an 8GB RTX 3060 is on par with a 16GB RX 7900 XT. 2. Not everyone has $500-1200 for a single GPU; they may just be playing around with it, not using it professionally. This is where an 8GB RTX 3050 ($220-260) can be used, taking ~30 sec per image, or a refurbished 12GB RTX 2060, a little faster than the 3050 for less money (under $200).
Idk, maybe it made the model schizophrenic; maybe this is how schizophrenia works in humans too. Maybe studying stable diffusion more and more in the context of the human psyche and thinking process will make literal sense. But yeah, let's see how it goes. Really great work, buddy, loving it, thank you!
Wow! This is the best explanation I have read/heard, and I have been looking through a lot of papers on aesthetic personalization. I am a postgraduate researcher writing a paper on this. Is there a way to get in touch soon-ish? My use case is very specific and I could really use some input.
I would say it is the difference between stracciatella ice cream and vanilla ice cream with chocolate sprinkles. Both are based on vanilla, but with stracciatella you trained the whole ice cream to be different: whatever you do, you will always get stracciatella, and it would be a hell of a job to get plain vanilla back out of that box. With regular vanilla, you can add chocolate sprinkles until it is satisfactory to you, or just omit them according to your taste. That's a lot faster, because you just add a bit on top of the big model without losing the big model, whereas with stracciatella you change the model into something completely different and you can't fine-tune it back anymore.
I appreciate the technical explanation but, in all honesty, I boiled it down after a single picture: ControlNet prioritizes image-to-image for denoising the subject, and text-to-image fills in the details.
I'm not sure the numbers about Dreambooth downloads are accurate. It seems he got that number from the number of "checkpoints" (aka full-size models) downloaded from Civitai, but I'm not so sure most of those were made with Dreambooth; a lot (if not the majority) are model merges, which is not the same thing. Just thought I'd mention that.
There are a few different ways to remove backgrounds in the stock AUTOMATIC1111 interface. Is there a reason you downloaded an additional script? Are the stock functions not working? Just curious, because while I was previously able to change backgrounds, now I seem to get a mixed bag of results, which by no means compare to your time of 11 minutes. A video, or even just a comment, on why this is the background method you're choosing would be interesting. Thanks for the how-to, bro. Peace
I couldn't follow these instructions because the launch.py file calls a separate modules\launch_utils.py, and the setup of that file is different from what's shown here... BUT if I just added the argument --xformers to the webui(.bat) command line, it did the same thing. So just launch SD by typing 'webui --xformers', without the quotes.
This worked for me, thanks! I added 'webui --xformers' into the 'webui-user.bat' file so it is applied automatically when SD is started. As a side note, the reason I needed xformers: my 24GB RTX 3090 was running out of memory even on images with dimensions under 500x500, but now I can upscale with no problem, even 100x the image size.
This kind of tutorial gets a fat dislike from me. You seem smart enough, but why did you place your talking head in the upper-left corner? That is a big fail.