oh man I could've used this last week! To get img2img working I followed your original tutorial and was still so new to python that I just altered one of your original batch files to run the img2img script, I got it running no problem with -n_samples 1 and was able to run hundreds of iterations with no errors at 512x512! (on a 3080) thanks for making these videos!
Did exactly like you showed in the last video but was getting "No such file or directory: 'model 1.3.ckpt'". It was really frustrating, but now it's working fine. ❤Thank you so much for this brother.❤
12:33 The img2img input image resolution will be used if it's larger than your default. You may need to shrink it down to what your GPU can handle to avoid a RAM error. I tried using MS Paint to resize by pixel, but dragging images onto a pre-set Photoshop canvas worked better.
If you get a CUDA out-of-memory error after --precision full, try reducing samples with --n_samples 1, width with --W, or height with --H. Also make sure dimensions are multiples of 64. thx @neonxxp
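If it helps, here's a tiny Python helper to snap any requested size down to a valid multiple of 64 (just a sketch of the rule above, not something from the actual scripts):

```python
def snap64(x: int) -> int:
    """Round a requested width/height down to the nearest multiple of 64,
    never going below 64 (Stable Diffusion expects dimensions % 64 == 0)."""
    return max(64, (x // 64) * 64)

# e.g. a 500px request becomes a valid 448px value for --W or --H
print(snap64(500))  # 448
print(snap64(832))  # 832 (already a multiple of 64)
```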
@@BensMiniToons I don't know if this is a bug, but often, for example, if I write "blue eyes" as a prompt and get a result with blue eyes, then make a different prompt without specifying eye color, it will keep making people with blue eyes when I would prefer it to be random. I noticed this with many types of prompts, that it keeps including things from previous prompts. How do I make it not remember past prompts when making new images?
I know that this is not your usual content, but I would be immensely grateful if you'd keep us posted on the development of these scripts. I am having a blast generating images. I am able to generate 832x640 images with my RTX 2070 SUPER. I am seriously considering upgrading my card, since the jump from 512x512 to 800x600 was tremendous in terms of quality.
I found that quality was better too. I am waiting till December for the RTX 4000 series cards, but by then the memory needed may be less. Keep learning prompts and give it a few days. I spent a day looking for 20GB VRAM cards for $1,500 only to realize it's not worth it till the RTX 4090 comes out, and it will be cheaper due to overstocking after the GPU mining crash.
@@fostena The RTX 2070 Super you have now is 205 watts, the RTX 3070 is 220 watts, and both GPUs have a recommended 650-watt-or-greater power supply. Also, all GPUs are backwards compatible with PCIe ports, so a 3000 series will work as long as it fits in your computer case. Last thing: a faster GPU will not make bigger images. It's all about GPU RAM size, "VRAM" or video RAM.
@@BensMiniToons Thanks for the reply. I am aware that VRAM is the most important thing; I've observed how memory-hungry these processes are. In fact, I want to upgrade my GPU precisely because I want more VRAM. My dream is to upgrade to at least 16GB.
I thought the way it works is to only use 512x512, since the AI was trained with images at that resolution, then use the upscaler to get a bigger resolution... Or do you need the AI to generate something other than a 512x512 image?
Did anyone try to add the GFPGAN argument? After creating the image it says "GFPGAN not initialized, it must be loaded via the --gfpgan argument" Do you have any idea why it gives that error? PS I'm using the "SD HighRam" bat with the "init_img" argument.
Can we get updates on more scripts made by you with more functions? I really love the way you made the interface! Is there currently a way for me to render a range of different step counts, let's say from 100 to 120, without having to go -s101, -s102, -s103...?
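In the meantime you could generate one command line per step count with a few lines of Python, something like this (just a sketch; I'm assuming --ddim_steps is the long form of -s and that --seed is accepted, so double-check against your script's arguments):

```python
def step_commands(prompt: str, start: int, stop: int, seed: int = 42):
    """Build one command line per step count in [start, stop].
    Fixing the seed keeps the image comparable across step counts.
    Flag names here are assumptions, not verified against the script."""
    return [
        f'python optimized_txt2img.py --seed {seed} --ddim_steps {s} --prompt "{prompt}"'
        for s in range(start, stop + 1)
    ]

# print every command for steps 100 through 120
for cmd in step_commands("a castle on a hill", 100, 120):
    print(cmd)
```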
Tip based on my experience: if you don't see "(base)" in the Anaconda prompt, it's installed wrong. Also, I had to create a new Windows user (without a space in the username) to get it working.
Can anyone help? Is the -f command supposed to make your subsequent images either change completely at 1.0 or stay exactly the same at 0.0? I can't retain cohesiveness when trying to generate multiple images. Even at -f 0.0 it looks vastly different.
Perfect! Just having an issue forcing the number of steps, as it seems somewhat locked to the -f strength of the init image: it's only doing 37 steps, which was fine at the start. Any idea what I might have added?
Thanks so much for your hard work and tutorial. Even if it's not your ordinary content, is there a chance we'll get to see a low-ram inpaint and tutorial? I feel like it would be useful for art.
This was awesome! Could you make a video explaining how to set up iterations with image2image? Like a loop constantly using the new SD output as input or gradually increasing the strength parameter on a photo. Thanks!
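In case it helps while waiting for a video, here's roughly how such a loop could be sketched in Python (flag names like --init-img, --strength, and --outdir, and the output path loop_XXX/result.png, are all guesses; check them against the actual script):

```python
import subprocess

def strength_schedule(start: float, end: float, n: int):
    """Linearly ramp the img2img strength over n iterations."""
    step = (end - start) / max(n - 1, 1)
    return [round(start + i * step, 3) for i in range(n)]

def feedback_loop(init_img: str, prompt: str, n: int = 5):
    """Feed each output back in as the next input.
    Paths and flag names are hypothetical, not verified."""
    current = init_img
    for i, strength in enumerate(strength_schedule(0.3, 0.7, n)):
        outdir = f"loop_{i:03d}"
        subprocess.run(
            ["python", "optimized_img2img.py",
             "--init-img", current, "--prompt", prompt,
             "--strength", str(strength), "--outdir", outdir],
            check=True)
        # assumption: the script writes a single image into outdir,
        # which becomes the next iteration's init image
        current = f"{outdir}/result.png"
```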
I think my username having a space in it is causing it to not read things correctly when running the scripts, seeing as the error it throws is "[USERNAME] is not recognized as an internal or external command, operable program or batch file." Any workarounds, without having to change my username, so that I can actually use the low-end SD option?
First, thanks for these guides! They have worked great. I actually have the high-RAM scenario working on a 2060 and a 1070 GPU. One question on the high-RAM img2img instance: when I try to set -s 50, it still only runs 37 steps. Why do you think that is?
Your comment: "If you get cuda out of memory error after --precision full try reducing samples with --n_samples 1 or width with --W or height with --H". My question: can we put this in the Google Colab? Where? Thanks a lot!
I couldn't get it to work with your tutorial, but since then there's a GRisk GUI out that does everything for you, no coding, no commands, and it worked first time.
I was waiting for this video. Thanks a lot for the img2img. Only one problem: now I can't use --outdir to choose where the image will be saved. Is there a way to get that back?
Question from the newb audience: I got it working from your last video, and I changed that line you showed to make it "more efficient" so it would run on 8GB VRAM (3060 Ti). I have a fear, especially when I am out of my depth (which I am here), of breaking things, and I am FINE with how it works now. Will it still run on 8GB without having to set the HxW manually if I do this update?
Hey, thanks for the videos. Is there a way to do another prompt directly after finishing a prompt? I don't want to reload the model every time I make a prompt; it takes more time like that.
I keep getting the "no module named..." errors when trying to run Stable Diffusion. It started with "no module named cv2," and then after installing opencv, it became "no module named omegaconf." After working through about 10 of these errors, it just gave me a syntax error. I've tried reinstalling everything, recreating the conda environment, and reinstalling anaconda, but nothing has been able to fix it. Would you have any pointers you could give me?
This is actually a common error pattern when the conda environment doesn't get installed properly. I'd say start over from the beginning, make sure to watch the last video from the very start, and go through slowly; there's probably a particular step you keep missing.
I have the same problem! Perhaps it's due to a download issue. When you create the environment, does it freeze for quite a while with a few modules at 100% and then continue? Perhaps it's just missing stuff there.
I found the solution. For me it was a problem with the git dependency: I had written it in environment.yaml without aligning the dashes in Notepad. I corrected it, which fixed the environment creation, and now it works. ^^
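For anyone hitting the same thing: the list entries in environment.yaml are indentation-sensitive, and the dashes must line up under their parent key, roughly like this (the entries and versions here are just illustrative, not the exact file):

```yaml
dependencies:
  - python=3.8
  - pip
  - pip:
      - albumentations
      - -e git+https://github.com/CompVis/taming-transformers.git#egg=taming-transformers
```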
@@tingtingin I've got the same problem, and I've tried reinstalling everything 3 times now, carefully step by step from the beginning, but nothing works...
Hi guys. This is working amazingly, but I've got a problem and maybe you can help me. When using a prompt that's too long, it will instantly fail to create the output folder for the images and end the process. Some of the most beautiful results require a lot of words in the prompt. Is there any way to change how the output folders work so no folder is created with the prompt name (or a shorter version is used)?
So, I discovered how to fix it. Basically, Windows 10 had an update that lets you lift that restriction on folder names; I quickly found a guide on Google to eliminate the restriction. Thanks for your amazing work.
Any fixes for this error when providing an initial image? UnicodeDecodeError: 'utf-8' codec can't decode byte 0x89 in position 0: invalid start byte. I'm running on a Mac M1.
Thanks for all the links and the video. I'm happy with the Google Colab version. I wonder about the Diffuser Experiment part: is that for enhancing generated results? Thanks
Also, a note for some users: when installing all the packages, it gets stuck at "installing pip dependencies" for a long time, so I had to manually install each of them. Is there any way to ease that?
Hello, I'm following your video, getting to the "conda env create -f environment.yaml" step and receiving this error: "Collecting package metadata (repodata.json): done / Solving environment: failed / ResolvePackageNotFound: torchvision=0.12.0, pytorch=1.11.0, cudatoolkit=11.3". I've tried removing everything and going through the process fresh 3 times now, and I get the same error every time.
Thank you for the tutorial, it's great! I tested LowRam with the prompt "cat" and my PC freezes on the line "Loading model from model 1.3.ckpt". How much computer memory do I need to run this? I only have 8GB.
Hi, I tried this today and keep getting an error that says no NVIDIA GPU detected / no NVIDIA drivers detected. Is there any way to fix this? I've run through the tutorial several times.
Hey, is there a way I can see which seeds have been used? I use the low-RAM img2img version and it always starts with global seed 42, but when I do multiple outputs I expect it to use different seeds; it doesn't seem to show them in the prompt.
So was it any GPU, or any NVIDIA GPU? Or are we talking AMD GPUs as well? I'm getting a runtime error: no NVIDIA driver found. I have a big sad. Regardless of my issue, you make a pleasant tech walkthrough.
Hey Ting, just wanted to see if you were interested in getting involved with the latent space walk animations that can be made using Stable Diffusion. Andrej Karpathy apparently has a notebook on it on HuggingFace. I haven't looked into it yet, but most of this is over my head and your tutorials have been very helpful!
It doesn't seem to be working on Radeon GPUs. I have a Radeon RX 570 and I get this error: "RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from [link]"
I followed your previous instructional video for running SD locally and had great success. Today I followed your instructions here but am now having the same problem as many others with only a green square being the output. I can't seem to get the precision command to work with the updated prompt system. Is there a workaround for this? Thank you!
Thank you! I got it working from your previous video, but it maxes out my 12GB on high_ram at the default resolution. I had to go --W 384 --H 512; that almost tops it out too, but not enough to stop it.
Did you download the new files in this vid? The base SD straight from the SD devs doesn't have samples set to one by default, which is too much for most systems; you can fix that by using --n_samples 1. The new script doesn't have this issue, however, and should work out of the box.
Can you please advise on how we get the 6-image grid back by default? I much prefer the defaults from the last video; this now only makes one image at a time.
You know, if img2img doesn't work, remember that your picture name must be one word and end with a dot plus the file type. Example: Icecream.png or .jpg
Thanks for your tutorial, but there is something I don't understand: how can I use full precision with SD HighRam? Adding "-precision full" to the end of my prompt isn't recognized as an existing argument.
NVIDIA GeForce GTX 750 Ti: is it going to work on this? The free Stable? 18GB total, 2GB dedicated and 16GB shared? Thank you. PS: You also mention to delete everything if there is an error. I can uninstall Anaconda easily, but how do you delete all the other stuff that got downloaded and installed? Where do you find that? I am trying to clean a PC with AMD.
According to Google the 750Ti only has 2GB of GPU RAM, 8GB of GPU RAM is advised. You might get something using the low RAM version and setting low image dimensions.
About all the changes you talk about at 1:30: wouldn't it be easier to just put the stuff in a GitHub repo, so people can watch the project and get notified when changes are made?
I have hlky's latest distribution installed and I noticed it includes "optimizedSD" (the version that uses less vram), but there doesn't appear to be any instructions regarding it. Do you happen to know if this is something that is automatically embedded? I still get memory errors if I raise the resolution any higher so I'm not sure if it's working or how to use it.
Yes, it's embedded via the .bat file, so you don't really have to touch optimizedSD once it's in place. Also, I'm not sure what resolution you want your images to be, but the usual approach is to generate them small, around 512, and then use another AI, a free image upscaler, to increase the resolution to 1024, 1920, etc.
Great video, so helpful. I'm running the LowRam .bat on a 4GB 1050 Ti and a single sample takes just about 3 minutes with the same quality as DreamStudio. Just two questions: is there a way to manually set the name of the output directory so it doesn't use the given prompt text? Large prompt texts cannot be saved as a directory name. And is it possible to get a txt file with all the info of the generated image (seed, prompt, etc.)?
@@Dmitriy108V Hi, by now I always use --outdir and --from-file to manage prompts and directories, so the first question is resolved. But there's no way to save the log using the optimized SD executables. I noticed that dream.py has a function called write_log_message that allows it, but it's not implemented in optimized_img2img.py or optimized_txt2img.py.
@@flipation That has to do with the Windows file name length limitation or something. I solved it completely by just moving the SD folder from c/users/username/stablediffusion to d/stablediffusion. It must not be in a users subfolder.
Hi, I just discovered Stable Diffusion, so I am a total noob. I also have a 1050 Ti and will download the low-RAM version. I want to know if this version is capable of img2img.
Hi! I've followed all the steps and it's working, BUT I have this error: "RuntimeError: CUDA driver initialization failed, you might not have a CUDA gpu". I have an NVIDIA Quadro K5100M (8GB of VRAM) and I'm able to run other programs using that GPU. Does anybody know what's going on? I'm so frustrated :(
I get this error: "RuntimeError: [enforce fail at C:\cb\pytorch_1000000000000\work\c10\core\impl\alloc_cpu.cpp:81] data. DefaultCPUAllocator: not enough memory: you tried to allocate 14745600 bytes." in the low-RAM version with the command "--W 128 --H 128 --n_samples 1 --prompt "dog photo"". Width and height are multiples of 64, and I have 6GB of GPU memory (GTX 980 Ti), so it should work, right? I don't know why I get this message. I spent hours to get to this point and now it's not working at the last step :(. I would be glad to get a tip from you, because I don't have a clue why it's not working. PS: I got some errors earlier in the installation process, but this video helped with everything else. Thanks for your awesome help and tutorials. Also, I really like your content style; it's very unique and quite interesting/entertaining to watch. Keep it up, I'm sure you'll get a lot of viewers in the future!