
Stable Diffusion Install Fixes, Tips and Tricks, and Img2Img

TingTingin
6K subscribers
16K views

Published: 31 Oct 2024

Comments: 213
@danknation9135 2 years ago
I came for stable diffusion, I stayed for TingTingin
@Nickknows00 2 years ago
With the amount of new developments in AI coming out each week, I would actually love it if you did more AI videos, you are super helpful
@OwenPrescott 2 years ago
He should create a second channel, he could easily become a go-to source for AI stuff on YouTube
@TheGat2012 2 years ago
honestly I've tried like 5 different versions and doing it the way he showed in his last video is the only way I've gotten it to work so far.
@padinja89 2 years ago
I agree! I've tried so many versions and everything, and I nearly gave up, but thanks to Ting I got this working! So freaking helpful!
@ItsRoov 2 years ago
Hey thanks for this video as well, found you via the last SD video, but love your other content, so please keep making it - gonna stay subbed!
@infinitehush 2 years ago
Oh man, I could've used this last week! To get img2img working I followed your original tutorial and was still so new to Python that I just altered one of your original batch files to run the img2img script. I got it running no problem with --n_samples 1 and was able to run hundreds of iterations with no errors at 512x512! (on a 3080) Thanks for making these videos!
@lastyhopper2792 1 year ago
what did you alter?
@SgtIrradiated 2 years ago
Dude, you are literally one of the best YouTubers to listen to and watch, you have extremely large potential to grow.
@dreamzdziner8484 2 years ago
Did exactly like you showed in the last video but was getting "No such file or directory: 'model 1.3.ckpt'". It was really frustrating. But now it's working fine. ❤Thank you so much for this brother.❤
@dreamzdziner8484 2 years ago
@@chogeth2798 I just removed the already created ldm conda environment as shown here. Then did everything like before. Hopefully it will work.
@nebaprincewill7032 2 years ago
With all the software tutorials that exist on YT, yours just created that "light bulb illumination" moment in my head. Thanks for taking the time.
@kessel18 1 year ago
Thanks for the prompt, so I could skip to the part I'd like to know about. Very thoughtful, thank you.
@avmsteve 2 years ago
Thank you, the "fixes for common errors" section sorted it for me.
@Armitage1982 2 years ago
No worries mate, your videos are dynamic and fun. Keep it up ;-)
@jullevv 2 years ago
thank you a billion times for this video, it works now!
@MysteryFinery 2 years ago
dude, you're perfect for this. you gotta grab your opportunity.
@nihatk9483 2 years ago
That was an amazing video and I feel like I can actually get started using this software. Thank you so much!
@NeonXXP 2 years ago
12:33 The img2img input image resolution will be used if it's larger than your default. You may need to shrink it down to what your GPU can handle to avoid a RAM error. I tried using MS Paint to resize by pixel; dragging images onto a pre-set Photoshop canvas worked better.
@elysiryuu 2 years ago
IIRC you just tell SD to apply --h 512 --w 512 and it'll work with larger pictures (SD adjusts it)
@tingtingin 2 years ago
If you get a CUDA out of memory error after --precision full, try reducing samples with --n_samples 1, or width with --W or height with --H. Also make sure dimensions are multiples of 64. Thx @neonxxp
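For anyone unsure where those flags go, an invocation might look roughly like this (a sketch only; the script path matches the stock CompVis scripts/txt2img.py, but check your local copy, since the batch files in the video may wrap a different script):

```shell
# reduce memory pressure: one sample per batch, a smaller canvas,
# and dimensions kept at multiples of 64
python scripts/txt2img.py --prompt "a castle at sunset" \
    --precision full --n_samples 1 --W 448 --H 448
```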
@NeonXXP 2 years ago
Make sure your dimensions are multiples of 64 to avoid errors.
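As a quick sanity check before passing sizes to --W/--H, a small helper (purely illustrative, the function name is made up) can snap a requested size down to the nearest multiple of 64:

```python
def snap64(dim: int) -> int:
    """Round a requested dimension down to the nearest multiple of 64."""
    if dim < 64:
        raise ValueError("need at least 64 pixels per side")
    return (dim // 64) * 64

print(snap64(500))  # 448
print(snap64(512))  # 512
```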
@richvip93 2 years ago
Can we put this in the Google Colab? Where? Thanks a lot!
@jasonstetsonofficial 2 years ago
Video on how to make textual inversion work with low ram ???? 😍😍
@NeonXXP 2 years ago
Top tip using the updated version - Press UP arrow key to enter previous input.
@lutosfera5008 2 years ago
Stable diffusion 1.5 please!! Thank you for your work!
@androsforever500 2 years ago
Please do more videos on Prompts and other tricks to get more realistic and detailed results etc.!! Big like on these videos you're making!!!
@BensMiniToons 2 years ago
Try adding: global illumination, ray tracing, realistic shading, extreme detail light, ultra realistic. Or use just one or two with the same seed.
@androsforever500 2 years ago
@@BensMiniToons I don't know if this is a bug, but often, for example, if I write "blue eyes" as a prompt and get a result with blue eyes, then make a different prompt without specifying eye color, it will keep making people with blue eyes when I would prefer it to be random. I noticed for many types of prompts that it will keep including things from previous prompts. How do I make it not remember past prompts when making new images?
@peoplez129 2 years ago
TingTing is the tech KingKing.
@izhguzin 2 years ago
finally no censorship!
@LunacyMoon 2 years ago
Great video! Keep up the good work. Just wondering, is there a way to change the directory of the Anaconda install in the C:/Users drive?
@anisghattassi 2 years ago
You're right. Understand bit by bit, then try to make a simple piece of software. You will learn as it goes on. Even if it's havoc, don't worry.
@fostena 2 years ago
I know that this is not your usual content, but I would be immensely grateful if you'd keep us posted on the development of these scripts. I am having a blast generating images. I am able to generate 832x640 images with my RTX 2070 SUPER. I am seriously considering upgrading my card, since the jump from 512x512 to 800x600 was tremendous in terms of quality.
@BensMiniToons 2 years ago
I found that quality was better too. I am waiting till December for the RTX 4000 series cards, but by then the memory needed may be less. Keep learning prompts and give it a few days. I spent a day looking for 20GB VRAM cards for $1,500 to realize it's not worth it till the RTX 4090 comes out, which will be cheaper due to overstocking after the GPU mining crash.
@fostena 2 years ago
@@BensMiniToons I'm keeping an eye on the 3000 SUPER lineup. If they don't require too much power (and are supported by my Motherboard) I'll upgrade.
@BensMiniToons 2 years ago
@@fostena The RTX 2070 SUPER you have now is 205 watts; the RTX 3070 SUPER is 220 watts. Both GPUs are recommended a 650 watt or greater power supply. Also, all GPUs are backwards compatible with PCIe ports, so a 3000 series will work as long as it fits in your computer case. Last thing: a faster GPU will not make bigger images. It's all about GPU RAM size, "VRAM" or video RAM.
@fostena 2 years ago
@@BensMiniToons Thanks for the reply. I am aware that VRAM is the most important thing. I observed how memory hungry these processes are. In fact I want to upgrade my GPU precisely because I want more VRAM. My dream is to upgrade it to at least 16GB
@lastyhopper2792 1 year ago
I thought the way it works is to only use 512x512, since the AI was trained with images at that resolution, then use the upscaler to get a bigger resolution... Or do you need the AI to not generate a 512x512 image?
@cidshroom 2 years ago
Love the outfit, and the accent, reminds me of when youtube used to be more fun
@marcokneisel1145 2 years ago
good fixing tutorial for "ALL GPUs" ;)
@silentkunai1 2 years ago
Woah, great video mate!
@ozguraltay 2 years ago
Did anyone try to add the GFPGAN argument? After creating the image it says "GFPGAN not initialized, it must be loaded via the --gfpgan argument". Do you have any idea why it gives that error? PS: I'm using the "SD HighRam" bat with the "init_img" argument.
@MrEtnavyguy 2 years ago
I want to know the answer as well
@androsforever500 2 years ago
Can we get updates on more scripts made by you with more functions? I really love the way you made the interface! Is there currently a way for me to render different steps, let's say from 100 to 120, without having to go -s101, -s102, -s103...?
@FLEXTORGAMINGERA 2 years ago
What's up, thanks for the video 👍 (it's me, Flex)
@OwenPrescott 2 years ago
Tip based on my experience: if you don't see the (base) in the Anaconda prompt, it's installed wrong. Also I had to create a new Windows user (without a " " space in the username) to get it working.
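A quick way to test whether a path will trip up these batch scripts, since a space in the Windows username is a common culprit, is sketched below (illustrative only; the helper name is made up):

```shell
# returns "yes" if the given path contains a space, "no" otherwise
has_space() { case "$1" in *" "*) echo yes ;; *) echo no ;; esac; }

has_space "C:/Users/John Smith/anaconda3"   # yes -> expect trouble
has_space "C:/Users/JSmith/anaconda3"       # no
```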
@mytyavarivoda7710 2 years ago
awesome suit man
@dennisS0500 2 years ago
thanks man, it worked. Now I have to figure out how to make vtuber lewd fanart.
@hackandtech24 2 years ago
You need to do vids of GPT-3 and other AI setups too, dude, honestly. You will capitalize a lot.
@dictater_tots 2 years ago
Can anyone help! Is the -f command supposed to allow your subsequent images to change either completely at 1.0 or stay exactly the same at 0.0? I can't retain cohesiveness when trying to generate multiple images. Even at -f 0.0 it looks vastly different.
@selftransforming5768 2 years ago
Perfect! Just having an issue forcing the number of steps, as it seems somewhat locked to the -f strength of the init image, only doing 37 steps, which was fine at the start. Any idea what I might have added?
@caramel7149 2 years ago
Thanks so much for your hard work and tutorial. Even if it's not your ordinary content, is there a chance we'll get to see a low-ram inpaint and tutorial? I feel like it would be useful for art.
@jackieclan815 2 years ago
Yeah, make more how-to Stable Diffusion videos please! Like how to add different weights and whatnot!
@ilikewater666 2 years ago
Do we not need to change pytorch-lightning to 1.5.0 in the new environment.yaml?
@driesmichiels4184 2 years ago
This was awesome! Could you make a video explaining how to set up iterations with image2image? Like a loop constantly using the new SD output as input or gradually increasing the strength parameter on a photo. Thanks!
@tingtingin 2 years ago
this is a feature that will be added to the dream.py script by the dev eventually
@driesmichiels4184 2 years ago
@@tingtingin ok great! Thanks a lot!
@kalebdavis7080 2 years ago
I think my username having a space in it is causing it to not read things correctly when running the scripts, seeing as what it throws up is "[USERNAME] is not recognized as an internal or external command, operable program or batch file." Any workarounds, without having to change my username, so that I can actually use the low end SD option?
@dilshanonlive7413 2 years ago
TOP
@selftransforming5768 2 years ago
Indeed, the only issue is not being able to control the -f strength of the image, as it's messing with the steps. Any idea? :(
@tummyplatter6688 2 years ago
really useful
@allenwilliams4736 2 years ago
First, thanks for these guides! They have worked great. I actually have the high RAM scenario working on a 2060 and a 1070 GPU. One question on the high RAM img2img instance... when I try to set -s 50, it still only runs steps at 37. Why do you think that is?
@agarwalpublicintercollege7890 2 years ago
Is the master channel of the setup peaking into the red? If you have any peaks over 0 dB it will distort. Please do let us know.
@richvip93 2 years ago
Your comment: "If you get cuda out of memory error after --precision full try reducing samples with --n_samples 1 or width with --W or height With --H". My question: can we put this in the Google Colab? Where? Thanks a lot!
@theofficialstig 2 years ago
I couldn't get it to work with your tutorial, but since then there's a GRisk GUI out that does everything for you, with no coding and no commands, and it worked first time.
@enderay9060 2 years ago
I was waiting for this video. Thanks a lot for the img2img. Only one problem: now I can't use --outdir to choose where the image will be saved. Is there a way to fix that?
@hoaxygen 2 years ago
He listed the commands briefly and I think it's -l now (not sure if an 'i' or an 'L')
@enderay9060 2 years ago
@@hoaxygen That is for the input, I'm talking about the output.
@MontroseChloe 2 years ago
Question from the newb audience: I got it working from your last video, and I changed that line you showed to make it "more efficient" so it would run on 8GB VRAM (3060 Ti). I have a fear, especially when I am out of my depth (which I am here), of breaking things. I am FINE with how it works now. Will it still run on 8GB without having to set the HxW manually if I do this update?
@tingtingin 2 years ago
Copy the folder and add the new files in the copy. If it works, great; if not, delete the copy and go back to the old version.
@MontroseChloe 2 years ago
​@@tingtingin It works perfectly - thank you so much!
@lordkiraayt2803 2 years ago
Hey, thanks for the videos. Is there a way to do another prompt directly after finishing a prompt? Because I don't want to reload the model every time I make a prompt; it takes more time like that.
@frozenhamburgers9925 2 years ago
I keep getting the "no module named..." errors when trying to run Stable Diffusion. It started with "no module named cv2," and then after installing opencv, it became "no module named omegaconf." After working through about 10 of these errors, it just gave me a syntax error. I've tried reinstalling everything, recreating the conda environment, and reinstalling anaconda, but nothing has been able to fix it. Would you have any pointers you could give me?
@NeonXXP 2 years ago
When this happened to me I uninstalled Anaconda and started again using the files already downloaded.
@tingtingin 2 years ago
This is actually a common error pattern when the conda environment doesn't get installed properly. I'd say start over from the beginning, and make sure to watch the last video from the very beginning and go through slowly; there's probably a particular step you keep missing.
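The clean-slate reset described here amounts to something like the following (command fragment; the environment name `ldm` and the environment.yaml file come from the CompVis repo used in the video):

```shell
# remove the half-installed environment, then rebuild it from the yaml
conda env remove -n ldm
conda env create -f environment.yaml
conda activate ldm
```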
@R-tex-xm8vt 2 years ago
I have the same problem! Perhaps it's due to a download issue. When you create the environment, does it freeze for quite a moment with a few modules at 100% and then continue? Perhaps it's just missing stuff there...
@R-tex-xm8vt 2 years ago
I found the solution: for me it was a problem with the -git dependency. I wrote it in the environment.yaml without aligning the dashes in Notepad. I corrected it, which fixed the environment creation, and now it works. ^^
@ZeRcHRules 2 years ago
@@tingtingin I've got the same problem, and I've tried to reinstall everything 3 times now, step by step, carefully from the beginning, but nothing works...
@bbbbbb4107 2 years ago
I'm getting AssertionError: Torch not compiled with CUDA enabled
@diosaangelatop 2 years ago
Hi guy. This is working amazingly, but I got a problem and maybe you can help me. When using a prompt that's too long, it will instantly fail to create the output folder for the images and end the process. Some of the most beautiful results require a lot of words in the prompt. Is there any way to change how the output folders work so no folder is created with the prompt name (or a shorter version)?
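One illustrative workaround (not from the video; the helper name and length limit are made up) is to derive a short, filesystem-safe folder name from the prompt yourself and pass it to the script via --outdir:

```python
import re

def prompt_to_dirname(prompt: str, max_len: int = 50) -> str:
    """Turn a long prompt into a short, filesystem-safe folder name."""
    # collapse every run of non-alphanumeric characters into one underscore
    safe = re.sub(r"[^A-Za-z0-9]+", "_", prompt).strip("_")
    return safe[:max_len].rstrip("_") or "output"

print(prompt_to_dirname("a highly detailed portrait, 8k, trending!"))
```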
@diosaangelatop 2 years ago
So, I discovered how to fix it. Basically, Windows 10 had an update that allows you to change that restriction on naming folders; I quickly found a guide on Google to eliminate that restriction. Thanks for your amazing work.
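The Windows setting usually involved here is the long-path limit; one way to enable long paths from an elevated command prompt is roughly the following (command fragment; requires admin rights, a reboot may be needed, and individual apps must still opt in):

```shell
reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v LongPathsEnabled /t REG_DWORD /d 1 /f
```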
@fightingdreamers77 2 years ago
Is there a way you can show how to use masking and inpainting? All the images I use img2img on end up looking... off.
@idmonyildiz1581 2 years ago
The low RAM version still exceeds the VRAM and outputs a "RuntimeError: CUDA out of memory." A fix would be greatly appreciated 🙏
@androsforever500 2 years ago
Where did you originally get the Easy Lstien program from? It works great! How do I update it?
@NeonXXP 2 years ago
InPainting next please! thank you :)
@emmajanemackinnonlee 2 years ago
Any fixes for this error on providing an initial image? "UnicodeDecodeError: 'utf-8' codec can't decode byte 0x89 in position 0: invalid start byte". I'm running on Mac M1.
@morizanova 2 years ago
Thanks for all the links and the video. I'm happy with the Google Colab version. I wonder about the Diffuser Experiment part, is that for enhancing generated results? Thanks
@int8float64 2 years ago
Also note, for some users, when installing all the packages it gets stuck at "installing pip dependencies" for a long time, so I had to manually install each of them. Is there any way to ease this?
@hoaxygen 2 years ago
I had to install the previous one and then update with the new files
@RandomClips22023 2 years ago
I get a problem called "No module named 'imwatermark'". I installed it with pip, but it still doesn't work.
@androsforever500 2 years ago
TingTingin, I don't know if this is a bug, but often, for example, if I write "blue eyes" as a prompt and get a result with blue eyes, then make a different prompt without specifying eye color, it will keep making people with blue eyes when I would prefer it to be random. I noticed this for many types of prompts, that it will keep including things from previous prompts. How do I make it not remember past prompts when making new images?
@androsforever500 2 years ago
OK, I found the solution: it's to put quotation marks around the prompt, like this: "xyz". Or it might be that I set --strength to 0.1.
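The quoting fix works because the shell splits an unquoted prompt into separate arguments, so the script only sees the first word. A tiny demonstration (the helper function is made up for illustration):

```shell
# count how many arguments actually reach the program
count_args() { echo $#; }

count_args blue eyes portrait     # 3 separate arguments
count_args "blue eyes portrait"   # 1 argument, as intended
```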
@evanheuermann3588 2 years ago
Hello, I'm following your video, getting to the "conda env create -f environment.yaml" step, and receiving this error: "Collecting package metadata (repodata.json): done / Solving environment: failed / ResolvePackageNotFound: torchvision=0.12.0, pytorch=1.11.0, cudatoolkit=11.3". I've tried removing everything and going through the process fresh 3 times now, and I get the same error every time.
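ResolvePackageNotFound usually means conda cannot see the channel that hosts those packages. One thing worth trying (a sketch only; the versions are taken from the error above, and the channel choice is an assumption) is installing them explicitly from the pytorch channel:

```shell
# pull the exact pinned versions from the pytorch channel into the ldm env
conda install -n ldm -c pytorch pytorch=1.11.0 torchvision=0.12.0 cudatoolkit=11.3
```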
@M4DN3355 2 years ago
Can we use our own portrait or photograph as the image to modify or stylize?
@lntcmusik 2 years ago
So in the low ram version I still need to use the old commands like "--prompt" and all that?
@ksanag3426 2 years ago
Thank you for the tutorial, it's great! I tested LowRam with the prompt "cat" and my PC freezes on the line "Loading model from model 1.3.ckpt". How much memory do I need to run this? I only have 8 GB.
@aidengreen343 2 years ago
Hi, I tried this today and keep getting the error that says no NVIDIA GPU detected, no NVIDIA drivers detected. Is there any way to fix this? I've run through the tutorial several times.
@aidengreen343 2 years ago
Idk, maybe the PyTorch shaders thing I changed didn't download or something.
@yungacid1 2 years ago
I'm having an issue with the GPU memory allocation, even though I switched the resolutions in the text file. What do I do?
@Manoel_ketchup 2 years ago
Pls help: I type "cd D:\stablediffusion\stable-diffusion-main" but it doesn't get to the right directory, it keeps saying "C:\Users\Kaua"
@driesmichiels4184 2 years ago
Hey, is there a way I can see which seeds have been used? I use the low RAM version of img2img and it always starts with global seed 42, but when I do multiple outputs I expect it to use different seeds, and it doesn't seem to show them in the prompt.
@tingtingin 2 years ago
Easiest way is to specify the seed yourself with --seed
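The reason a fixed --seed reproduces an image is that the sampler's random draws become deterministic once seeded. The same idea in miniature (the stand-in function is made up; real SD seeds torch, not the random module):

```python
import random

def fake_sample(seed: int, n: int = 4) -> list:
    """Stand-in for a sampler: its output is fully determined by the seed."""
    rng = random.Random(seed)
    return [rng.randint(0, 255) for _ in range(n)]

print(fake_sample(42) == fake_sample(42))  # True: same seed, same result
```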
@xSawdustx 2 years ago
So was "any GPU" any NVIDIA GPU? Or are we talking AMD GPUs as well? I'm getting a runtime error, no NVIDIA driver found. I have a big sad. Regardless of my issue, you make a pleasant tech walkthrough.
@hoaxygen 2 years ago
It has to be an NVIDIA GPU because the dependency PyTorch uses CUDA, which is proprietary.
@xSawdustx 2 years ago
@@hoaxygen Thanks for the reply man. That's a huge bummer, I'll have to be patient then.
@dictater_tots 2 years ago
Hey Ting, just wanted to see if you were interested in getting involved with the latent space walk animations that can be made using Stable Diffusion. Andrej Karpathy apparently has a notebook on it on HuggingFace. I haven't looked into it yet, but most of this is over my head and your tutorials have been very helpful!
@vfxtricio 2 years ago
Is there any way to get GFPGAN integrated into SD?
@MrEtnavyguy 2 years ago
This is my top question.
@mcvalen86 2 years ago
Excellent video, it all works for me. Can you make a tutorial on creating animated videos with SD??? Thank you so much!
@MicroDweller 2 years ago
I keep getting "can't open file 'optimizedSD\optimized_txt2img.py': [Errno 2] No such file or directory". What am I doing wrong?
@0Piedro0 2 years ago
It doesn't seem to be working on Radeon GPUs. I have a Radeon RX 570 and I get this error: "RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from [link]"
@slashkeyAI 2 years ago
You forgot to mention that you have to delete src from the stable-diffusion-main folder after removing ldm with the conda env remove command.
@lastyhopper2792 1 year ago
Is it after this step? 6:48
@carterknudsen525 2 years ago
I followed your previous instructional video for running SD locally and had great success. Today I followed your instructions here but am now having the same problem as many others, with only a green square being the output. I can't seem to get the precision command to work with the updated prompt system. Is there a workaround for this? Thank you!
@tingtingin 2 years ago
Are you using the high ram one?
@tingtingin 2 years ago
If so, I believe most of the GPUs that have this green error need to use the low RAM one.
@carterknudsen525 2 years ago
@@tingtingin thank you! Yeah, it seems I'd run the high ram one this time by mistake.
@PleaseOpenSourceAI 2 years ago
Thank you! I got it working from your previous video, but it tops out my 12GB on high_ram at the default resolution. I had to go --W 384 --H 512, and that almost tops it out too, but not enough to stop it.
@tingtingin 2 years ago
Did you download the new files in this vid? The base SD direct from the SD devs has a samples default that is too much for most systems; you can fix it by using --n_samples 1. However, the new script doesn't have this issue and should work out of the box.
@PleaseOpenSourceAI 2 years ago
@@tingtingin Just tried it 👍. Yep, it's a bit better now; it got up to 904x512, and the first run allowed for 1024x512 somehow.
@ilikewater666 2 years ago
Can you please advise on how we get the 6-image grid back by default? I much prefer the defaults from the last video; this now only makes one image at a time.
@tingtingin 2 years ago
Send -g
@androsforever500 2 years ago
How do I use GFPGAN with the high RAM bat?
@clashingsouls9352 2 years ago
Has anyone successfully integrated the GFPGAN dependency for direct usage? Getting errors :\
@MrEtnavyguy 2 years ago
I can't get it to work either. Hoping someone knows how.
@jackieclan815 2 years ago
You know, if img2img doesn't work, remember that your picture name must be one word and end with a dot plus whatever file type it is. Example: Icecream.png or .jpg
@Mnemesia 2 years ago
Thanks for your tutorial, but there is something I don't understand: how can I use full precision with SD HighRam? Adding "-precision full" to the end of my prompt isn't recognized as an existing argument.
@tingtingin 2 years ago
It's 2 dashes
@BensMiniToons 2 years ago
Any clue how to find the seeds after a batch sample?
@aybukecakmak813 1 year ago
txt2img works but img2img doesn't generate. How can I fix it?
@MarinaArtDesign 2 years ago
NVIDIA GeForce GTX 750 Ti. Is it going to work on this? The free Stable? 18GB total (2 dedicated, 16GB shared)? Thank you. PS: You also mention deleting everything if there is an error. I can uninstall Anaconda easily, but how do you delete all the other stuff that got downloaded and installed? Where do you find that? I am trying to clean a PC with AMD.
@NeonXXP 2 years ago
According to Google the 750 Ti only has 2GB of GPU RAM; 8GB of GPU RAM is advised. You might get something using the low RAM version and setting low image dimensions.
@MarinaArtDesign 2 years ago
@@NeonXXP Adapter information: Chip type: NVIDIA GeForce GTX 750 Ti. DAC type: Integrated RAMDAC. Adapter string: NVIDIA GeForce GTX 750 Ti. BIOS information: version 82.7.32.0.5c. Total available graphics memory: 18408 MB.
@fcolecumberri 2 years ago
About all the changes you talk about at 1:30: wouldn't it be easier to just put the stuff in a GitHub repo, so people can watch the project and get notified when changes are made?
@tingtingin 2 years ago
It's on GitHub
@gregs2649 2 years ago
Where can you change the number of iterations from the default 37? When loading -I?
@gregs2649 2 years ago
When I type -s150 it does s112 when inputting a 512x1024 file.
@visualdestination 2 years ago
Thanks
@Malukito17 2 years ago
Does it work with diffusion-webui?
@hsantonin65 2 years ago
How can we do inpainting?
@ATOMICJUNKY 2 years ago
Hey man, awesome video. Is there a possibility of a video on a Stable Diffusion animator to make videos?
@PleaseOpenSourceAI 2 years ago
Do you know if there's a way to store every step of a single image generation process as a separate image?
@tingtingin 2 years ago
this is a feature that will be added to the dream.py script by the dev eventually
@coreyhughes1456 2 years ago
I have hlky's latest distribution installed and I noticed it includes "optimizedSD" (the version that uses less VRAM), but there don't appear to be any instructions regarding it. Do you happen to know if this is something that is automatically embedded? I still get memory errors if I raise the resolution any higher, so I'm not sure if it's working or how to use it.
@hoaxygen 2 years ago
Yes, it's embedded via the .bat file, so you don't really have to touch optimizedSD once it's in place. Also, I'm not sure what resolution you want your images to be, but the usual approach is to generate them small, at 512-ish, and then use another AI, a free image upscaler, to increase the resolution to 1024, 1920, etc.
@flipation 2 years ago
Great video, so helpful. I'm running the LowRam .bat on a 4GB 1050 Ti and a single sample takes just about 3 minutes, with the same quality as DreamStudio. Just two questions: is there a way to manually set the name of the output directory so it doesn't use the given prompt text? Because large prompt texts cannot be saved as a directory name. And is it possible to get a txt file with all the info of the generated image (seed, prompt, etc.)?
@Dmitriy108V 2 years ago
Double that question. Same here with the name error thing.
@flipation 2 years ago
@@Dmitriy108V Hi, by now I always use --outdir and --from-file to manage prompts and directories, so the first question is resolved. But there's no way to save the log using the optimized SD executables. I noticed that dream.py has a function called write_log_message that allows it, but it's not implemented in optimized_img2img.py or optimized_txt2img.py.
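For reference, the combination described here looks roughly like this (command fragment; the optimizedSD script name comes from the fork used in the video, and flag support may vary between forks):

```shell
# read one prompt per line from prompts.txt, write images to a fixed folder
python optimizedSD/optimized_txt2img.py \
    --from-file prompts.txt --outdir outputs/batch01
```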
@Dmitriy108V 2 years ago
@@flipation That has to do with the Windows file name length limitation or something. I just solved it completely by moving the SD folder from c/users/username/stablediffusion to d/stablediffusion. It must not be in a users subfolder.
@WiliGameplay 2 years ago
Hi, I just discovered Stable Diffusion so I am a total noob. I also have a 1050 Ti and will download the low RAM version; I want to know if this version is capable of img2img.
@lastyhopper2792 1 year ago
@@Dmitriy108V Can you just move the whole folder without breaking anything?
@manindamoon5746 2 years ago
Hi! I've followed all the steps and it's working, BUT I have this error: "RuntimeError: CUDA driver initialization failed, you might not have a CUDA gpu". I have an NVIDIA Quadro K5100M (8 GB of VRAM) and I'm able to run other programs using that GPU. Does anybody know what's going on? I'm so frustrated :(
@UberzOmbi 2 years ago
Hi! I can't find activate.bat in the scripts folder.
@rafalyp73 2 years ago
This is happening to me too.
@PleaseOpenSourceAI 2 years ago
7:18 typo in "environment"
@larsmanstanding 2 years ago
I get this error: "RuntimeError: [enforce fail at C:\cb\pytorch_1000000000000\work\c10\core\impl\alloc_cpu.cpp:81] data. DefaultCPUAllocator: not enough memory: you tried to allocate 14745600 bytes." in the low RAM version, with the command --W 128 --H 128 --n_samples 1 --prompt "dog photo". Width and height are multiples of 64 and I have 6GB of GPU memory (GTX 980 Ti), so it should work, right? I don't know why I get this message. I spent hours to get to this point and now it's not working at the last step :(. I would be glad to get a tip from you, because I don't have a clue why it's not working. PS: I got some errors earlier in the installation process but this video helped with everything else. Thanks for your awesome help and tutorials. Also, I really like your content style; it's very unique and quite interesting/entertaining to watch. Keep it up, I'm sure you'll get a lot of viewers in the future!