
SDXL Local LORA Training Guide: Unlimited AI Images of Yourself 

All Your Tech AI
22K subscribers
98K views

Published: 10 Sep 2024

Comments: 296
@allyourtechai 7 months ago
✨ Support my work on Patreon: www.patreon.com/allyourtech 💻My Stable Diffusion PC: kit.co/AllYourTech/stable-diffusion-build
@frankschannel2642 7 months ago
One of the better YT videos on the topic. Liked and subbed.
@allyourtechai 7 months ago
I appreciate that!
@insurancecasino5790 6 months ago
Word. Literally walks you through exactly how it will work with what PC you may have.
@antoinedacremont3517 5 months ago
Pleeeease use dark mode, it's so bright I'm starting to go blind :( For real, it's so hard to follow with all the white.
@chasingdaydreams2788 4 months ago
Sure, dark mode is better than white mode, but complaining about white mode making the video almost unwatchable is just stupid.
@ElGalloUltimo 6 months ago
Great tutorial. I will note that you seem to have skipped a very important step: choosing the Source model under LoRA > Training.
@allyourtechai 6 months ago
Great catch, and definitely important to call out!
@mariano_molina 6 months ago
Amazing comment. I've literally been scouring the web to find what I'm doing wrong, and that is the key question I had: I'm doing what I assume is correct, but the LoRA samples come out completely cooked.
@davidcotto8955 5 months ago
Yeah, I'm lost here too. This is my first training and the first training video I'm following. I have the safetensors files that the LoRA step generated, but I have no clue what to do with them. Am I supposed to put those safetensors files under the Lora folder?
@Villa-Games 7 months ago
It's hard to believe that I used to delete other LoRAs without thoroughly checking them, then complain about the poor results! Thanks buddy, quite helpful.
@allyourtechai 7 months ago
I’m right there with you. I did the exact same thing before I figured out how to objectively test each file
@TINTO_BRO 7 months ago
@allyourtechai What parameters do I need to change to make the training go faster? Right now 3% takes more than an hour on my 3080.
@espen990 7 months ago
What do you mean by thoroughly checking them? Like the trigger words?
@MyAmazingUsername 7 months ago
@espen990 Watch the video.
@zhenyiyang6521 6 months ago
Hi everyone, I get an error when I run the setup.bat file in the kohya_ss directory (ModuleNotFoundError: No module named 'pkg_resources'). I have already installed Python 3.10 and Visual Studio 2015. Any good solutions for it? Thank you very much!
@noe2267 6 months ago
same here
@disposable3167 3 months ago
Make sure you select the option to add Python to the PATH environment variable while installing Python.
@WNTrombone 2 months ago
@disposable3167 THANK YOU!!! I had the same issue and this fixed it. I had to manually add it to the PATH in my system settings since I already had Python installed, but it worked!
@vatoko 7 months ago
Thank you very much! Very useful. An Nvidia 3060 does 10 epochs in 10 hours with Network Rank 32, Network Alpha 16, and 25-42 1024x1024 images in the dataset. It's a long time, but it manages.
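Training times like these follow directly from the step count. As a rough rule of thumb (my own arithmetic, not from the video), kohya's total step count works out to images × repeats × epochs ÷ batch size, which is why epoch times balloon with larger datasets:

```python
# Back-of-the-envelope sketch of a kohya-style step count.
# Function and parameter names are illustrative.
def total_steps(num_images, repeats, epochs, batch_size=1):
    steps_per_epoch = (num_images * repeats) // batch_size
    return steps_per_epoch * epochs

# e.g. 30 images, 40 repeats per image, 10 epochs, batch size 1
print(total_steps(30, 40, 10))  # 12000
```

Multiply by your seconds-per-step (visible in the kohya console) to estimate total wall-clock time.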
@allyourtechai 7 months ago
That's not bad actually!
@fi5h81 7 months ago
You can speed that up to 3 hours.
@TINTO_BRO 7 months ago
@fi5h81 How?
@casp3r177 7 months ago
@TINTO_BRO Just buy a couple of 4090s :D
@tuna1867 6 months ago
@fi5h81 How?
@WNTrombone 2 months ago
Putting this out there in case anyone has the same issue I had and needs a solution. I had Python 3.12 installed prior to downloading kohya, and when I ran setup.bat and it got to the "creating venv" step, it complained that the Python version was not 3.10. So I installed Python 3.10 and made sure to update the PATH in my environment settings, but when I tried to run the setup again it still said Python was incompatible, even when running the setup option for multiple installed Python versions... Then I removed Python 3.12 completely, ran it again, and it complained that no Python was found at the Python 3.12 location... Ultimately, out of rage, I deleted the kohya folder completely, recloned the repository, ran setup.bat, and it worked! I don't know the technical reason, but my guess is that something from the initial run remembered the Python 3.12 location, and it took a complete restart of the install process to make it forget and use the Python 3.10 location. Still installing the files and dependencies, but it seems to be working now :)
@BrettHarnish 3 months ago
Sees 8 celebrities not named Tom Cruise: "I'm gonna type Tom Cruise" 🤣
@OM3N1R 6 months ago
Hugely helpful! Thank you so much!
@ImAlecPonce 8 months ago
THANKS!!!! No one else had ever mentioned the regularization files!!!!! It's finally working... but on my 4060... 1 hr = 3% hahahaha
@ImAlecPonce 8 months ago
Per epoch!!!! I'm going to lower the parameters.
@allyourtechai 8 months ago
Haha! Training can definitely be slow
@ImAlecPonce 8 months ago
I made a booboo... I thought it meant 35 hours per epoch, but it was in total XD (at 156 network rank it says it will take about 2 hrs).
@anovin82 8 months ago
Can't even get this to start training... error message "image folder does not exist" even though I've selected the source image directory... Also, the new version doesn't have "No half VAE." Would be nice if you had gone more step by step. I'll have to search for another video tutorial on this subject.
@allyourtechai 8 months ago
Make sure you have the SDXL model selected and you should see “no half vae” as an option
@anovin82 8 months ago
@allyourtechai Thanks for your response. Can you tell me where you select the SDXL model in Kohya, and which SDXL file you download from Stability's Git repository?
@allyourtechai 8 months ago
@anovin82 On the main LoRA tab there is the training tab. You will want to select stable diffusion xl base 1.0 for the source model. It should be installed by default, but if it isn't, here is the file from the repo: huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0_0.9vae.safetensors
@StargateMax 6 months ago
I'm having the same "image folder does not exist" issue and I'm giving up; I can't find a workaround.
@escobolpl 6 months ago
@StargateMax In Dataset Preparation, click "Prepare training data".
@PaulRoneClarke 2 months ago
I do have to disagree on using a similar name. I've done this a few times: followed the exact steps you do, gone to the website, found the famous person who looks most like me, and used that in the right place. Then every second image comes out as clearly that person and not me, and the rest are a mashup of me and that person that still don't look much like me (Matt Damon in my case). When I changed the description to a random set of letters and numbers, the results were much better. The "infection" that using that name had on the results was way too high in my case. But other than that everything worked really well, so overall I appreciate the video. Thumbs up :)
@allyourtechai 2 months ago
It does seem like it can lead to mixed results. Mine have been good, but I’m also leaning back toward unique tokens.
@freezEware 3 months ago
I wanted to give up so many times already, but oh gosh dang, now it works. Lol
@allyourtechai 3 months ago
Hey! That’s great
@liorash7 7 months ago
For some reason, when choosing the SDXL 1.0 base checkpoint, the UI can't read the LoRA files (which are .safetensors files). With the other default checkpoint in the webui I could load them. Is this some incompatibility between SDXL 1.0 base and the LoRA file versions, or something else?
@LaguGabut 7 months ago
I use Python 3.11 and Kohya SS is not compatible with it... Is there any chance Kohya can run on Python 3.11?
@allyourtechai 7 months ago
I don't think it is compatible. In fact many stable diffusion systems seem to have compatibility issues with 3.11. I always suggest running 3.10 if you can.
@DarthVaDer69 5 months ago
Hi, the Dataset Preparation tab is missing for me; I can only see up to the Verify LoRA tab.
@robertlestercreative 5 months ago
I had this issue and was searching here for answers, but I finally stumbled on it in the Dreambooth training tab, then saw your question. I found it in one of the horizontal drop-downs under "Parameters".
@vascocerqueira 4 months ago
Same for me. I found it here: LoRA > Training > Dataset Preparation.
@AdamIverson 8 months ago
Great tutorial. Personally, I just skip the BLIP captions completely; they seem unnecessary. I also skip the celebrity lookalike name and just use whatever name I want. For example, I used "Adam Iverson" as the trigger word when I trained my own model, and I don't bother using the class in the prompt, like "Adam Iverson man". I generally use somewhere between 5-10 repeats and 10 epochs; the whole training process takes around 30 minutes to 1 hour on my RTX 3090 with a GPU batch size of 6. You can see the result in the profile picture I'm using currently.
@allyourtechai 8 months ago
Looks like a solid result! The training time is impressive; I may have to give that a shot. The interesting thing I've found is that most of these settings have very minor impacts on overall output quality.
@RTCDigitalS 7 months ago
Awesome result. You should do a tutorial
@Majestic_King_Hunter 7 months ago
I would also like a tutorial, because I can't get my 3090 to process within an hour. I am using 42 photos, though.
@allyourtechai 7 months ago
@Majestic_King_Hunter With a 3090 and 42 images I would expect about 10 hours using the default settings. That's going to give you 10 LoRA files to test with, though. You can modify the epochs and number of steps for faster model generation at the expense of some quality.
@markirwin3624 5 months ago
So let's say I only have 6 gigs of VRAM. Would it be possible to train my own LoRA?
@zaionDoe 5 months ago
Yeah, I did it with a 4GB RTX 3050 Ti... it just took much longer, almost 40 hours. Then again, you can lower your training settings to achieve faster times.
@markirwin3624 5 months ago
@zaionDoe Good to know. I think it will be a while before I make my own LoRA. 2 days is too long lol.
@SoundGuy 3 months ago
I have 14GB of VRAM. What parameters should I change to be able to train SDXL with less memory?
@andrewarc9226 5 months ago
I did everything accordingly, but when I get to BLIP captioning, select the directory and name prefix, and press caption, it doesn't do anything. Can y'all help me with this?
@amorgan5844 4 months ago
3:00 I know this is a bit older video, but the only time I've noticed needing a higher learning rate is when I raise the batch size. Kohya recommends doubling the learning rate for each step up in batch size: batch 1 LR .0001, batch 2 LR .0002, batch 3 LR .0004. It's the same as training DFL models. I haven't tested higher than batch 3, but it does help with getting more details.
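The doubling rule described in the comment above can be sketched as follows (illustrative only, not an official kohya formula; function name is mine):

```python
# Sketch: LR doubles for each +1 in batch size, per the rule quoted above.
def scaled_lr(base_lr: float, batch_size: int) -> float:
    return base_lr * 2 ** (batch_size - 1)

for b in (1, 2, 3):
    print(b, scaled_lr(1e-4, b))
```

Note that the more widely cited "linear scaling rule" multiplies the LR by the batch size instead; either way, the point is that a larger batch size generally warrants a larger learning rate.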
@xitement7504 4 months ago
Can you explain this for someone who just started? I'm trying to improve my LoRA training.
@adriandixon2787 7 months ago
Definitely one of the better tutorials out there, thank you! Would the process be the same for training an SD 1.5 LoRA? If not, do you have another video/guide I can follow? Thanks again in advance!
@allyourtechai 7 months ago
Similar, but the settings need tweaking. I’ll do another guide. I should have a video up in a few hours that shows how to train an SDXL Lora on colab for free as well.
@ronwesterduin2471 2 months ago
Using the latest version of Kohya, everything is in a different place, renamed, or just removed. I couldn't get it to work with this tutorial, sorry.
@RSV9 5 months ago
Thank you very much, a great video. Finally a more detailed description. I'm going to try it even though I only have a 3050 Ti with 4GB of VRAM, but it seems the computer can also use regular RAM to compensate for the lack of VRAM, although it takes longer that way.
@freneticfilms7220 7 months ago
If the X/Y/Z sheet corresponds to the epochs you used and you always pick the ones further to the left (so epoch 3 to 5), why do you set the epochs that high in the first place? Why not go for 6 epochs, for example? Is there any quality loss with fewer epochs for the results on the left side? (Not sure if I made myself clear.)
@allyourtechai 7 months ago
It really comes down to your specific use case. You might use one of the higher epochs if you need a very high quality result that may not be very flexible. If you don't need that, you can get away with stopping the training after the first few epochs.
@freneticfilms7220 7 months ago
You're saying that once you use buckets, the resolutions don't matter, and neither does imperfect cropping. Fine, but doesn't Kohya have a "max bucket resolution" setting? And will I benefit in any way from a higher-res image if I leave it at a higher resolution?
@Poppinthepagne 14 days ago
Can you train with ComfyUI? At this point they're all the same: Fooocus, Comfy, the A1111 GUI, etc.
@zaionDoe 7 months ago
Thanks Brian! very helpful
@allyourtechai 7 months ago
Awesome, glad it was helpful! Thanks for watching and dropping a comment
@mrLifeCity 8 months ago
Nothing worked for me, and it gave me the error "No data found. Please verify arguments" until I clicked the "Prepare training data" button on the Dataset Preparation tab.
@allyourtechai 8 months ago
Thanks for posting the solution in case anyone else runs into the same issues
@mrLifeCity 8 months ago
@allyourtechai Thank you. I have another two questions for you:
1) Is it OK that the pack of images of men that you provide looks auto-generated? Or maybe I did something wrong? If I collect real photos, will this improve the LoRA training process, or does it not matter for training?
2) When I try to train a LoRA based on SDXL under Ubuntu 22.04 with my AMD Radeon 6800XT 16GB, using your settings and 50 photos, I get an error: torch.cuda.OutOfMemoryError: HIP out of memory. Tried to allocate 2.20 GiB. GPU 0 has a total capacity of 15.98 GiB of which 1.72 GiB is free. Of the allocated memory 13.43 GiB is allocated by PyTorch, and 228.26 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_HIP_ALLOC_CONF
I finally found a solution to this OutOfMemory issue by going through the many settings that affect memory consumption. So if anyone encounters the same problems on an AMD RX 6800XT card on Ubuntu, I can post the JSON settings file or the command line I used to train the LoRA.
@allyourtechai 8 months ago
1) it doesn't matter for training, it just guides the model, but doesn't impact quality much. 2) It sounds like the system is trying to use cuda even though you have an AMD card. Might be a startup or configuration option that needs to change.
@Fangornmmc 3 months ago
Could you explain a bit more why you set the network rank (dimension) to 256? I know you said it results in better lighting, but I'm interested to hear what this setting really does. The default is 8, and on a 3080 Ti I was looking at very long training times when I started with 256, so I'm now starting with 8 and going up from there to see how it affects things.
@allyourtechai 3 months ago
It's pretty complex under the hood, but basically network rank is the number of features to be trained and controls how they cross layers in the neural network, and network alpha controls how strongly those features are applied to the LoRA. Generally, higher numbers are better up to a point, at the cost of slower training and larger LoRA files. 128 is also a good starting point for characters from what I have seen. Someone did an in-depth test that you can also check out: medium.com/@dreamsarereal/understanding-lora-training-part-1-learning-rate-schedulers-network-dimension-and-alpha-c88a8658beb7
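The standard LoRA math makes the rank/file-size relationship concrete (this is generic LoRA arithmetic, not kohya-specific): each adapted weight matrix W (d_out x d_in) gains two low-rank factors, A (r x d_in) and B (d_out x r), so trainable parameters, and therefore file size, grow linearly with rank r:

```python
# Trainable parameters added per adapted matrix: r * (d_in + d_out).
def lora_params(d_in, d_out, rank):
    return rank * (d_in + d_out)

# One 1280x1280 attention projection (a plausible SDXL layer size, used
# here purely as an example) at rank 8 vs rank 256:
print(lora_params(1280, 1280, 8))    # 20480
print(lora_params(1280, 1280, 256))  # 655360
```

A 32x jump in rank means roughly 32x the trainable parameters per layer, which is why rank-256 LoRAs train slower and produce much larger .safetensors files than the rank-8 default.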
@bluecheez555 4 months ago
In Task Manager, it seems my GPU isn't running at all during training... what could I be doing wrong here?
@allyourtechai 4 months ago
When you configured kohya, it's possible you missed a setting? If you have an RTX card you want bf16, and fp16 for older cards. There are other choices as well to select the gpu in the system.
@bluecheez555 3 months ago
@allyourtechai I have a 3060 Ti. Also, what setting could I have missed when configuring it? When I run the software I get the following in the CMD:
19:53:56-550019 INFO Kohya_ss GUI version: v24.1.2 ...
19:54:06-146443 INFO Torch backend: nVidia CUDA 11.8 cuDNN 8700
19:54:06-152433 INFO Torch detected GPU: NVIDIA GeForce RTX 3060 Ti VRAM 8191 Arch (8, 6) Cores 38
So it seems to recognise my GPU... but when I'm training, no process appears to take up its processing power in Task Manager. Any ideas?
@Sulkian 5 months ago
Hey buddy, I'm having a problem. I follow the entire tutorial, but Kohya's interface looks very different from what's shown, and it doesn't work well. I have the same issue with all tutorials. Maybe there was an update? Do you know anything about this?
@allyourtechai 5 months ago
Kohya has been updating every couple days with new releases which has made it virtually impossible to keep up with. I’ll have another new video soon
@tierlistboss 5 months ago
@@allyourtechai I'm really looking forward to this video, I can't get Kohya to work for me either.
@asdasd4sdasdasdasdasdas 5 months ago
@allyourtechai Looking forward to this video; it seems impossible to get kohya installed at the moment. I've tried everything, including Docker images. Really frustrating software. Thanks for your videos.
@maltar5210 5 months ago
Same -____- and a dark theme would be appreciated, my eyes are burning.
@AndyHTu 4 months ago
What class prompt would a Pixar-like character be called? I am trying to train a little 8-year-old girl for a project. Do I write "girl" as the class prompt, or "cartoon girl"? And for the regularization images, do I choose pictures of a little girl, or pictures of a bunch of Pixar girls?
@allyourtechai 4 months ago
You don't want something too specific, but it should be descriptive. I would try pixar girl since there are lots of reference images in the existing models for pixar characters.
@Im_JustKev 3 months ago
I just installed this, and I had to paste the Kohya local URL into my browser to start it. It's one of those little things you just don't think of, since other bats auto-open it, like Comfy, Auto1111, and Fooocus, and you expect this one to as well. I know your video was 4 months ago, so I'm not sure if something changed or it's my install, but I have to do this and you didn't show that you needed to. My install also would not let me start kohya with the #5 option; it kept making me go through the multiple install options, and I was stumped for a bit on why it wasn't starting. I'm actually not sure whether it should just start without me having to put the URL in my browser?
@ZintomV1 2 months ago
That generation speed for an RTX card seems quite slow, do you have the right drivers/toolkits installed?
@allyourtechai 2 months ago
It all depends on your settings and number of iterations. You can generate a Lora very quickly, but the quality is going to be low
@nessdevelopment8779 6 months ago
Thanks for making this vid!
@allyourtechai 6 months ago
You bet!
@nessdevelopment8779 6 months ago
@allyourtechai I do have one follow-up question. A lot of the images seem to look closer to the celebrity I used in the class prompt for training the LoRA than to me; things like eye color stand out especially. Do you ever use something other than the celeb name to get it to more closely resemble yourself?
@allyourtechai 6 months ago
@nessdevelopment8779 In some cases where the celeb doesn't have many pictures in the base model, it can get off track a bit. The other thing I try to do is be specific in my text annotations when training the LoRA: tag the images with things like eye color, hair color, articles of clothing, etc. You can use those keywords later when creating images to get those finer details.
@sergetheijspartner2005 6 months ago
Also, I want to make a LoRA for a symbol that could be used in multiple jewelry, emblem, or logo images. The problem is that this symbol does not have many pics, but I do have a 2D black-and-white picture and I have Fusion 360. Could I make images of a 3D render from every angle and use that as training data? I think I just answered my own question, but hey, you can use the idea for a video if you like.
@dwainmorris7854 4 months ago
Yes, there are a lot of programs out there that can do LoRAs when it comes to face likenesses and body types. But when it comes to detailed superhero costumes, LoRAs fall short. What if I want a particular costume on a likeness? Then what do you do?
@Wurmhouse 6 months ago
Hello everybody. I'm planning to switch from Mac to Windows. In the meantime, can I use a Mac to do this? Do you know of a tutorial for that? Thanks a lot in advance.
@arquinovatos 7 months ago
Omg, I've spent an hour and a half following this tutorial, I even paid for the Patreon subscription to download the files, and in the end I wasn't able to run the training. It just didn't start, and the black CMD window only said there was an error with the kohya thing... Is there any other way to train a LoRA that doesn't take 10 hours just to run 10 epochs? I've already tried OneTrainer, and it spent about 12 hours training 1200 epochs on an RTX 4090, but the results were still awful when making a full-body portrait, though amazing for close-ups. Has anyone experienced something similar?
@allyourtechai 7 months ago
What was the error?
@arquinovatos 7 months ago
@allyourtechai It says "RuntimeWarning: invalid value encountered in scalar divide ret = ret.dtype.type(ret / rcount) mean ar error (without repeats): nan No data found. Please verify arguments (train_data_dir must be the parent of folders with images) / 画像がありません。引数指定を確認してください(train_data_dirには画像があるフォルダではなく、画像があるフォルダの親フォルダを指定する必要があります)"
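The error above hints at the expected layout: train_data_dir must be the PARENT directory, whose subfolders hold the images and are named "<repeats>_<instance> <class>" (e.g. img/40_johndoe man/). A hedged sketch of that naming check, with illustrative folder names:

```python
# True if at least one subfolder follows the "<repeats>_..." naming pattern
# that kohya's dataset loader looks for.
import re

def looks_like_train_dir(subfolder_names):
    return any(re.match(r"^\d+_", name) for name in subfolder_names)

print(looks_like_train_dir(["40_johndoe man"]))        # True
print(looks_like_train_dir(["my_photos", "resized"]))  # False: point kohya
                                                       # at the parent folder
```

In practice, the "Prepare training data" button on the Dataset Preparation tab creates this structure for you, which is why clicking it resolves this error for several people in this thread.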
@rememberme666 6 months ago
@uardocardona3994 Same error here, also with an RTX 4090.
@StargateMax 6 months ago
setup.bat does nothing. In CMD it has no effect, as if it were an empty file, but it's not empty. Clicking on it in Windows has no effect either. Should I give up? Edit: even though I have 4 other AI tools installed and working fine, I tried installing Python again, and this time setup.bat worked. Very weird.
@allyourtechai 6 months ago
odd one for sure, but glad you got it working.
@sinaarash1668 7 months ago
I don't have a gaming PC, but I am very eager to create images like these. Please help me. Is there a way to not use my computer's CPU?
@allyourtechai 7 months ago
I’ll put together a how to on using stable diffusion with a CPU!
@DrysimpleTon995 3 months ago
I'm so confused at the installation part. When I press "1" it just installed; it never asked me anything else. Now I don't even know if it's using my GPU for training or not.
@allyourtechai 3 months ago
Did you have a previous version installed before?
@apa_channel_ 3 months ago
First time, right?
@op12studio 2 months ago
If you haven't figured this out already, look for an option that says "Manually configure Accelerate". If so, select it and it will ask you all the questions shown in this tutorial.
@timfarnham 2 months ago
It gives me an error that a new release of pip is available, but when I run the command it confirms I have the latest version, 24.0. It also tells me "ERROR: file:///D:/kohya_ss/sd-scripts (from -r requirements.txt (line 35)) does not appear to be a Python project: neither 'setup.py' nor 'pyproject.toml' found." So I am not sure what to do...
@timfarnham 2 months ago
[notice] A new release of pip available: 22.3.1 -> 24.0
[notice] To update, run: python.exe -m pip install --upgrade pip
20:06:05-937510 INFO Requirements from requirements_pytorch_windows.txt installed.
20:06:05-938510 INFO Installing requirements from requirements_windows.txt...
Obtaining file:///D:/kohya_ss/sd-scripts (from -r requirements.txt (line 35))
ERROR: file:///D:/kohya_ss/sd-scripts (from -r requirements.txt (line 35)) does not appear to be a Python project: neither 'setup.py' nor 'pyproject.toml' found.
[notice] A new release of pip available: 22.3.1 -> 24.0
[notice] To update, run: python.exe -m pip install --upgrade pip
20:06:06-379616 INFO Requirements from requirements_windows.txt installed.
'accelerate' is not recognized as an internal or external command, operable program or batch file.
@timfarnham 2 months ago
I even became a member of your Patreon hoping this would help!
@Mukozuke 7 months ago
This is a great tutorial and I got fantastic results on the first try! I have one question, though: I noticed that you are training on the base SDXL model. Is there any particular reason for that? There are some decent-looking SDXL models like Juggernaut or RealVis, so I wonder what motivated your choice.
@allyourtechai 7 months ago
Just keeping things simple for the tutorial, but you bring up a fantastic point. There are some XL models that are far better than base, and I definitely encourage people to experiment with them. I’ll probably do a follow up with some of my favorites
@equilibrium964 7 months ago
You can, for example, train on Juggernaut if you really like the model, but keep in mind that you could run into issues if you want to use the LoRA with other models. That's why I usually train on the base model; the results are more flexible.
@Airbender131090 7 months ago
RealVis XL is amazing as a base for training.
@MyAmazingUsername 7 months ago
@allyourtechai Juggernaut is practically the standard now. It's sponsored by some AI company and gave the best face-similarity metrics in IPAdapter tests. I'd say it's the best refined model.
@Pauluz_The_Web_Gnome 7 months ago
Hi, you forgot to mention that you have to choose a source model at LoRA > Training > Source Model...
@allyourtechai 7 months ago
I noticed that and have added it to the description in case anyone else misses that piece. Thanks for calling it out!
@Fanaz10 4 months ago
What's the difference between auto-training and this one? Is this more flexible?
@roninswilduniverseofdota2701 7 months ago
What happens if you use different but similar-looking people?
@allyourtechai 7 months ago
I would consider that more of a style: models, or Norwegian women, etc.
@tinnsoldaten 7 months ago
Intel i9 13900K with an RTX 3080, training 15 high-res images. I used your guide and spent a little less than 24 hours on the first stage of this training. Now at the second stage, the second-to-last line in cmd says epoch 1/10, and the last line shows steps 0% 4/3000 [10:41:44
@allyourtechai 7 months ago
When you pull up Windows Task Manager (Ctrl+Alt+Delete), how does the memory usage on your card look? My hunch is you don't have enough VRAM for those settings. These are the most important settings to lower VRAM usage for SDXL training:
* Choose either Adafactor or AdamW8bit as the optimizer
* Train batch size 1
* Memory efficient attention checked
* Gradient checkpointing checked
* Max resolution: 1024,1024
* Enable buckets checked; minimum bucket resolution 64, maximum bucket resolution 1024
* Don't upscale bucket resolution unchecked; bucket resolution steps 64
* No sample previews, or limited sample previews at 768x768 max
* xformers or sdpa
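The low-VRAM settings above can be collected into a config sketch like this. The key names are illustrative, not kohya's exact JSON schema, so treat this as a checklist rather than a drop-in file:

```python
# Sketch of a low-VRAM SDXL training configuration mirroring the list above.
low_vram_config = {
    "optimizer": "Adafactor",         # or "AdamW8bit"
    "train_batch_size": 1,
    "mem_eff_attn": True,             # memory-efficient attention
    "gradient_checkpointing": True,
    "max_resolution": "1024,1024",
    "enable_bucket": True,
    "min_bucket_reso": 64,
    "max_bucket_reso": 1024,
    "bucket_no_upscale": False,       # "don't upscale" left unchecked
    "bucket_reso_steps": 64,
    "attention": "xformers",          # or "sdpa"
    "sample_every_n_steps": 0,        # no sample previews
}
print(low_vram_config["train_batch_size"])
```

kohya lets you save and load such settings as a JSON configuration from the GUI, which makes it easy to keep a known-good low-VRAM preset around.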
@tinnsoldaten 7 months ago
@allyourtechai Made the adjustments, and also lowered the number of photos until I get the settings right. Still, I don't see any spikes in GPU load when I start over.
@allyourtechai 7 months ago
Do you see CPU usage? I'm wondering if kohya isn't configured with CUDA by chance?
@tinnsoldaten 7 months ago
@allyourtechai Not much. I started over with just 5 photos last night, and now it's done with the first part. I see just a low % on both GPU and CPU. I could send you screenshots of the cmd output of the running process.
@squiddymute 7 months ago
Do we need only close-up shots for the initial set of images?
@allyourtechai 7 months ago
A variety of shots works best
@venom90210 5 months ago
My model came out as a JSON file. What did I do wrong?
@lahiruwijesinghe1146 6 months ago
I am getting the error below. How do I fix it? I am on Windows and have installed Python 3.12; my GPU is an RTX 2060 mobile.
Warning: Python version 3.10.9 is required. Kohya_ss GUI will most likely fail to run.
"Installing packaging python module..."
Requirement already satisfied: packaging in d:\ai\lora training\kohya_ss\venv\lib\site-packages (24.0)
File "D:\Ai\Lora Training\kohya_ss\setup\setup_windows.py", line 7, in import setup_common
File "D:\Ai\Lora Training\kohya_ss\setup\setup_common.py", line 11, in import pkg_resources
ModuleNotFoundError: No module named 'pkg_resources'
@glendaion-vk6pf 7 months ago
How much VRAM do you need to train a LoRA for SDXL? I have a 3080 GPU.
@allyourtechai 7 months ago
A 3080 can do it. 12GB is about the minimum you can get away with for SDXL unless you drop the quality significantly. You should be in good shape!
@baheth3elmy16 6 months ago
The background screen is extremely small; nothing shows on it. You are using an old Kohya distribution; the new one does not have the same tabs and options.
@allyourtechai 6 months ago
It was the latest build at the time of recording
@zaionDoe 5 months ago
Check the date the video was posted
@Tyback8 7 months ago
Hey! At 5:50, I can't click on the file icons (they're greyed out). Would you know why and how to fix it?
@allyourtechai 7 months ago
I haven’t seen that before, but did find someone with a similar issue www.reddit.com/r/StableDiffusion/s/LfClJCVrMQ
@Tyback8 7 months ago
@@allyourtechai Thanks for your answer! I somehow managed to fix the problem, but I have another one... at 13:00, after I followed all your instructions, when I click on "Start training" I get 2 error messages: "python stopped working" and then "Error. Connection errored out". Would you know how to fix it? I tried setting epochs to 1, resolution to 512,512, and network rank to 36, and still get the same error.
@itscout594 6 months ago
Bought your Patreon; where do I find the folder of women's photos advertised in the video? I can't seem to locate it.
@joeschell3366 5 months ago
I was able to successfully complete everything in the guide here but the images I’m getting do not really look like the source images I provided. Any tips/insights would be appreciated
@jony2857 5 months ago
You have to really play around with the LoRA's weight/strength sliders and figure out which LoRA looks best, like he did at 15:59. Even at the end of this tutorial his final image doesn't really look like him. Personally I've found that img2img works best with LoRAs when trying to create look-alike photos.
@ElGalloUltimo 6 months ago
Reporting back on this, I think you may have missed some more critical steps. I just trained a model for 4 hours and when I put it in the Automatic1111 lora folder, it doesn't show up. When I force it to always show networks in the settings, the lora files show up but literally do nothing when added to the prompt.
@allyourtechai 6 months ago
I have created several working LoRAs with this, as have others. I don't think that's the issue.
@opensourceradionics 4 months ago
Just 3 months have passed and the "Folder preparation" tab doesn't exist anymore.
@ICHRISTER1 4 months ago
Thanks for the video! I followed all the steps and am now waiting to see the results! Is there anything that has changed since the video was released? Should I enable more options on a 3090? I'm currently at 19GB VRAM, with some to spare.
@MyAmazingUsername 7 months ago
If you almost always use lora epoch 3 or 4, why do you train for 10 epochs (20 hours)? Thanks for a great tutorial!
@allyourtechai 7 months ago
Great question. I should have clarified that further. For me specifically I train the LoRA on images of myself for YouTube thumbnails, and with 10-20 images epoch 3 or 4 works great. If I were training a style or training with 100 images, I may want to use a different epoch. You can definitely save time by cutting the training in most cases, but depending on what you train you might use a later epoch (hopefully that all made sense)
@samuelmxwl98 7 months ago
I got an error after running the GUI bash file: "ImportError: cannot import name 'set_documentation_group' from 'gradio_client.documentation' (C:\StableDiffusion\kohya_ss\venv\lib\site-packages\gradio_client\documentation.py)". Don't know what to do. I have an RTX 4090 GPU.
@theschnilser7962 7 months ago
Same. Drove me crazy... Running `pip install gradio_client==0.8.1` in cmd, then starting gui.bat, worked for me. Hope it works for you, too.
@mrlkn 7 months ago
You just need to downgrade gradio_client to 0.8.1. Activate your environment with activate.ps1, then run pip install gradio_client==0.8.1.
@samuelmxwl98 7 months ago
@@mrlkn unfortunately that doesn't work for me
@iiJDSii 7 months ago
Amazing guide, subscribed! Thank you. I was wondering if you have a related tutorial about inserting a person's LoRA into photorealistic scenes, like on a boat, mountain climbing, at a party, doing other cool stuff, etc. That would be really neat, could be an add-on to this! Cheers
@allyourtechai 7 months ago
Great idea. I’ve been posting prompts regularly to my patreon for people to play around with. I can do a follow up video as well
@deviant1911 8 months ago
Following your tutorial I was able to generate 10 LoRA files, but none of them work when I try to generate images. Below the generated image there is a message "Networks not found: 1". All the other LoRAs I use work fine. I searched online but couldn't find this error anywhere, apart from a Reddit post with the same issue from 4 months ago with no answers. I would appreciate it if you could help. I run an RTX 2080 Ti if that matters, and I used fp16.
@allyourtechai 8 months ago
I have not seen that before, but I'm searching around as well. Are you using Automatic1111? Have you tried the LoRA in something like Fooocus? Just to narrow it down to the LoRA specifically.
@scottmaplesofficial 8 months ago
v22.4.1 doesn't have the Dreambooth folder setup in Tools, any idea?
@scottmaplesofficial 8 months ago
Never mind, the newest version puts the Dreambooth folder prep under LoRA > Training > Dataset Preparation.
@FlorianConradi-zo6os 4 months ago
i want to marry you
@personmuc943 7 months ago
My "gaming PC" has 4GB VRAM, will it be able to train a LoRA? Or do I need to upgrade my GPU?
@allyourtechai 7 months ago
Unfortunately I don’t think it will be possible. You can train lower resolutions on 8GB and it’s possible to train an SDXL with 12GB, but anything less will be a problem.
@personmuc943 7 months ago
@@allyourtechai Thanks a lot for the info! I really needed this reply because I was about to install it tomorrow, but your warning came just in time! Also, I've read online that Google Colab can train LoRAs for low-VRAM users. Do you recommend I try that, or is it just a waste of time?
@allyourtechai 7 months ago
I’m putting together a guide on using online services for training. That should help you get going without needing a high memory gpu.
@personmuc943 7 months ago
@@allyourtechai Sounds awesome! Looking forward to checking it out. Already subbed 👍
@chrisrosch4731 8 months ago
Learned a lot from this. Thank you! Have you done Dreambooth training or plan to do a video on that soon? :)
@allyourtechai 8 months ago
I have in the past and would be happy to do a video! Thank you so much for watching :)
@ChakChanChak 4 months ago
Thank you, Tech Tom Cruise
@allyourtechai 4 months ago
Any time! lol
@TheRMartz12 6 months ago
Is having 64 images for training too much? I'm using that many because they're not as good quality of the subject, or they have glasses on, or bangs/hair covering the face.
@allyourtechai 6 months ago
I typically find that 10 or so images are enough. Higher quality, higher resolution would be the best choice. Not always possible of course in which case more images may help while also providing a more flexible model. Make sure the elements that are different are in the annotations associated with each image. If they have bangs in some images, make note of that so you can later say “xyz person with bangs standing on the beach”
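For anyone adjusting those annotations by hand: the trainer picks up a plain-text caption file with the same basename as each image. A minimal sketch of writing one (the trigger word and caption text here are made-up examples):

```python
import os

def write_caption(image_path, caption):
    """Write a sidecar .txt caption next to an image file.

    Kohya-style trainers read a caption from `<image basename>.txt`
    in the same folder as the image.
    """
    base, _ = os.path.splitext(image_path)
    txt_path = base + ".txt"
    with open(txt_path, "w", encoding="utf-8") as f:
        f.write(caption)
    return txt_path

# Note distinguishing features (bangs, glasses, etc.) so you can
# prompt for them later, e.g. "xyz person with bangs on the beach".
write_caption("img_001.jpg", "xyz person with bangs, wearing glasses")
```

One caption file per image, in the same folder as the images, is all the trainer needs.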
@TheRMartz12 6 months ago
@@allyourtechai thank you! will try that
@joeschell3366 6 months ago
I followed your instructions and when I clicked on Start Training I got the following error right at the start, and the training stops:
kohya_ss\venv\lib\site-packages\numpy\core\_methods.py:129: RuntimeWarning: invalid value encountered in scalar divide ret = ret.dtype.type(ret / rcount)
INFO mean ar error (without repeats): nan train_util.py:856
ERROR No data found. Please verify arguments (train_data_dir must be the parent of folders with images) train_network.py:213
Any insights?
@joeschell3366 5 months ago
nevermind, figured it out
@FuLingYou 5 months ago
Do you mind sharing how you solved it? I'm having similar errors@@joeschell3366
@Tantrum453 8 months ago
I'm getting an "ImportError: cannot import name '_imaging' from 'PIL'" error. What should I do?
@allyourtechai 8 months ago
At which step are you getting the error?
@allyourtechai 8 months ago
The error message `ImportError: cannot import name '_imaging' from 'PIL'` typically occurs when there's an issue with the Pillow library in Python, which is a fork of PIL (Python Imaging Library) used for opening, manipulating, and saving many different image file formats. This error can happen for several reasons:

1. **Incorrect or Partial Installation**: The Pillow library might not be installed correctly or completely. This can happen if the installation process was interrupted or if the wrong version of Pillow was installed for your Python version.
2. **Environment Path Issues**: There might be a problem with your Python environment paths, where Python is not able to find the installed Pillow library.
3. **Conflicting Libraries**: If you have both PIL and Pillow installed in the same environment, they might conflict with each other.

To resolve this issue, you can try the following steps:

1. **Reinstall Pillow**: Uninstall and then reinstall the Pillow library via pip:
```
pip uninstall Pillow
pip install Pillow
```
2. **Check for Conflicts**: Make sure you don't have PIL installed in the same environment as Pillow. If you do, remove PIL:
```
pip uninstall PIL
```
3. **Verify Installation**: After reinstalling, you can check that Pillow is installed correctly by importing it in a Python shell:
```python
from PIL import Image
```
@Asaghon 4 months ago
Nice vid, very similar to the way I train in 1.5 (except with Prodigy). But no min SNR gamma and noise offset?
@tomboi5973 8 months ago
Creating venv...
Python was not found; run without arguments to install from the Microsoft Store, or disable this shortcut from Settings > Manage App Execution Aliases.
The system cannot find the path specified.
The system cannot find the path specified.
Python was not found; run without arguments to install from the Microsoft Store, or disable this shortcut from Settings > Manage App Execution Aliases.
The system cannot find the path specified.
I keep getting this every time I try to run the setup.bat file. Why?
@allyourtechai 8 months ago
Do you have any stable diffusion software running on your PC? It doesn't sound like it, so one of the items you need to install is Python. I would just set up InvokeAI, Automatic1111, or Fooocus to generate Stable Diffusion images; any of those 3 should also help get Python installed. I'm also happy to do a tutorial.
@metanulski 8 months ago
When you install Python, there is a checkbox (I think it's something like "Add Python to PATH"). You need to check this one so all programs can find your Python installation.
@allyourtechai 8 months ago
Yes, thanks for calling that out. @@metanulski
@Giastice 7 months ago
Hi, does it matter whether I use Nvidia or AMD graphics cards?
@allyourtechai 7 months ago
Performance may be better on nvidia cards, but as long as you have enough vram to fit the model for processing, the end result will be the same!
@Giastice 7 months ago
@@allyourtechai Many thanks. I have one PC with a Ryzen 2700X and Nvidia RTX 2070, and one PC with a Ryzen 5600 and AMD RX 6800. But somehow I could not run it on either; I need to check which of the requirements is missing. There is always a PowerShell error for not finding something.
@JieTie 6 months ago
@@Giastice Did you manage to start the training? I have an RX 6800 XT and 1 iteration takes about 5 minutes; the program does not use the graphics card at all. Somewhere I'm missing a line of code or an option to check for using AMD. I even turned off xformers and still nothing.
@necrolydevlogs3932 8 months ago
Hey, I followed every step but I get this error upon clicking training: 'No data found. Please verify arguments (train_data_dir must be the parent of the folder with images)'
@allyourtechai 8 months ago
Check in the video description. I have a couple common errors and solutions toward the bottom. In this case I believe it is the button to prep the training folders and copy them over to the other screen.
@AleixPerdigo 6 months ago
@@allyourtechai Had the same problem, and the "prep the training folders" button is working! Thanks!
@Bill4LE 6 months ago
@@allyourtechai That did it! Beautiful!
@felipeprato 6 months ago
Thank you for your proactive responses to subscribers... I'll try your method soon... but I'd like to know if there's a way to make a personalized checkpoint with personal photos for Comfy to use. I have several photos of places in my city I would like to use as a base; can you point me to a video? How are checkpoints created?
@carlajimenez1482 8 months ago
Can you make a tutorial on how to do it on RunPod?
@allyourtechai 8 months ago
You got it!
@PaulRoneClarke 6 months ago
Great work. However, 2 months on, the entire UI in Kohya is completely different and the process of training a LoRA with it has changed a lot.
@allyourtechai 6 months ago
Just noticing they have had 4 major revisions in 2 months. Downloading and taking a look at the changes. I’ll see about doing a new video
@PaulRoneClarke 6 months ago
@@allyourtechai It's all good. Everything seems to be progressing very quickly
@personmuc943 6 months ago
@@allyourtechai I hope for an updated tutorial! ♥
@pierruno 7 months ago
How to train it on my 3080?
@allyourtechai 7 months ago
These are the most important settings to lower VRAM usage for SDXL training:
* Choose either Adafactor or AdamW8Bit as the optimizer
* Train batch size: 1
* Memory efficient attention: checked
* Gradient checkpointing: checked
* Max resolution: 1024,1024
* Enable buckets: checked; minimum bucket resolution: 64, maximum bucket resolution: 1024
* Don't upscale bucket resolution: unchecked; bucket resolution steps: 64
* No sample previews, or limited sample previews (768x768 max)
* xformers or SDPA
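As a rough illustration of how those choices map onto a saved Kohya_ss GUI configuration file (a sketch only; the exact key names vary between Kohya versions, so treat them as assumptions rather than the definitive schema):

```json
{
  "optimizer": "Adafactor",
  "train_batch_size": 1,
  "mem_eff_attn": true,
  "gradient_checkpointing": true,
  "max_resolution": "1024,1024",
  "enable_bucket": true,
  "min_bucket_reso": 64,
  "max_bucket_reso": 1024,
  "bucket_no_upscale": false,
  "bucket_reso_steps": 64,
  "sample_every_n_steps": 0,
  "xformers": "xformers"
}
```

Saving and reloading a config like this from the GUI is an easy way to keep a known-good low-VRAM preset around.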
@arock1999 4 months ago
Thank you! Wish the GUI hadn't moved things around so much since your video. I'm having a hard time finding the settings you are modifying... maybe, if it's worth your time, you could do an updated video someday? Cheers
@allyourtechai 4 months ago
I will do an update for sure. So much has changed haha
@frequentsee3815 4 months ago
​@@allyourtechai sweet ty
@monolithsoft_guy 5 months ago
Great video, but one thing I don't understand: you say to use a celebrity lookalike for the "guidance parameter", but in the end, the image you created was a blend of yourself and Tom Cruise. What if you want to create images that really look like yourself? Does it work without the "guidance parameter", just using your own name? Thx
@zaionDoe 5 months ago
It would look more like your own image, since this is locally trained and your LoRA tells SD that the images you provided are what Tom Cruise looks like. The other Tom Cruise images it knows from its original training will automatically be used as guidance in terms of perspectives, image tones, color grading, photography style, etc.
@lsdlonyou5321 7 months ago
When I start the training, I get this message: "during handling of above exception". Can someone help me?
@allyourtechai 7 months ago
Is that the only message?
@lsdlonyou5321 7 months ago
It's the last message @@allyourtechai
@lsdlonyou5321 6 months ago
@@allyourtechai Sorry for my late reply, yes only this message, but I have another problem... if I try to select the pictures, they are not shown to me in the folder. Idk what to do :/
@vadima7636 8 months ago
Hey, can it be done on a 2060 with 6GB?
@allyourtechai 8 months ago
Not with SDXL, but you could do a 512x512 resolution LoRA for Stable Diffusion 1.5 or 2.1. With upscaling you can still achieve some really great results.
@touchdownchef 7 months ago
Hello. Your uploaded video lectures have been very helpful. Thank you. I have a question about the video: is creating a checkpoint model the same as with a LoRA? If there are slight differences, what are they? Also, do you have plans to create a tutorial on creating checkpoint models?
@allyourtechai 7 months ago
A checkpoint is also called a model (SDXL is a model), so you can think of these as the very large base models that we use to generate images. A LoRA modifies the output of a model, so it’s much smaller and easier to train, plus it has the flexibility to run on top of multiple models. What is the outcome you are hoping to achieve? From there I can make some suggestions, but typically a LoRA is what you want.
@allyourtechai 7 months ago
Fantastic questions by the way!!!
@touchdownchef 7 months ago
@@allyourtechai I want to create my own character with flexibility using a checkpoint model. I prefer the extensive flexibility that checkpoint models offer over LoRAs, so I'm trying to create one. However, I'm facing various difficulties in the training setup using Kohya.
@ExplodeReality 7 months ago
I am looking for help getting started with training an image-to-image AI. I want something where someone can draw a simple image and have it convert to a map.
@allyourtechai 7 months ago
You might be able to use DALL-E for that. I have seen a few custom GPTs that convert a drawing into an image, so I'm sure a custom prompt could help create a map from a sketch as well.
@ExplodeReality 7 months ago
@@allyourtechai Thanks, I'll look into that.
@iwonariot5552 6 months ago
The best tool for converting drawings to images in Stable Diffusion is called "ControlNet".
@dwainmorris7854 4 months ago
Can you create a LORA for costume design?
@renealbrechtsen9743 6 months ago
Does this work with AMD GPUs?
@polipano 8 months ago
Brian, thanks a lot for this awesome tutorial; in about 25 hours my GPU will provide me with 10 new 1.7GB models. Can you please give us the links for the great photos of yours you show us here? I'd love to create similar photos of myself. Again, thank you, and please keep up the good work.
@allyourtechai 8 months ago
I’ll post some examples with prompts shortly!
@polipano 8 months ago
@@allyourtechai Thanks Brian, looking forward to see these soon.
@jack_lion 8 months ago
Thanks for the great video! A lot of the results I'm getting by following your method resemble the celebrity look-alike, or the Civitai prompt, more so than the images of myself. I will continue experimenting of course, but I was wondering if you had any tips to make the images resemble the subject more? Additionally, is there a way to generate multiple images with a different variation or seed for each LoRA model? Also, I noticed that the "No half VAE" option was not present in my UI.
@allyourtechai 8 months ago
Couple of ideas. In Automatic1111, when you are doing the X/Y plotting, instead of setting a static seed just ensure -1 is in the seed box. This will get you a random seed for each LoRA file/image. You can also open the generated text files from the auto-captioning step and find key words to apply in order to get results that resemble the original model more. For mine I use "bald" in my prompt since I have a shaved head. Some of the Civitai prompts definitely work better than others though, so definitely experiment. I typically find that my 4th LoRA seems to offer a good blend of precision and flexibility.
@guihermenegri6742 6 months ago
I also can't find the No half VAE option @@allyourtechai
@haroldhankerchief6056 4 months ago
Thanks for telling me it's only going to take 40 hrs right at the end.
@frequentsee3815 4 months ago
That depends on your PC hardware.
@kimjongoof5000 6 months ago
Will you make a tutorial on training the AI on a specific cartoon character?
@Ace7-xc2rr 6 months ago
It still works for SDXL training with a 1060 6GB, but it takes 5 hours for 100 steps, so it's really an overnight thing, and only 512x512. But it still looks better quality than SD 1.5.
@allyourtechai 6 months ago
That’s not bad at all actually!
@freneticfilms7220 7 months ago
I've heard of people doing up to 100 repeats for a character. Also, they said you need exactly as many reference pictures as the number of steps you use. Why that is stays a mystery to me, too.
@allyourtechai 7 months ago
You can increase the repeats, but it also increases the training time, assuming all other settings remain the same. I haven't seen a large enough difference to justify the added training time.
@youtubeccia9276 4 months ago
Which source model did you use for training?
@allyourtechai 4 months ago
SDLX
@youtubeccia9276 4 months ago
@@allyourtechai Did you mean this one: sd_xl_base_1.0.safetensors?
@-Belshazzar- 1 month ago
@@allyourtechai dyslexic xl :D, no just kidding, great video man
@WackFPV 6 months ago
Now you gotta make a cascade training video!!!
@allyourtechai 6 months ago
I know! I’m working through it :)
@WackFPV 6 months ago
@@allyourtechai tell me about it, I just found their training page on their GitHub. But it doesn’t talk about captions…
@AL3XFPV 6 months ago
I don't have the "Dataset Preparation" tab, but all the others show 🤔
@AL3XFPV 6 months ago
It's moved to the bottom of the main LoRA > Training tab.
@allyourtechai 6 months ago
I need to update the tutorial again. They are doing a major release every couple weeks it seems haha
@zaionDoe 5 months ago
@allyourtechai Can't wait to see your new vids, Brian; eagerly waiting for your stuff on your Patreon channel.
@cello_rl 6 months ago
The only thing that kept this video from being perfect was that you didn't use a dark theme.
@allyourtechai 6 months ago
Haha, next time!
@gigi7237 7 months ago
Hi, Wonder if this will work on Mac?
@allyourtechai 7 months ago
Should work on an M2 or M3 fairly well.
@MelodiK_ 8 months ago
I'm having this issue when clicking Start training:
ERROR The following folders do not match the required pattern _:
ERROR Please follow the folder structure documentation found at docs\image_folder_structure.md ...
@allyourtechai 8 months ago
If you click on the TOOLS tab, fill in the info there (location of dataset images, and reg images if you have any), then enter a folder in the DESTINATION TRAINING DIRECTORY box and click PREPARE TRAINING DATA, it'll create the right structure and move all the files into it for you. Then click COPY INFO TO FOLDERS TAB to make sure you have the right info in the right place.
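For reference, the layout that the folder-prep step creates follows the `<repeats>_<trigger> <class>` subfolder-name pattern that this error message is validating. A minimal sketch that builds the same structure (the root folder, trigger word, and class names here are made-up examples):

```python
import os

def make_kohya_dataset(root, repeats=40, trigger="ohwx", cls="man"):
    """Create the folder layout the trainer expects: train_data_dir
    points at `img`, whose subfolders are named
    `<repeats>_<trigger> <class>` (e.g. `40_ohwx man`)."""
    img_dir = os.path.join(root, "img", f"{repeats}_{trigger} {cls}")
    os.makedirs(img_dir, exist_ok=True)
    # `model` and `log` are where outputs and logs get written
    os.makedirs(os.path.join(root, "model"), exist_ok=True)
    os.makedirs(os.path.join(root, "log"), exist_ok=True)
    return img_dir

# In the GUI, the image folder should be the `img` parent, not the
# numbered subfolder itself - pointing at the wrong level is what
# triggers the "required pattern" error above.
make_kohya_dataset("my_training")
```

Your training images (and their caption files) then go inside the numbered subfolder.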