
Flux 1 Dev LoRA Training Locally Using AI-ToolKit - Tutorial Guide 

Future Thinker @Benji
49K subscribers
18K views

Published: Sep 11, 2024

Comments: 92
@niccolon8095 4 days ago
I followed the steps, but I'm only getting one .txt file even though I have many images. In the CMD prompt it describes all my images, but it only saves one .txt file describing one random image. Any idea?
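A common cause of this symptom is deriving the caption filename once instead of once per image. A minimal sketch of a per-image captioning loop; `caption_fn` is a placeholder for whatever captioning model you run (e.g. Florence-2), not part of the toolkit itself:

```python
from pathlib import Path

def write_captions(dataset_dir: str, caption_fn) -> int:
    """Write one .txt caption file per image in dataset_dir.

    caption_fn is a stand-in for your captioning model: it takes an
    image path and returns a caption string.
    """
    count = 0
    for img in sorted(Path(dataset_dir).iterdir()):
        if img.suffix.lower() not in {".jpg", ".jpeg", ".png", ".webp"}:
            continue
        # Derive the caption filename from each image's filename, so
        # every image gets its own .txt instead of overwriting one file.
        img.with_suffix(".txt").write_text(caption_fn(img), encoding="utf-8")
        count += 1
    return count
```

If you end up with a single .txt that keeps changing contents, check whether your captioning script builds the output path outside the loop.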
@kalakala4803 22 days ago
OMG! This model develops so fast! ControlNet just launched, and LoRA training is already ready.
@TheFutureThinker 22 days ago
Yes it is. Very fast development.
@kalakala4803 22 days ago
@TheFutureThinker Looking forward to the IPAdapter.
@SalvadorSTMZ 19 days ago
Great tutorial. Following your steps I was able to create images of a character I know.
@TheFutureThinker 15 days ago
Thanks
@pastuh 20 days ago
Why would you resize the images if the LoRA trainer is supposed to resize them automatically?
@TheFutureThinker 20 days ago
Oh really? This AI toolkit resizes them? Well, I got used to it; it's just one of the practical steps I did for SD and SDXL LoRAs. Update: yes, the dataset prep section does mention it: github.com/ostris/ai-toolkit?tab=readme-ov-file#dataset-preparation
@TheFutureThinker 20 days ago
P.S.: but if I resize before training, the script seems to skip the resize step, and I don't have to wait as long.
@pastuh 20 days ago
Yes, especially if you're experimenting multiple times with the same images.
@pastuh 20 days ago
I think this line is critical: "Images with different dimensions will be trained for different aspect ratios." As we know, the bucket assignment is based on the longest side of the image. I believe different dimensions will result in different final outcomes (more combinations without overtraining). For example, a 768x1024 resolution might produce one result, while 896x1024 might yield another, because the model is trained on a different ratio, potentially resulting in better quality.
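The longest-side bucketing described above can be sketched as follows; this is a generic aspect-ratio bucketing scheme for illustration, not necessarily ai-toolkit's exact implementation:

```python
def bucket_size(width: int, height: int,
                max_side: int = 1024, step: int = 64) -> tuple[int, int]:
    """Assign an image to a training bucket.

    Scale the image so its longest side equals max_side, then snap both
    sides down to a multiple of `step` (latent-space alignment). Images
    with different aspect ratios therefore land in different buckets.
    """
    scale = max_side / max(width, height)
    w = int(width * scale) // step * step
    h = int(height * scale) // step * step
    return max(w, step), max(h, step)
```

Under this scheme a 768x1024 image and an 896x1024 image really do train at different resolutions, which is the effect the comment describes.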
@TheFutureThinker 20 days ago
That's why the new IPAdapter mentions 50k steps at 512 and 25k steps at 1024. Maybe if they don't prep the dataset the old way, it might turn out better.
@basemmgtow7954 23 days ago
Is there any chance I can do this with 12 GB of VRAM?
@saurabhsswami 21 days ago
Yeah! Same :(
@wereldeconomie1233 20 days ago
Yes, burn your computer if you want. How stupid can people be when the requirements already state that 24 GB of VRAM is the minimum.😂😂😂
@oblivionmad82 20 days ago
No
@zazaza2217 15 days ago
@wereldeconomie1233 lol, you can't burn your GPU just because you don't have enough VRAM. Why you're so "smart" I can't understand; the script will just fail with OOM, that's all.
@maxh8574 12 days ago
@wereldeconomie1233 Plenty of people are training Flux LoRAs with 12 GB of VRAM; maybe research before dismissing things lol
@michaelNguyen914 22 days ago
Thank you so much bro, keep on fire!
@TheFutureThinker 22 days ago
Thank you🙏
@Beauty.and.FashionPhotographer 20 days ago
What was used for your final animation, the image-to-video where she walks?
@lennygarcia3059 15 days ago
Yes please, I'd like to know too.
@CyberPhonkMusic 21 days ago
I did LoRA training for Flux. When I put more than one person in the image, it duplicates the LoRA face onto all the characters that appear. What can I do to avoid duplicating the trained face?
@theaorora4365 14 days ago
Can I use my custom Flux model in the model path for training? I don't want to use the default model from Black Forest.
@nickolaygr3371 19 days ago
Tell me, friend: do I need to write captions if the text encoder is not trained (train_text_encoder: false # probably won't work with flux)?
@TheFutureThinker 19 days ago
Some say they skip captioning and just submit images to the trainer, but I guess that takes longer to train. I just like to prepare the dataset nicely before training. Same with resizing images: as I said in another comment, I know the trainer does resizing, but again, it makes the whole process take longer.
@VfxVictor 16 days ago
What is the extension for saving images? I'd like to be able to control the file name format and other settings.
@coffeepod1 7 days ago
Only for 24 GB VRAM? Sadly I've only got 16 GB. Will it still work?
@jayrony69 7 days ago
How do I copy the token into the folder?
@FJKMIsotryFitiavanaSiteWeb 22 days ago
What if I already have the model tensors locally and don't want to download them again from HuggingFace?
@forifdeflais2051 22 days ago
@FJKMIsotryFitiavanaSiteWeb You can edit the .yaml file to specify other paths where the models are. Also, if you're on Windows, you could use the mklink command to create a link between different folders.
@yngeneer 23 days ago
Yeah, exactly: what if I have 16 GB of VRAM plus BIOS-enabled RAM sharing of another 32 GB, so 48 GB in total? Does that count as 48 GB of VRAM?
@wqeerwqeer1375 19 days ago
When I start the training process, I get this error :( "ImportError: cannot import name 'apply_rope' from 'diffusers.models.attention_processor' (C:\Users\Frozen\AppData\Local\Programs\Python\Python312\Lib\site-packages\diffusers\models\attention_processor.py)". I've reinstalled everything, but I get the same error.
@ojikutu 22 days ago
Thanks, good work. The AI-ToolKit repo is not in your description; it would have been helpful.
@TheFutureThinker 22 days ago
Oh yes, sorry, I was too focused on the information and forgot to put in the repo link. It's updated now, thanks for the reminder. :)
@SoloMetal 19 days ago
Can I run it on my laptop without an eGPU/GPU? Can you do a video about the rig or prerequisite requirements for all this?
@gateopssss 16 days ago
The absolute minimum is 16 GB of VRAM on a GPU; integrated ones are out of the question, it's not possible whatsoever. I'm guessing a GPU with 24 GB of VRAM and 64 GB of RAM with NVMe storage is a necessity to train a LoRA on Flux; otherwise it's going to be a pain with lower specs, or literally impossible.
@bobsapp4119 13 days ago
One thing I'm struggling with is how to copy the access token into the directory. There appears to be no way of copying it from the webpage.
@bobsapp4119 13 days ago
I managed to copy it when I created it; however, it's just a string of characters, not a file.
@bobsapp4119 13 days ago
I'm getting the error message 'No module named dotenv'. There is no .env file in the ai-toolkit directory, just a folder called 'venv', which doesn't appear in your directory in the example.
@sanducodrin1488 23 days ago
What if I have 16 GB of VRAM?
@terriermonisgod 22 days ago
Just use RunPod.
@sanducodrin1488 22 days ago
@terriermonisgod Successfully used RunPod last night. Thanks.
@wereldeconomie1233 20 days ago
@terriermonisgod People are so stubborn, even when you tell them this can't run on their shitty PC.
@Pauluz_The_Web_Gnome 16 days ago
Hi, I'm having problems: port 11434 cannot be accessed, and there's also no llava:latest model?
@TheFutureThinker 16 days ago
Learn how to use it: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-EQZWyn9eCFE.html I put this link in the description and already mentioned it in the video.
@massibob2004 21 days ago
Good job, man! Do you know how to set up the yaml to use two identical graphics cards, so I'd have device 0 and device 1?
@TheFutureThinker 21 days ago
I think this script supports one GPU per config. I'm not sure. But technically, yes, the index starts from 0.
@MilesBellas 9 days ago
SimpleTuner on Linux next?
@timothywells8589 23 days ago
Thanks, I've been wanting to try a Flux LoRA locally after trying for weeks to train a character LoRA in SDXL without much success. In SDXL, when I use the LoRAs, each picture has elements of the original reference images but doesn't really look like the character, even at 10,000+ steps with Prodigy set to an LR of 1 😔
@TheFutureThinker 23 days ago
Try this one; the AI toolkit also includes other training options. And it's script-based, which for me is more customizable.
@MilesBellas 9 days ago
11:07 Benji plugs in the guitar.
@Veto2090 12 days ago
Any advice on running this with 12 GB VRAM?
@TheFutureThinker 12 days ago
Very hard to do, honestly...
@massibob2004 17 days ago
Hello guys, why do we need a LoRA if we can use ControlNet or IPAdapter etc. without training? Better quality? Speed?
@jasonwu4262 15 days ago
More like capability. ControlNet and IPAdapter require you to copy an image; a LoRA lets you generate completely novel images. You can use LoRAs to do what ControlNet does, but you can't use ControlNet to do what a LoRA can do.
@massibob2004 14 days ago
Thanks 👍👍
@rageshantony2182 16 days ago
It takes 1.5 hrs with 24 GB of VRAM. So if I use a 48 GB Quadro, does that decrease the time?
@TheFutureThinker 16 days ago
48 GB VRAM? If so, yes it does.
@daetojekf5973 22 days ago
Comfy refuses to apply my LoRA, as if ignoring it. I do like your video 1v1.
@timothywells8589 22 days ago
@daetojekf5973 For Flux you sometimes need to turn the LoRA weight really high; when you think it's too high, keep going. Some of the ones I've downloaded from Civitai don't seem to have any effect until a weight of 2, 3 or even 4. Also, maybe a dumb suggestion, but make sure you're using the correct base model, e.g. Dev for a Dev LoRA, Schnell for... well, you get the picture 🍀
@TheFutureThinker 22 days ago
@timothywells8589 Exactly, so by default I use 1.0, and it still has very little effect.
@daetojekf5973 22 days ago
Thanks everybody, I hadn't updated Comfy =) That was the problem.
@jonjoni518 22 days ago
I find it impossible to download the models. It starts downloading at 30 MB/s, then drops to just a few kilobytes and stays at 99%. I've tried different HuggingFace tokens (write, read, fine-grained...). I also leave the .yaml at defaults, except for the path where I point to my dataset directory. By the way, I have a 14900K, a 4090, 128 GB RAM and Windows 11.
@TheFutureThinker 22 days ago
Looks like their network is jammed; I got 2 MB/s downloading another AI model.
@jamesluc007 16 days ago
Could you find a solution to this by any chance? I'm facing the same scenario.
@pastuh 20 days ago
Are you going to fine-tune the model? A tutorial would be helpful :)
@TheFutureThinker 20 days ago
I was thinking about it. Any suggestions for the type of image style to fine-tune?
@pastuh 20 days ago
@TheFutureThinker I'd say deep shadows and hard/harsh light would improve photos. Plus a paper-carving style if you want a style :D
@TheFutureThinker 20 days ago
So a LoRA of this can be trained and run with 1 Dev.
@pastuh 19 days ago
@TheFutureThinker I mean fine-tuning the model, just like the SD1.5 'realism' models, where they create mixes from each other. Some people focus on mixing, while others create their own models using a large number of images. Or is this still not possible?
@bearbro6375 21 days ago
What GPU are you using in this video?
@TheFutureThinker 21 days ago
Nvidia 4090
@RagonTheHitman 22 days ago
Why should I train "privately" and locally on my PC if HuggingFace/Black Forest Labs gets my data anyway and knows everything? And I think I must be online to start training?!
@michaelNguyen914 22 days ago
But you'll have a model customized to your personal preferences.
@michaelNguyen914 22 days ago
But how can Black Forest Labs access your data if you train locally? The access token in the environment is only for accessing and pulling the model to your PC and ensuring it's not used for commercial purposes.
@crazyleafdesignweb 22 days ago
Very nice, and a fast release of LoRA training.
@TheFutureThinker 21 days ago
Yes, thanks
@ttgboi6734 17 days ago
Please make a guide on how to do this in the Google Colab free tier.
@TheFutureThinker 16 days ago
Free Google Colab? It doesn't even have enough VRAM to run this.
@technoprincess95 7 days ago
HF_token=xxxx. You missed that; it gave errors.
@TheFutureThinker 7 days ago
Thank you for the reminder. Yes, save the .env file; it's just a text file, and the token key is stored in there.
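The .env step above amounts to writing a KEY=VALUE line into a plain text file. A minimal sketch; the variable name HF_TOKEN is an assumption here, so check the ai-toolkit README for the exact key it expects:

```python
from pathlib import Path

def save_token(token: str, env_path: str = ".env") -> None:
    """Write the HuggingFace token into a plain-text .env file."""
    Path(env_path).write_text(f"HF_TOKEN={token}\n", encoding="utf-8")

def load_token(env_path: str = ".env") -> str:
    """Read the token back, roughly the way python-dotenv parses it."""
    for line in Path(env_path).read_text(encoding="utf-8").splitlines():
        key, _, value = line.partition("=")
        if key.strip() == "HF_TOKEN":
            return value.strip()
    return ""
```

In practice the trainer reads this file via python-dotenv, which is why a missing `python-dotenv` package produces the 'No module named dotenv' error mentioned elsewhere in the thread.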
@kallamamran 22 days ago
Why two different captioning methods? 🤔
@TheFutureThinker 22 days ago
Alternative ways: some people like Florence-2 and some don't. There are many ways to do it, by the way.
@UnchartedWorlds 21 days ago
Flux Capacitor!!!! Benji, where we're going, we won't need capacitors!!!
@TheFutureThinker 21 days ago
Alright, let's do some cool stuff again.
@chefodsicakeshop5947 17 days ago
No chance with 8 GB VRAM.
@gateopssss 16 days ago
No chance. It's already a struggle to generate images on 8 GB of VRAM (if even possible), and I don't think LoRA training will be possible on 8 GB unless the community does some insane optimization; it's a big model. I'm still struggling to find a way to train a Flux LoRA with 12 GB of VRAM, let alone 8 GB.
@PunxTV123 15 days ago
@gateopssss Did you find it? Can 12 GB VRAM train?
@Point.Aveugle 22 days ago
Please do Kohya next; they've got it producing good results with 500 steps. All my LoRAs have been good (way better than SDXL and Pony) with default settings using ai-toolkit, but they take hours to reach 2000-3000 steps. As someone mentioned below, fiddle with high weights; 1.5-2 have worked for me. Also, comment out most of those samples when training; they take unnecessarily long. I want to bump up the learning rate but I'm not sure how high to go; I read somewhere that people have been using 4e-4, but I tried 1e-3 and it went crazy after 500 steps, so I was thinking even that would be too high.
@elizagarcia8799 18 days ago
Kohya can do Flux LoRAs?