"Basically you trying to look at a cloud, and figuring out what the shape actually is" — I love this metaphor! Great shorthand for interpreting noise through preexisting training, notions, and biases.
Bro, I was trying to make a model/LoRA in SD for a week. I spent hours learning the basics of programming and almost gave up; every time I tried again there was always an error somewhere. This video saved me, and I finally managed to generate the images I wanted. Thank you very much, bro. Good luck with your videos; you've gained another subscriber.
Just wanna say, man... your style is great. These how-to videos are usually a snooze-fest, but you make it entertaining. You'll definitely be my go-to guy from now on.
Honestly speaking, I had been trying to train my model since December 2022 but failed many times. This is one of the most practical and quick tutorials. Thank you, man!
Every time I get to the training step I get "python3: can't open file '/content/train_dreambooth.py': [Errno 2] No such file or directory", and that file never shows up when I open the notebook through your link.
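A possible fix for this recurring error, as a sketch: it assumes the notebook is based on the Hugging Face diffusers DreamBooth example (the URL below reflects that assumption), and that the cell that was supposed to download the script failed silently. Fetching the script manually into the Colab workspace before re-running the training cell may help:

```shell
# Assumption: the notebook expects the diffusers DreamBooth example script.
# Run this in a Colab cell, then re-run the training cell.
wget -q https://raw.githubusercontent.com/huggingface/diffusers/main/examples/dreambooth/train_dreambooth.py \
     -O /content/train_dreambooth.py
ls -l /content/train_dreambooth.py   # verify the file now exists
```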
Btw, it is not a good idea to only have simple backgrounds. A simple background will become associated with your character and will negatively affect generated backgrounds. Some of your images need to have a background, and you should roughly describe that background in the caption for those pictures. Likewise, caption the simple-background pictures with 'simple background'. This helps the training associate the background with other tags rather than with your character tag, and it also teaches the model that simple flat backgrounds are not part of your character.
Not really, brother. SD is already trained on millions of images, so it already has references for backgrounds and such; what's new to it is your face. That's why, if the face is what stays constant across the training images, the AI will focus on learning the face itself. Your explanation is correct when you're training a model on a particular art style (anime, Disney, etc.). In this video I only trained a model of myself. 😊 And btw, sorry for replying so late 😅.
@@ThatArtsGuySiddhant-tk4jb All good, and good insight. From my tests so far, though, with enough regularisation (regularisation images, weight decay, max norm, network dropout) the AI will mostly learn what is consistent between images rather than everything about them. If the background is consistently grey, then your generations will include more grey backgrounds.
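To make the captioning advice above concrete, here's a minimal sketch of writing one caption `.txt` per training image so that backgrounds get their own tags. The file names, the `sks` token, and the `dataset` folder are illustrative assumptions, not taken from the video:

```python
from pathlib import Path

# Illustrative captions: background tags are kept separate from the character
# tag, and plain images are explicitly tagged 'simple background'.
captions = {
    "img_001.png": "sks person, simple background",
    "img_002.png": "sks person, city street background, daytime",
}

dataset = Path("dataset")
dataset.mkdir(exist_ok=True)
for image_name, caption in captions.items():
    # img_001.png gets a sibling img_001.txt holding its caption
    (dataset / image_name).with_suffix(".txt").write_text(caption)
```

Most trainers (kohya-style scripts, for example) pick up these sidecar caption files automatically when captioning is enabled.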
I have a labeled dataset: a folder of subfolders named by the type of pattern they contain, plus another folder for backgrounds. How do I train with a dataset that has multiple sub-datasets? I also want realistic images (e.g. the texture of cloth should come through), so which model is best suited?
Does this still work? It won't work for me. I'm confused at the part where you input pictures. I put them in the data folder and then it just fails. How do you do it? Can't you just press Run all?
@ThatArtsGuySiddhant-tk4jb When I started the training, it generated a few samples for the class category as well, and when I inspected them manually, they were images of a person (as expected), but the faces were deformed and the sample quality for the person class was bad. Why is that? Will it affect our trained model's output quality? For better results, should we add the class images ourselves? Also, if we know a better class keyword that represents the instance object, should we use that instead? Say, rather than person (in the context of your example), should we use "male person" or "Indian male person" for better results?
There is no need to specify that it's an Indian person (it's mostly going to turn you into some random Rajasthani, as happened to me when I accidentally trained my model with a picture of me wearing a turban). Instead, do as I say: in the person folder, replace those deformed human-type pics with actual photos of humans. To be more precise, if you're training your own model, replace them with pics of people who look like you (if you're from the south, pics of south Indian men and women; if you're from the north, pics of north Indian men and women). Just always remember this one thing: you are teaching the AI what a person looks like, or more precisely, how you look... 😊
Hello sir, I have a problem. When I open Stable Diffusion in Google Colab, after about 5 minutes I get an error like "Runtime disconnected: your runtime has been disconnected due to executing code". Please help me solve this.
After I created a ckpt file with your tutorial, I tried adding it to my personal Stable Diffusion install and using it for prompts, but it is not working. How do I transfer the model created with your guide over to my own Stable Diffusion?
From the info you've provided, I think either you didn't download the ckpt file properly, or the file was too large and didn't download fully from the Colab notebook.
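For moving a Colab-trained checkpoint into a local install, the usual step (assuming the AUTOMATIC1111 webui and the default folder layout; the file and install paths below are placeholders) is simply to copy the file into the models folder:

```shell
# Paths are assumptions about a default AUTOMATIC1111 install; adjust to yours.
cp ~/Downloads/my_model.ckpt ~/stable-diffusion-webui/models/Stable-diffusion/
# Restart the webui (or press the checkpoint refresh button) and select the
# new checkpoint from the dropdown at the top of the UI.
```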
Hi! Great video, Sid, very informative, thank you :) Would you know how to work with this Stable Diffusion model via an API? I'd like to use automation software to send it prompts and get back the result. Thanks!
Brother, my model is trained on me, not a particular style. It's not a model trained on Indian men but a Siddhant model; even images you generate with it will have a very close resemblance to me. If that's what you want, you can try it, but if you want a model specifically trained on Indians, you'll have to find one on Google, brother 😅.
@@ThatArtsGuySiddhant-tk4jb 🤣 Thank you for the reply! What I meant was: I have a model that I trained on a specific photo style. I would like to be able to upload a picture and have the model apply that style to the uploaded picture; not sure if that's possible. Thanks again!
@@ClaudetteFiguera Brother, if you have trained a model on a style, then you should get output in that particular style; that's the very reason we train it, isn't it? 😅 Either (1) you're not prompting it properly, or (2) you trained it on a very small dataset. I'll give you a shortcut rather than training a whole new model for a particular style: use Midjourney, upload the image whose style you want, and then use a bot called InsightFace; with that you might get your desired outcome. Just google "Midjourney InsightFace". 😊👍
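On the API question earlier in this thread: if the trained checkpoint is loaded in the AUTOMATIC1111 webui started with the `--api` flag, it exposes a small HTTP API that automation tools can call. A minimal sketch, with the local URL and the parameter choices as assumptions about a default install:

```python
import json
import urllib.request

def txt2img_payload(prompt, steps=20, width=512, height=512):
    """Build the JSON body for the webui's /sdapi/v1/txt2img endpoint."""
    return {"prompt": prompt, "steps": steps, "width": width, "height": height}

def request_image(base_url, payload):
    """POST a txt2img request; the response JSON holds base64-encoded images."""
    req = urllib.request.Request(
        base_url + "/sdapi/v1/txt2img",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = txt2img_payload("portrait photo in my trained style")
# result = request_image("http://127.0.0.1:7860", payload)  # with the webui running
```

The commented-out call is left for you to run against your own instance; the returned `images` list contains base64 strings you can decode and save.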
Thank you for the video. I am stuck at 9:32: the play button returns zero seconds and doesn't train the image set. Note that my images are 1024x1024 and do not contain a face; I am trying to replicate a certain style of environment in my dataset. Any advice?
Well, the Colab notebook I used back then now has some issues. I'd recommend trying another Colab notebook for Stable Diffusion; there are many if you search for them, and they all have steps similar to this one.
@@ThatArtsGuySiddhant-tk4jb Thanks, my friend, I fixed it. However, my next question is: how do I start it again when it times out or when I shut my system down? Do I need to follow the initial install steps each and every time I want to run it?
Brother, this Colab notebook is strictly for training your own SD model. For any other activity, like video-to-animation, watch my other video where I turned Black Widow into a Disney character.
I tried the shared method 10 times but failed, getting the following error every time: python3: can't open file '/content/train_dreambooth.py': [Errno 2] No such file or directory
@@ThatArtsGuySiddhant-tk4jb Btw sir, can this ckpt file be converted into TFLite format and used as a backend, sending text input directly to the model and getting an image as output? If yes, can you please explain how I should pass the text and get the output?
Wow, awesome video - love the edits, cadence and clarity! It really helped me understand diffusion as well. Thank you! Question: if I wanted to be crazy and try to install it locally on my home PC, would I clone it from Git or download the files? I would love to see this being done!
First and foremost, I recommend not attempting to run Stable Diffusion on your PC. However, if you're interested, I've created a tutorial video. Simply clone the 'Automatic 1111' repository onto your computer and follow the instructions provided in the tutorial. This is advisable only if your PC is highly powerful; otherwise, the experience could be quite frustrating.
You can watch my other video about how I turned a video into an animation; it's all about exactly that. 😊 ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-J3EuLW7phLo.html
Getting this error even though I have uninstalled torch 2.2.2 and installed 2.2.1 using both pip and conda; `conda list` only shows 2.2.1 installed, yet I keep getting it... Any suggestions?
torchaudio 2.2.1+cu121 requires torch==2.2.1, but you have torch 2.2.2 which is incompatible.
torchtext 0.17.1 requires torch==2.2.1, but you have torch 2.2.2 which is incompatible.
torchvision 0.17.1+cu121 requires torch==2.2.1, but you have torch 2.2.2 which is incompatible.
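One way to resolve the mismatch above, as a sketch: the pinned versions are taken directly from the error messages themselves, and the assumption is that the cu121 builds are the right ones for this environment (check yours before copying):

```shell
# Remove the mismatched torch, then install the versions the other packages pin.
pip uninstall -y torch torchvision torchaudio torchtext
pip install torch==2.2.1 torchvision==0.17.1 torchaudio==2.2.1 torchtext==0.17.1
python3 -c "import torch; print(torch.__version__)"   # should report 2.2.1
```

If conda and pip each installed their own copy, the interpreter may be importing the wrong one; running the check above with the exact `python3` you train with tells you which copy wins.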
Thanks for this great tutorial. However, I'm stuck: after adding my images to the folder, adjusting the max train steps, and trying to run it, I get an error saying "python3: can't open file '/content/train_dreambooth.py': [Errno 2] No such file or directory". Any advice? Thanks
You talk a lot, but I really enjoyed your chat; your editing style is very fun, too. Also, your results are amazing! Thanks! What city are you from?
I am facing an error in the Python code:
Traceback (most recent call last):
  File "/content/train_dreambooth.py", line 18, in <module>
    from accelerate import Accelerator
ModuleNotFoundError: No module named 'accelerate'
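A likely fix for the traceback above, assuming the notebook's dependency-install cell was skipped or failed: install the missing package in a Colab cell, then re-run the training cell.

```shell
# Install the package the training script imports, then confirm it loads.
pip install accelerate
python3 -c "import accelerate; print(accelerate.__version__)"
```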
No 😅. It's not about which laptop to choose, but how powerful your GPU is. To run SD, you need quite a powerful GPU, which itself could cost between $1,000 and $2,000. Therefore, it might make more sense for you to just get a Google Colab subscription. For a small cost of $10 a month, you can use a world-class GPU. Then, even a $200 PC would work. I myself have been running this whole thing on Google Colab.
Bro, DreamBooth only requires 3 to 5 images, but I have only 1 image 😅 (of another person). How can I gather other images of that person? He isn't famous; he's just a school teacher.
@@ThatArtsGuySiddhant-tk4jb Bro, how do I train a different person into one model when the GPU runtime times out after the first training? Or, using a different GPU, how do I train the second person? The photo required is of my principal. Thanks for your reply; please reply to this too 🙏🙏
Someone who makes a video about training your own AI and then tells people that installing Stable Diffusion on a local PC is a bad decision is clearly a SCAM. Stop making videos, please.
I don't know if someone has written this to you already, but I think changing the words in the # commented lines around 4:56 has no effect. Those lines are only there to show how you would add more concepts to the training :)