I had a lot of trouble installing AUTOMATIC1111 properly, installing Dreambooth and getting xformers working... but after that, your video was perfect!! Thanks for sharing
Thanks for the great tutorial. I was able to successfully run the training on a 3070 Ti (8 GB VRAM) by changing the following, all in the Settings tab:
• Leave the "Gradient Checkpointing" checkbox selected
• Training Steps Per Image (Epochs) at 40
• Mixed Precision at bf16
It took about 6 hours, though. I will be running tests and trying to get that time down. But at least it worked!
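For anyone reusing these settings on a different card, here is a minimal sketch (an illustration, not part of the original tutorial) for checking whether the GPU actually supports the bf16 mixed-precision option before selecting it; on cards without bf16 support, fp16 is the usual fallback.

```python
# Minimal sketch: verify the GPU can use bf16 before picking it in the
# Dreambooth settings. Assumes a CUDA-capable card and PyTorch installed.
import torch

print("GPU:", torch.cuda.get_device_name(0))
print("bf16 supported:", torch.cuda.is_bf16_supported())
```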
As usual, tutorials don't last long with Stable Diffusion. The interface to Dreambooth has changed and is really worse now than it was in this video. I did manage to find the pieces, but it causes numerous errors when trying to train with it (Error: no file named diffusion_pytorch_model.safetensors). One fix suggested downloading the file manually from Hugging Face. Did that, and then I get yet ANOTHER error that there are about 100 keys it can't find (and it lists ALL of them). I find this VERY common with Automatic1111. Its stability is tenuous because of all the linked parts: one thing updates and the whole thing explodes. I think I have to find a better fork of the UI or use something else (kohya_ss or OneTrainer). Still, it was a useful tutorial, as it shows how the training should proceed. I just need to find a better tool to train with.
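If you do end up fetching diffusion_pytorch_model.safetensors by hand, below is a hedged sketch of one way to do it with the huggingface_hub client. The repo_id and subfolder are only examples and depend on which base model your training run actually expects, so treat them as assumptions.

```python
# Sketch only: download a missing diffusion_pytorch_model.safetensors from
# the Hugging Face Hub. repo_id and subfolder are placeholders -- substitute
# the base model your Dreambooth run is built on.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="runwayml/stable-diffusion-v1-5",  # assumption: SD 1.5 base
    subfolder="unet",                          # assumption: the UNet weights
    filename="diffusion_pytorch_model.safetensors",
)
print("Downloaded to:", path)
```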
Hi, thank you for this tutorial. I think I have a problem filling in the tokens, samples and prompts fields. My version of Dreambooth differs from yours and the fields are not the same. Can you help me see this more clearly? Thank you
When choosing the Model Type, I don't have the option for 512x Model. The available options are v1x, v2x-512, v2x, SDXL, and ControlNet. Which one should I choose?
Finally someone made a video of what I was searching for! Thank you! Does anyone know if blending different faces during training would create a new model?
Thank you very much for the video, but I encountered a problem: the Dreambooth button does not appear. I wonder what I can do; I couldn't find a solution anywhere. Could you please help me?
Thanks for the tutorial! What is the maths behind the number of Training Steps per Image? I mean, if I had, let's say, 100 images, how do I know how many steps would be right?
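The usual arithmetic (an assumption about how this extension counts steps, so double-check against your own run) is roughly: total steps ≈ number of images × steps per image (epochs), divided by the batch size. A tiny worked sketch:

```python
# Rough sketch of the step arithmetic -- exact accounting may differ between
# Dreambooth extension versions, so treat this as an estimate.
def estimated_total_steps(num_images: int, steps_per_image: int, batch_size: int = 1) -> int:
    return (num_images * steps_per_image) // batch_size

# e.g. 100 images at 40 steps per image, batch size 1:
print(estimated_total_steps(100, 40))  # -> 4000
print(estimated_total_steps(20, 60))   # -> 1200 (the 20-image example discussed in the comments)
```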
Hello, thank you very much for the tutorial; it is the one that has helped me the most so far, since I have had many problems with everything... it is the one I have gotten the furthest with, but once all the photos were uploaded and I clicked to start, this error came out: RuntimeError: No executable batch size found, reached zero. Do you know what it could be and how to fix it? I appreciate it a lot. Thanks
Thanks for the answer, you are the best. Do you think the problem is the RAM or the graphics card? I have 32 GB of RAM, 93 GB. I will try your recommendation @@DigitalArtGuide-bz3gn, thanks a lot!
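For anyone else hitting the same "No executable batch size found, reached zero" error: it typically means training kept running out of GPU memory until the batch size could not be reduced any further. A quick, illustrative way to see how much VRAM the card actually has (this snippet is not from the video):

```python
# Illustrative check of total and currently free GPU memory; the error above
# usually points at the card running out of VRAM during training.
import torch

props = torch.cuda.get_device_properties(0)
free, total = torch.cuda.mem_get_info()
print(f"{props.name}: {total / 1024**3:.1f} GiB total, {free / 1024**3:.1f} GiB free")
```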
@DigitalArtGuide-bz3gn Dear, I was following your steps to train the model on the RunPod instance, but I always got this error: Exception training model: 'Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'encoder.down_blocks.2.resnets.1.norm2.weight', 'decoder.up_blocks.0.upsamplers.0.conv.weight', 'decoder.up_blocks.2.resnets.2.conv1.weight', 'decoder.up_blocks.3.resnets.1.norm2.weight', 'decoder.up_blocks.3.resnets.2.conv2.weight', 'decoder.mid_block.resnets.1.conv1.weight', 'post_quant_conv.bias', 'encoder.down_blocks.2.resnets.1.conv1.weight', 'decoder.up_blocks.3.resnets.0.norm1.bias', 'encoder.conv_out.weight', 'encoder.mid_block.resnets.0.conv1.bias', 'decoder.up_blocks.0.resnets.0.conv2.weight', 'encoder.down_blocks.1.resnets.1.norm2.weight', [...] 'encoder.down_blocks.3.resnets.0.norm2.bias', 'encoder.down_blocks.3.resnets.1.norm2.bias', 'encoder.down_blocks.0.resnets.1.conv1.weight', 'encoder.down_blocks.3.resnets.0.norm2.weight', 'encoder.conv_norm_out.weight', 'encoder.down_blocks.1.resnets.0.conv1.weight'}]. A potential way to correctly save your model is to use `save_model`. More information at huggingface.co/docs/safetensors/torch_shared_tensors'. Do you know the reason?
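The traceback itself points at the fix: when a checkpoint contains tensors that share storage, safetensors wants it written with save_model rather than save_file. A minimal sketch of what that call looks like is below; the tiny module only exists to demonstrate shared tensors, and where exactly the extension saves its checkpoints depends on the version you are running.

```python
# Sketch of the save_model call the error message points to. TiedExample is a
# placeholder module with deliberately shared weights, standing in for the VAE
# the extension is trying to serialize.
import torch
from safetensors.torch import save_model

class TiedExample(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.a = torch.nn.Linear(4, 4)
        self.b = torch.nn.Linear(4, 4)
        self.b.weight = self.a.weight  # shared storage, like the error describes

model = TiedExample()
save_model(model, "tied_example.safetensors")  # handles shared tensors, where save_file would complain
print("saved")
```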
I have a 3070; with the same settings as in the video it's taking about 10 hours to train. Not sure what is wrong. I can see GPU usage at approximately 100%, so it is being used. Also, when I click Train it takes about 2 to 3 minutes before it even starts.
Congratulations, excellent video, but Dreambooth now has a slightly different interface after the latest update, and some checkboxes and values don't seem to have the same meaning anymore... could you update the tutorial to the latest version??
The tip about using one model for a reference image and then using img2img is really good, I'll try that. Did you use an inpainting model for the training? What was the GPU usage?
The photos in the video were made with exactly the parameters indicated. For some of them, I used the trick explained in the video, but there are no other “secrets” besides trying and retrying until beautiful images come out.
@@DigitalArtGuide-bz3gn Hmmm, maybe it's because of the source checkpoint; I just tried another one from Civitai and they at least look human now. Not stylized like yours though.
I am getting this error: Exception generating concepts: The size of tensor a (9) must match the size of tensor b (4) at non-singleton dimension 1. Any idea what's happening?
Is that the only error you get? Or is there something about memory? If GPU memory is not the problem, try opening an issue on the GitHub page linked in the description.
@@DigitalArtGuide-bz3gn I have a 3080 Ti with 12 GB of vram. Also 64 gigabytes of onboard ram. I've done it twice and always come up with the same issue. I'm going to go ahead and restart from the beginning.
Well, I'm afraid your video is now out of date because a new version has been released; it's no longer called Dreambooth but DreamArtist, and the interface and options are completely different.
I mean to say that there are several tools. This video showcases what happens when one of them is used; it certainly doesn't aim to cover all possible scenarios. In the coming weeks, I will create another one to achieve similar, if not better, results with the brand-new SDXL.
How do I fix this error, sir? ImportError: cannot import name 'cpu_offload_with_hook' from 'accelerate' (/usr/local/lib/python3.10/dist-packages/accelerate/__init__.py)
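That import error usually means the installed accelerate package is older than what the extension expects, since cpu_offload_with_hook only exists in newer releases. A quick hedged check (the exact minimum version is an assumption; see the accelerate changelog):

```python
# Illustrative check: print the installed accelerate version and probe the
# missing symbol. If the import fails, upgrading accelerate inside the webui's
# Python environment (e.g. `pip install -U accelerate`) usually resolves it.
import accelerate

print("accelerate version:", accelerate.__version__)
try:
    from accelerate import cpu_offload_with_hook  # noqa: F401
    print("cpu_offload_with_hook is available")
except ImportError:
    print("cpu_offload_with_hook is missing -- consider upgrading accelerate")
```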
@@Sinekyre14 “It really is me. Your observation is funny, because it has happened to me more than once that people have mistaken me for a German.”
7:42 Are you kidding me, 25 min for 20 images? I have the bloody latest Alienware, and on that thing training on 10 images still takes 3 hours, and on 20 images, 10 hours. Which supercomputer do you use? Ah, maybe it's my Training Steps Per Image: I keep it at 150 where you keep 60, that may be the reason.
4070 12 GB (normal, not Ti). The CPU is a 5950X but it isn't really used (around 20-30%). RAM is 64 GB but again not heavily used. I also use an SSD, which is faster than a normal HDD, but 90% of the work is done by the GPU. Also, do you use the settings in the video?
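For the timing difference discussed above, here is the back-of-the-envelope arithmetic showing why steps per image dominates the runtime. The seconds-per-step figure is an assumption chosen purely for illustration, not a measured value from the video.

```python
# Illustration only: how steps per image scales total training time.
# sec_per_step is an assumed figure; measure your own card for real estimates.
def training_hours(num_images: int, steps_per_image: int, sec_per_step: float) -> float:
    return num_images * steps_per_image * sec_per_step / 3600

# 20 images at 60 vs. 150 steps per image, assuming ~1.2 s per step:
print(f"{training_hours(20, 60, 1.2):.1f} h")   # 0.4 h (~24 min)
print(f"{training_hours(20, 150, 1.2):.1f} h")  # 1.0 h
```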