koiboi
🤔 Ok, but what IS ControlNet?
25:31
1 year ago
I drove a Machine Insane
10:34
1 year ago
Comments
@dave47663 2 days ago
Um, what?
@SimplyAzure-sq4ru 6 days ago
If anyone is watching this because you got the line 'No module 'xformers'. Proceeding without it.' when running the webui-user.bat file, all you need to do is open the file in Notepad++ and change your set COMMANDLINE_ARGS line to the line below (note: the flag goes right after the equals sign, with no quotes): set COMMANDLINE_ARGS=--xformers
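For readers hitting the same "No module 'xformers'" message: in the stock AUTOMATIC1111 webui-user.bat the flag is set on the COMMANDLINE_ARGS line without quotes. A minimal sketch of the file, assuming the default variables shipped with the repo:

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--xformers

call webui.bat
```

Quoting the value (set COMMANDLINE_ARGS='--xformers') would pass the quote characters through to the launcher as part of the argument, so the flag would not be recognized.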
@siskavard 9 days ago
Dude, you're awesome, thanks for this
@StunMuffin 12 days ago
The best explanation on RU-vid 🎉❤
@0xjeph 16 days ago
LoRA does not add new layers to the original model. Instead, it introduces additional weights in a low-rank decomposition format and integrates them into the existing layers of the model.
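The comment above describes the mechanism correctly. A minimal NumPy sketch of the low-rank update it refers to (the matrix sizes, rank, and variable names here are illustrative, not taken from any particular implementation):

```python
import numpy as np

# Hypothetical sizes: one 512x512 weight matrix inside an existing
# layer, and a rank-4 LoRA update for it.
d, r = 512, 4

W = np.random.randn(d, d)           # frozen pretrained weight
A = np.random.randn(r, d) * 0.01    # trainable low-rank factor (r x d)
B = np.zeros((d, r))                # trainable low-rank factor (d x r), zero-initialized

# LoRA: the effective weight is W + B @ A. No new layers are added;
# the low-rank product has the same shape as W and is folded into
# the existing layer's weight.
W_effective = W + B @ A
assert W_effective.shape == W.shape
```

Because B starts at zero, the update B @ A is initially zero, so training begins from the unmodified pretrained model; only A and B (2·d·r values instead of d²) are learned.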
@Skyrilla 20 days ago
Retarded hat and profile picture but thanks for the explanation.
@nathanschultz1440 1 month ago
$400 / $2 per hour does not equal 1,945, it's 200. What happened there?
@jasonjuan4768 1 month ago
It's a great point, but in many cases ethical issues need to be considered at the level of a group of at least two people: group size, timeline, social structure, all the way up to a country's survival and value system, which involves multiple levels of complexity. In capitalism, money has been the driving force and the norm of the evaluation system used to identify value, which might not lead to the best outcome, given how we have been trained to evaluate ethical standards.
@robrever 1 month ago
Is there a way to run Automatic1111 or ComfyUI locally with ControlNet while abstracting the Stable Diffusion layer using an API to Hugging Face? The idea is to run the user interface (Automatic1111 or ComfyUI) and ControlNet locally on my machine while offloading the heavy lifting (the actual image generation by Stable Diffusion) to an API like Hugging Face. I just want to benefit from the flexibility and control offered by ControlNet while not being limited by local hardware for the image generation process.
@movingFive 1 month ago
this is so so so helpful!
@Zoronoa01 1 month ago
Amazing video, I hope you make more videos like this!
@crimson5664 1 month ago
thanks u helped
@deagzzzshorts 2 months ago
Where did you go, dude? Hope you're doing good
@cmdr_stretchedguy 2 months ago
I think usage and budget have a big effect on what you should use/have. 1. Even in 2024, there is currently no direct equivalent Stable Diffusion system that works great on AMD GPUs. This is why, with SDXL, an 8GB RTX 3060 is on par with a 16GB RX7900XT. 2. Not everyone has $500-1200 for a single GPU; they may just be playing around with it, not using it professionally. This is where an 8GB RTX 3050 ($220-260) can be used, taking ~30 sec per image, or a refurbished 12GB 2060, a little faster than the 3050 for less money (under $200).
@sessizinsan1111 2 months ago
Hi mate, where is the xformers install part? I'm getting this: 'No module 'xformers'. Proceeding without it.'
@mrrevengaa 2 months ago
Amazing stuff, great explanation way better than many tutorials. Thank you for your work and time.
@eyoo369 2 months ago
Hope you’re ok man ❤ Always enjoyed watching your videos
@Prasath-vm1sn 3 months ago
I don't know, maybe it made the model schizophrenic; maybe this is how schizophrenia works in humans too. Maybe studying stable diffusion more and more in the context of the human psyche and thinking process will make literal sense. Let's see how it goes. Really great work buddy, loving it, thank you
@Channel_2020_ 3 months ago
you are a godsend, sir
@codebycandle 3 months ago
Found your call to "make something" (and not simply "comment or subscribe") refreshing. Thus, you've earned both, sir.
@KartikayMathur-y8e 3 months ago
thanks man
@Sarah-dj1es 3 months ago
Wow! This is the best explanation I have read/heard, and I have been looking through a lot of papers on aesthetic personalization. I am a postgraduate researcher writing a paper on this. Is there a way to get in touch soon-ish? My use case is very specific and I could really use some input.
@JunaidAzizChannel 3 months ago
Man casually delivers a masters degree course with a research thesis in 20 minutes
@DataScienceGuy 4 months ago
Sooo useful video, thx!
@derekghh 4 months ago
I've been programming for 30 years. This AI brings me back to the early days of computing, when hacking was so much fun.
@PawFromTheBroons 4 months ago
Having the beach be *Sandy* instead of Aandy, would most certainly make it more beachy.
@Clabear 4 months ago
Love your video! great explanation!
@sergetheijspartner2005 4 months ago
I would say it is the difference between stracciatella ice cream and vanilla ice cream with chocolate sprinkles. Both are vanilla ice cream, but with stracciatella you trained the whole ice cream to be different: whatever you do you will always get stracciatella, and it would be a hell of a job to get only vanilla out of that box. With regular vanilla, you can add the chocolate sprinkles where it is satisfactory to you, or just omit them according to your taste. It's also a lot faster, because you just add a bit to the big model without losing the big model, whereas with stracciatella you change the model into something completely different and you can't fine-tune it anymore.
@haohaocreates 4 months ago
this is such a great video, would you be able to make a video on IPAdapters?
@АлександрИгнатов-ю9з 5 months ago
It doesn't work! After March and the update to 1.9.0++, a lot of things have disappeared in SD.
@ahmadzaka9885 5 months ago
Man this video is so helpful. Ty
@JarppaGuru 5 months ago
0:48 yeah, that's the current world: bloat the internet and get likes. It's YouTube that makes more, and you get a worthless wall piece
@jonm6834 5 months ago
I appreciate the technical explanation but, in all honesty, I boiled it down after a single picture: ControlNet prioritizes image-to-image for denoising the subject, and text-to-image fills in the details.
@baseddoggie 5 months ago
I'm not sure the numbers about Dreambooth downloads are accurate. It seems he got that number from the number of "checkpoints" (aka full-size models) downloaded from Civitai, but I'm not so sure most of those were made with Dreambooth; a lot (if not the majority) are model merges, which is not the same thing. Just thought I'd mention that.
@mat5844 5 months ago
Fkn love this
@ivizlabYT 6 months ago
on training, I got this error: Exception training model: 'type object 'LoraLoaderMixin' has no attribute '_modify_text_encoder''. any thoughts?
@Mika43344 6 months ago
outdated?
@suryaprasathramalingam2421 6 months ago
thanks for the short explanation. Loved it!
@JordanPetersonTech 6 months ago
There are a few different ways to remove backgrounds in the stock AUTOMATIC1111 interface. Is there a reason you downloaded an additional script? Are the stock functions not working? Just curious, because while I was previously able to change backgrounds, now I seem to get a mixed bag of results, which by no means compares to your time of 11 minutes. A video, or just a comment, on why this is the background method you're choosing would be interesting. Thanks for the how-to, bro. Peace
@yahyajapan 6 months ago
Excellent !!!!
@Nerf_Jeez 6 months ago
Niiiiiceee!! Very comprehensive
@aarondavidlewis 6 months ago
I couldn't follow these instructions because the launch.py file calls a separate modules\launch_utils.py, and the setup of that file is different from what's shown here... BUT if I just added the argument --xformers to the webui(.bat) command line, it did the same thing. So just launch SD by typing 'webui --xformers' without the quotes
@MrBottleNeck 5 months ago
This worked for me, thanks! I added 'webui --xformers' into the file 'webui-user.bat' so it is automatically run when SD is started. As a side note, the reason I needed xformers: my 24GB RTX 3090 was running out of memory even on images with dimensions under 500x500, but now I can upscale even 100x-sized images with no problem
@workplaydie 6 months ago
I really like your videos btw
@workplaydie 6 months ago
What about the argument that the art is stolen? Stable Diffusion, etc., were trained on images without the artists' consent.
@tag_of_frank 6 months ago
My intuition says a hypernetwork is better than a LoRA; a hypernetwork would have more layers than a LoRA.
@tag_of_frank 6 months ago
Are they training for a specific sampler, and if so, how?
@edkins1 7 months ago
It's a fat dislike for this kind of tutorial. You look smart enough, but why did you place your talking head in the upper-left corner? That is a big fail.