
Style2Image in ControlNet (T2I) 

Sebastian Kamph
148K subscribers · 72K views

Published: 27 Aug 2024

Comments: 183
@sebastiankamph · 1 year ago
The FREE Prompt styles I use here: www.patreon.com/posts/sebs-hilis-79649068
@chrisdixonstudios · 1 year ago
Dude, you are surfing the wave of Stable Diffusion tubularly in an endless summer on a perfect wave 🌊. Thanks for keeping us up to speed 🚤
@sebastiankamph · 1 year ago
Happy to be along for the ride! 🏄‍♀
@theaiplaybook · 1 year ago
I totally agree with you. With his videos, it's much easier to stay updated.
@marekpietrak8279 · 1 year ago
Sounds like a prompt: "surfing the wave of Stable Diffusion tubularly in an endless summer on a perfect wave"
@chrisdixonstudios · 1 year ago
@@marekpietrak8279 Yes, it is! Here is Sebastian having fun navigating for us all: "Dude, you are surfing the wave of Stable Diffusion tubularly in an endless summer on a perfect wave 🌊. Thanks for keeping us up to speed 🚤" Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 683630813, Size: 768x768, Model hash: ad2a33c361, Model: v2-1_768-ema-pruned. Orrr, a little more like young Bob Ross with your quote: "surfing the wave of Stable Diffusion tubularly in an endless summer on a perfect wave" Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 10228519, Size: 768x768, Model hash: ad2a33c361, Model: v2-1_768-ema-pruned. Imagine the coffee house talk from a group of A.I. enthusiasts and their vernacular after a few cups of espresso 🤩🍮🍻
@sheriff2077 · 1 year ago
The dad jokes are evolving faster than the AI itself.
@sebastiankamph · 1 year ago
Gotta fall back on something when AI takes over.
@chrisdixonstudios · 1 year ago
@@sebastiankamph Soo what did da nuclear scientists say when dey finally achieved a safe fusion reaction? ...we got da Stable Diffusion!!! You may use that one anytime 😉
@aiv0t · 1 year ago
@@sebastiankamph Dad jokes by ChatGPT when?
@aiv0t · 1 year ago
Already started 'cause I got curious: Why did ChatGPT take up gardening? Because it wanted to become a sage AI!
@sebastiankamph · 1 year ago
😂
@steveschreiner7444 · 1 year ago
For those who didn't know (like myself): the multi-ControlNet tabs are set up at Settings -> ControlNet -> set the "Multi ControlNet" slider to two. (A code sketch of the same two-unit idea follows below.)
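Outside the webui, diffusers exposes the same "several adapters at once" idea through MultiAdapter. A minimal sketch, assuming the TencentARC depth/canny checkpoints and the StableDiffusionAdapterPipeline API; note the clip_vision style adapter from the video is not part of this pipeline, and the model IDs and parameters below should be verified against the current diffusers docs:

```python
import torch
from diffusers import MultiAdapter, StableDiffusionAdapterPipeline, T2IAdapter
from diffusers.utils import load_image

# Two adapters stacked, mirroring two enabled ControlNet units in the webui.
adapters = MultiAdapter([
    T2IAdapter.from_pretrained("TencentARC/t2iadapter_depth_sd14v1", torch_dtype=torch.float16),
    T2IAdapter.from_pretrained("TencentARC/t2iadapter_canny_sd14v1", torch_dtype=torch.float16),
])
pipe = StableDiffusionAdapterPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", adapter=adapters, torch_dtype=torch.float16
).to("cuda")

depth_map = load_image("depth.png")  # placeholder conditioning images:
canny_map = load_image("canny.png")  # precomputed depth and canny maps
image = pipe(
    "portrait of a woman, oil painting",
    image=[depth_map, canny_map],
    adapter_conditioning_scale=[0.8, 0.8],  # per-unit weight, like the sliders
).images[0]
image.save("out.png")
```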
@middleman-theory · 1 year ago
I just get random images with no likeness at all. I'm using A1111 and the latest ControlNet. Also, I don't see plain names like "depth" and "style": I get three different versions of depth (depth_leres, depth_midas, depth_zoe) under preprocessor, and for Model I only see coadapter-style-sd15v1. I tried them all, but nothing works. I'm not putting anything in the prompt or negative, just trying to get it to work with the existing image.
@duplicatemate7843 · 1 year ago
Any fix, bro? Same for me :(
@anatoliysavitskiy6371 · 1 year ago
I'm afraid I also fail to see the resemblance in the final result. The only thing preserved from the original picture is the posture.
@rijujakhar8771 · 3 months ago
Same.
@lujoviste · 1 year ago
Does anyone else get some random image instead of changing styles?
@Herbstleid · 1 year ago
Hi, where do I get "clip_vision"?
@k-1072 · 1 year ago
Searching for it as well.
@cerspence · 1 year ago
I just come for the jokes, I don't even know what Stable Diffusion is.
@Mimeniia · 1 year ago
What do you call a horse stable that ensures an even spread of manure odour? Stable Diffusion.
@MonologueMusicals · 1 year ago
I don't know what I'm doing wrong. I get identical images whether the clip_vision style is on or off. It has zero effect.
@42na4ever · 1 year ago
Check the seed, it should be -1.
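(For context: in A1111 a seed of -1 means a fresh random seed on every generation, while any other value pins the starting noise so runs are repeatable.)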
@HikingWithCooper · 1 year ago
Another day, another leap forward. Thank you for bringing us along!!
@sebastiankamph · 1 year ago
Happy to have you along for the ride! 🌟
@lawrence9239 · 1 year ago
I know, right? What a time to be alive!
@alenwesker9552 · 1 year ago
So powerful. I was searching for the T2I color adapter, but the other models are way more powerful and useful than that.
@inv_der2350 · 1 year ago
For me clip_vision does nothing at all. Seems to be happening to a lot of people. Could you share a solution?
@hentaioniv1167 · 1 year ago
Try a non-Euler(a) sampling method, DDIM for example (works for me).
@lizng5509 · 1 year ago
Hi, I have the same problem. Have you figured it out? Thanks.
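The sampler tip above, sketched in diffusers terms for reference (a minimal sketch, not the webui's internals; the checkpoint ID is a stand-in):

```python
import torch
from diffusers import DDIMScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Swap the ancestral default for DDIM, mirroring the "use DDIM instead of
# Euler a" suggestion above; the new scheduler reuses the pipeline's config.
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
```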
@ThePhillShow · 1 year ago
This isn't working at all for me. No idea what's going wrong, but I'm pretty much just getting noise. Enable is checked on both images. Has this process changed?
@SoundGuy · 1 year ago
I don't see clip_vision or a few of the others in the preprocessor list. What do I do to get them? Also, I didn't see any YAML files being downloaded; is that related?
@dexter0010 · 1 year ago
How did you put the Add Lora and Hypernetwork dropdowns up top?? Edit: also, where do I find clip_vision? I haven't found it yet and I have everything downloaded.
@JohnVanderbeck · 1 year ago
Settings -> User Interface -> Quicksettings list
@dexter0010 · 1 year ago
@@JohnVanderbeck Thanks! What do I add there?
@JohnVanderbeck · 1 year ago
@@dexter0010 Just add the name of any settings control you want in the top quick bar. The easiest way to find a control's name is to change the setting and hit Apply; the name will be listed at the top where the changes are shown.
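For example, a Quicksettings list of `sd_model_checkpoint, sd_vae, sd_lora, sd_hypernetwork` is the kind of value that puts the checkpoint, VAE, Lora, and Hypernetwork dropdowns at the top (option names assumed from A1111 builds of that era; check your own Settings page for the exact names).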
@deema7345 · 1 year ago
Yeah, that's a pretty fun feature to play with.
@gigaganon · 1 year ago
I did what you did but it gives me absolutely no result. I just get a jumbled mess of textures, not even a shape is left. I don't know what I'm doing wrong.
@user-kj9yl3kz7p · 1 year ago
I did exactly what you instructed, but when I hit render it doesn't work, it gives me a random image. Please help me.
@sarpsomer · 1 year ago
Please don't do time (frame) skips while editing your video. Because there are lots of tweaks, sliders and dropdown menus, it is really hard for us to follow. I had to stop, go 10 seconds back, stop, etc. to get the workflow. I mean, the video is 6:32 but it took me 2x that because of the skips. Don't get me wrong; I'm learning a lot from you and from this video. Thanks for everything.
@sebastiankamph · 1 year ago
Thanks for the tip! The reason I cut is I don't want people to get bored 😊
@Ur3rdiMcFly · 1 year ago
@@sebastiankamph Gotta get some more energy in your voice, man!
@ahsookee · 1 year ago
@@Ur3rdiMcFly I disagree, there's already enough YouTube content with too much energy. I just recently saw someone under a different video of his compliment the relaxed presentation style, and I agree: it's a lot more soothing to watch for technical content.
@TheAiConqueror · 1 year ago
@@sebastiankamph Never bored 🫡
@marekpietrak8279 · 1 year ago
@@sebastiankamph We've got the arrow keys if need be :D
@FrancoANioi · 1 year ago
You are the boss, you know that?
@3oxisprimus848 · 1 year ago
I think he does.
@sebastiankamph · 1 year ago
No, you're the boss! 😘
@KkommA88 · 1 year ago
Once again a useful video! Thanks, Seb!
@sebastiankamph · 1 year ago
My pleasure!
@carlosramon6102 · 1 year ago
Has anyone got the fp16 safetensors version from webui working? The one from Tencent shown in the video works, but the webui version seems to have zero influence on the generated image.
@miguelarce6489 · 1 year ago
Hey, great video! I can't get ControlNet to work on text2img, it generates random images. Any help?
@aggroaperture · 1 year ago
Same issue. Any solution?
@AgustinCaniglia1992 · 1 year ago
This is simply amazing.
@tobiasroth8169 · 1 year ago
Guys, I finally discovered the problem a lot of people in this comment section had (including myself): you need to put all ControlNet models in "stable-diffusion-webui\extensions\sd-webui-controlnet\models" and NOT in "stable-diffusion-webui\models\ControlNet" :) (A download sketch targeting that folder follows below.)
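A hypothetical sketch of fetching the adapter weights straight into the folder that the fix above names. The repo layout and file names are assumptions; check https://huggingface.co/TencentARC/T2I-Adapter for the real ones:

```python
import shutil
from pathlib import Path
from huggingface_hub import hf_hub_download

# The folder the extension actually scans (per the comment above).
dest = Path("stable-diffusion-webui/extensions/sd-webui-controlnet/models")
dest.mkdir(parents=True, exist_ok=True)

for name in ("t2iadapter_style_sd14v1.pth", "t2iadapter_depth_sd14v1.pth"):
    # Assumed filenames under a models/ prefix in the TencentARC repo.
    cached = hf_hub_download("TencentARC/T2I-Adapter", filename=f"models/{name}")
    shutil.copy(cached, dest / name)  # copy out of the HF cache into the scan path
```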
@LeeeroyDex · 1 year ago
Hello sir, at the top of the UI there are "SD VAE", "Add Lora to prompt" and "Add Hypernetwork to prompt". What are these 3 things?
@thedevilgames8217 · 1 year ago
Do you know how to fix the CUDA_LAUNCH_BLOCKING=1 error from hires?
@tobinrysenga1894 · 1 year ago
I was getting much worse results until I switched to the deliberate_v2 model I noticed you were using. What is that model supposed to be? I couldn't find any info on it; I just happened to find it for download.
@jurandfantom · 1 year ago
Small note: canny from T2I works (the 300 MB file can replace the 700 MB one), and vision as well, but the others look like they don't work at all? (I removed color, as that one doesn't work either.) Do your own tests and then delete the 700 MB files.
@SomeAB · 1 year ago
The original Hugging Face repo for this now has a "CoAdapter". Please explain or do a video on that.
@guycohen1958 · 1 year ago
Can you please update this for the latest ControlNet 1.1? The naming of the adapters is different now. Thank you.
@TheAiConqueror · 1 year ago
Seb the man! 💪🤴🏼
@sebastiankamph · 1 year ago
No, you're the man! Thank you, my friend 💲💲💲. Maybe soon I can invest in some stuff to have in the background of the videos too. Did you like the tree? 😁
@TheAiConqueror · 1 year ago
@@sebastiankamph Yes, the tree is cool. A bonsai would be cool too; it would go with your quiet videos and round off the mood. 😁🫡
@sebastiankamph · 1 year ago
@@TheAiConqueror I love it!
@74mihain · 1 year ago
RuntimeError: shape '[1, 64, 1]' is invalid for input of size 0 🤷🤷🤷
@therookiesplaybook · 1 year ago
Where can I find clip_vision? I have the latest ControlNet 1.1 and it's not in there.
@JanKadlec · 1 year ago
Same.
@zizyip6203 · 1 year ago
T2IA
@user-rl4hd2iz6c · 1 year ago
I don't know what I'm doing wrong, but I always end up with either b/w or sepia images. I changed the weight and guidance settings on both tabs - doesn't help (( Maybe it only works on square photos? I just wanted to restyle a vertical 720x1280 photo.
@AZTECMAN · 1 year ago
Not sure if this is helpful:
- I was getting black-and-white images when I used the light direction for the image to imagine
- one solution is to increase denoising strength to 90 or 95%
- you can also put "grayscale" in the negative prompt
@user-rl4hd2iz6c · 1 year ago
@@AZTECMAN I didn't use any light sources, nor a prompt or negative. I did everything as in this video: I put in a photo of people and used an anime picture in the second unit. The result doesn't come out as anime, and the colors disappear too ( The whole point here is to change the style of the original picture without resorting to a prompt.
@LilCurlyBlonde · 1 year ago
How did you get the ControlNet tabs to be side by side instead of one after another? It would really help in terms of space on the page and keeping everything neat and in focus.
@sebastiankamph · 1 year ago
Update to the latest version (I show how in the video).
@LilCurlyBlonde · 1 year ago
@@sebastiankamph Thank you, it's a godsend that they thought about it.
@flyashy8397 · 1 year ago
I am trying to follow this tutorial to a T, but all I get are random images that bear no resemblance to the two control images. I have a person and a comic style as the two control images, and I get landscapes etc. as the generation result. It is as if SD is ignoring ControlNet altogether and generating promptless images. Any idea what could be going wrong?
@sebastiankamph · 1 year ago
Honestly, regular ControlNet models are more consistent than T2I, so you can try working with those as well. (See the sketch below for what that looks like in code.)
@flyashy8397 · 1 year ago
@@sebastiankamph Thank you so much! I'll try those out. Cheers!
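For anyone curious what "the regular ControlNet models" look like outside the webui, a minimal diffusers sketch, assuming the lllyasviel depth checkpoint and a precomputed depth map (file name is a placeholder):

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

depth_map = load_image("depth.png")  # placeholder: a precomputed depth map
image = pipe(
    "portrait of a woman in comic style",
    image=depth_map,           # conditioning image for the ControlNet
    num_inference_steps=20,
).images[0]
image.save("controlnet_out.png")
```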
@76abbath · 1 year ago
Thanks a lot for the video! Your channel is very good!
@sebastiankamph · 1 year ago
Thank you kindly! 😘
@matthewma7687 · 1 year ago
Great sharing. I wanted to follow this video and do it myself, but found that I don't have the clip_vision option. How can I get it? Is clip_vision integrated into the latest sd-webui-controlnet?
@matthewma7687 · 1 year ago
Reinstalled sd-webui-controlnet and it was fixed. Thank you.
@riccardobiagi7595 · 1 year ago
@@matthewma7687 Hi! Can I ask how you reinstalled sd-webui-controlnet? I don't want to mess up :D
@TMaekler · 6 months ago
Nice. Wondering if there is a style adapter for SDXL? Couldn't find one anywhere...
@xellostube · 1 year ago
I have 2 problems: 1 - I get black-and-white creations. 2 - The pose is similar but the character is unrecognizable (I'm trying to use this technique to stylize a couple of portraits, but the person in the creation is way too different).
@killabook · 1 year ago
I have exactly the same problems. It does not look like the photo.
@K-A_Z_A-K_S_URALA · 1 year ago
It doesn't work!
@lucan42 · 1 year ago
Is there a way to input a directory and let it process more frames automatically, just like in normal img2img?
@androidgamerxc · 1 year ago
I am getting an unknown error on all of the extensions.
@sebastiankamph · 1 year ago
I recommend you remove them and reinstall from the Extensions tab. And update to the latest Auto1111.
@edwhite207 · 1 year ago
Great videos and jokes! Where did the aspect ratio buttons come from?
@sebastiankamph · 1 year ago
You can find that extension in the Extensions tab. Aspect ratio something something.
@deema7345 · 1 year ago
I also used to mess around with it in img2img.
@42na4ever · 1 year ago
For some reason it doesn't work for me as well as it does for you, although I repeated all the steps exactly. If you leave everything as you have it, the pose is taken from the second picture rather than from the first one where depth is turned on; and if you reduce the weight of the second picture, the style disappears. Unclear :(
@sebastiankamph · 1 year ago
I had lots of issues when testing this, so I am not surprised 😅
@herval · 1 year ago
Same here.
@Mocorn · 1 year ago
I must admit I'm having some problems making use of this style transfer. I'm possibly going about this completely wrong, but I'm starting with an image of a person and want to apply a style while retaining the likeness of the person. I feel like the likeness gets lost in the process.
@sebastiankamph · 1 year ago
You'll have to play with the guidance start setting. It is, however, very finicky.
@Mocorn · 1 year ago
@@sebastiankamph Yeah, I played around with this some more after my comment and got closer, but I agree, it is quite finicky.
@didiernaimdefli · 1 year ago
You are the boss.
@EvaKaza · 1 year ago
Sorry, where did you get clip_vision from?
@Rscapeextreme447 · 1 year ago
Amazing!
@WillFalcon · 1 year ago
There is no "clip vision".
@Tymon0000 · 1 year ago
Why did ChatGPT decide to join a gym? It wanted to get better at processing weights.
@paolovolante · 1 year ago
Thank you for your videos. I'm a Mac user, so I'm out of the game because all development happens on Windows or Linux (I suppose). I do have a paid Colab subscription, though. Is there a maintained ControlNet/Stable Diffusion implementation I can use remotely, as far as you know?
@ahmedsiha · 1 year ago
0:33 I got chills. Anyway, great content as usual, thank you.
@sebastiankamph · 1 year ago
Glad you enjoyed it! 😊😘
@coda514 · 1 year ago
What did the grape say when it got crushed? Nothing, it just let out a little wine. Seriously, the tools at our disposal are unbelievable.
@sebastiankamph · 1 year ago
Hah, that's a good one 😁
@AlphaNature · 1 year ago
Thanks
@evelynintrance · 11 months ago
Hey, the 1.5 models (canny/depth etc.) are much smaller at this download location than at the other location you referred to in another ControlNet video. What is the difference? Will they work the same if I get them all from here?
@_perp · 3 months ago
I get this error no matter what I try: "AttributeError: 'dict' object has no attribute 'shape'". Any ideas?
@Valerija.M · 1 year ago
Where did the preprocessor files come from? They don't exist and they didn't appear.
@SoundGuy · 1 year ago
I have the same problem.
@TheAlgomist · 1 year ago
I choose YOU to teach me this. Thank you 🙏
@sebastiankamph · 1 year ago
Haha, you're welcome! 😁
@takeanappan · 1 year ago
Hi, has anyone got this issue: "RuntimeError: 'LayerNormKernelImpl' not implemented for 'Half'"? :((
@takeanappan · 1 year ago
Everything worked fine until I updated the ControlNet extension... now I can't even run the webui...
@pixeljauntvr7774 · 1 year ago
Do the images you feed ControlNet need to be PNG files with SD data embedded, or does any old JPG work?
@marvin6844 · 1 year ago
Is there a way to get it to act more like a filter rather than a blend of the two images? I'd like to maintain the exact likeness of a portrait while applying the new style to it. For example, if I uploaded a photo of my face and used a black-and-white manga drawing as the style reference, it would morph my face. I just want it to look exactly like me but drawn with a manga pen. Is that possible?
@Joe-ce6cc · 1 year ago
Here's one way: simply start with a fresh UI, put your starting picture in the img2img tab, type in some prompts about what you want (oil painting or whatever), play with the settings, and voila. If you want the result in the SAME style as your own drawings, you've got to train a model using multiple drawings of yours so the AI can understand your style, then apply that LoRA to your prompt on top of your original photo in img2img and start generating. Same process for doing slideshow videos. (A minimal img2img sketch follows below.)
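The img2img idea above in diffusers form; a minimal sketch assuming the stock v1.5 checkpoint (swap in your own model or LoRA; the file names are placeholders):

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init = load_image("portrait.jpg").resize((512, 512))  # placeholder input photo
out = pipe(
    "oil painting of a woman, detailed brush strokes",
    image=init,
    strength=0.5,        # lower keeps more of the original likeness
    guidance_scale=7.0,  # roughly the webui's CFG scale
).images[0]
out.save("styled.png")
```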
@jon2478 · 1 year ago
@@Joe-ce6cc Do you know what model is best for this?
@johncressmanci · 1 year ago
I have been waiting for something like this! I just wish it were a little better and more consistent.
@macbetabetamac8998 · 1 year ago
Does it work better with any particular SD models?
@sebastiankamph · 1 year ago
I tried many and it worked well with all I tested. Faces worked best for me.
@TheMaxvin · 10 months ago
Super, and what about the IP-Adapter?
@ufukzayim6689 · 1 year ago
clip_vision does not appear in the list. What can I do?
@zizyip6203 · 1 year ago
T2IA clip vision
@arothmanmusic · 1 year ago
For some reason my output image doesn't look like the source models. I have a photo of a woman on one side and a painting of a woman on the other, but my output is some seemingly random image that bears no resemblance to either of them... I get animals, landscapes... What am I missing?
@aymanwadi5085 · 1 year ago
I am trying this 4 months after the tutorial and it is a total failure... Has something been updated here or there that makes this not work?
@cyril1111 · 1 year ago
I've been trying to play with it for two days, but there's a problem on Mac and I can't play with it yet :( Opened a bug report on GitHub and am waiting for a fix. Hopefully soon...
@sebastiankamph · 1 year ago
😥 I feel you
@BadCat667 · 1 year ago
I can't get past that warning: "StyleAdapter and cfg/guess mode may not works due to non-batch-cond inference".
@Tymon0000 · 1 year ago
The T2I style model works for me, but I tried T2I color: "RuntimeError: pixel_unshuffle expects height to be divisible by downscale_factor, but input.size(-2)=257 is not divisible by 8", and T2I canny: "RuntimeError: pixel_unshuffle expects height to be divisible by downscale_factor, but input.size(-2)=1 is not divisible by 8". Guess the workflow for them is completely different?
@herval · 1 year ago
I get this with some models too (e.g. T2I sketch), still can't figure out how to fix it.
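The error message itself says the conditioning image's height and width must be divisible by 8, so one low-tech workaround (a sketch of an assumption, not a fix from the video) is to snap the image size before handing it to the adapter:

```python
from PIL import Image

def snap_to_multiple(img: Image.Image, m: int = 8) -> Image.Image:
    """Resize so width and height are exact multiples of m."""
    w, h = img.size
    return img.resize((max(m, w - w % m), max(m, h - h % m)))

# Placeholder file name; run this on the image that triggers pixel_unshuffle.
cond = snap_to_multiple(Image.open("style_reference.png").convert("RGB"))
cond.save("style_reference_snapped.png")
```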
@morfolabs · 1 year ago
Nice!!!!!!!!!!!!!!!!!!!!!!!
@herval · 1 year ago
I'm not sure if I'm doing something very wrong - I followed the same steps, but the image I get out doesn't have anything to do with the input. It's almost like the first model is getting ignored...
@sebastiankamph · 1 year ago
T2I is really finicky, tbh, and I'm not sure if it's buggy or not. Sometimes I had to restart everything when it stopped working for me.
@dadabranding3537 · 9 months ago
I am having trouble replicating this in ComfyUI. Can you advise? Or anyone?
@CoconutPete · 6 months ago
Now we have CoAdapters... I've heard they are better than T2I.
@havemoney · 1 year ago
ControlNet unit 0 + unit 1 doesn't work with AMD :(
@rjhfsv8564 · 1 year ago
Bummer. Not sure why, but it's not downloading the YAML files with the models and I can't see them anywhere. Any thoughts on finding/getting them?
@sebastiankamph · 1 year ago
Try to start everything up and test a render, and you should get them.
@talessin · 1 year ago
What are you using for the aspect ratio buttons for sizes?
@BassFuckingBlowRE · 1 year ago
I am doing something wrong. The style is not being applied the way it is in your first examples. I can't even get the cartoonish style applied. I am rewatching this video for the 5th time.
@sebastiankamph · 1 year ago
T2I is so finicky. Honestly, just use the regular ControlNet models (depth & canny) and learn them, and you'll get more consistent results.
@BassFuckingBlowRE · 1 year ago
@@sebastiankamph Thank you, ma dude! I will.
@larryboles5064 · 1 year ago
I'm trying this out and not having much luck with it. The results tend to be pretty terrible. I wonder if it might be because I'm using the pruned safetensors ControlNet models instead of the full-size ones.
@joseluisdelatorre3440 · 1 year ago
For image generation it's the same; the big models are for training and merging.
@kernsanders3973 · 1 year ago
The examples in your thumbnail are fantastic, but the examples you produce in the video look almost as bad as the ones I'm getting. I would rather have seen how you accomplished the thumbnail examples. So far, just like your in-video examples, it's producing a mess on my side, not really transferring style. It's almost better to train a LoRA model and use normal img2img with ControlNet than this. Unless I'm missing something and there is a secret setting that gets the thumbnail-quality results.
@Silversith · 1 year ago
Wouldn't that work well for consistent characters?
@deimantassmeledis7567 · 1 year ago
Is it just for people's faces, or can you do it on animals and objects as well?
@sebastiankamph · 1 year ago
You can do it on anything, but I found that faces gave good, consistent results.
@juanom2903 · 1 year ago
Does anyone know where the Restart button is? Thank you all!
@chariots8x230 · 1 year ago
It's interesting, but it seems to change the details of the character's appearance a bit too much when changing the style. For example, the hair is one color in the original image, but in the output image it contains multiple colors. The outfit becomes different too, and the background seemed to change as well.
@cesar4729 · 1 year ago
All of that has very easy solutions, tbh.
@sebastiankamph · 1 year ago
You can work around that in multiple ways. One would be to prompt things in, which is probably the quickest if that works for your particular image.
@MABtheGAME · 1 year ago
Hey mate, I'm getting random images, not like yours. Completely random.
@oceaco · 1 year ago
Doesn't work for me.
@Oxes · 1 year ago
Is there a Google Colab version of this style-to-image workflow?
@ShawnFumo · 1 year ago
If you have a simple Colab that copies the original ControlNet models, you should be able to copy-paste the new ones too.
@thirdshift7976 · 1 year ago
Good videos, man, but that headrest is giving Stephen Hawking vibes.
@sebastiankamph · 1 year ago
That's nice. He was a real MVP 🌟
@arothmanmusic · 1 year ago
Oh good, I'm not the only one. I sort of assumed Sebastian was in a wheelchair. Not that it would matter one way or the other...
@ackkipfer · 1 year ago
What is your system? GPU especially...
@sebastiankamph · 1 year ago
RTX 3080
@ackkipfer · 1 year ago
@@sebastiankamph Damn, good GPU. My 1060 6GB crawls behind yours.
@peterbelanger4094 · 1 year ago
@@ackkipfer I also have a 1060 6GB and I keep getting "CUDA out of memory" errors. I can't do this until I upgrade ☹ Can't do any CLIP vision or multi-ControlNet, and it's slow at everything else: 45 sec to 1 min for a 512x512, averaging 1 iteration a second. Can't go above 1280x720 either; can't do full HD.
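(For cards in that range, A1111's `--medvram` or `--lowvram` launch flags, added to COMMANDLINE_ARGS in webui-user.bat, trade speed for memory and can sometimes get multi-ControlNet running; whether they free enough VRAM for CLIP vision on 6 GB is not something the video confirms.)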
@K-A_Z_A-K_S_URALA · 1 year ago
Respect! From Russia, brother... but it doesn't work for me (( What a mockery.
@bryan98pa · 1 year ago
First!!
@peterbelanger4094 · 1 year ago
Now you are last. 😅
@oleksandrshkolnyi2227 · 1 year ago
What about a new video for beginners about setting up from scratch? I mean a full setup, because a lot of new changes have come and there are lots of videos covering them. But if you want to set up from scratch, you need to look through all of them and don't know which video is up to date and which isn't anymore. I hope my point is clear. Thanks) I like your videos)
@sebastiankamph · 1 year ago
I feel you! You could probably get to 95% with my Ultimate guide and then my first ControlNet video. With it all changing so quickly, it's hard to make a comprehensive guide.
@oleksandrshkolnyi2227 · 1 year ago
@@sebastiankamph I understand, thanks.
@nothappyz · 1 year ago
Bro, can you please turn off caret browsing with F7 in your Brave? It's bugging me 💀
@gloorbit5471 · 1 year ago
All I get is naked women. Even when I use your negative SFW prompt.
@blackvx · 1 year ago
What if you have fans who don't want to know anything about AI but just come here for the jokes... 😅
@sebastiankamph · 1 year ago
I got you, almost all jokes are in the introduction now. No more hidden jokes inside the videos... or? 😏😘
@Mnmnmnmnmmmnmnmnmnmnmnmnmnmnmn
Why does he raise his eyebrows like that? Is it an indication of an AI-generated joke?