
Style Transfer Adapter for ControlNet (img2img) 

enigmatic_e
35K subscribers · 31K views

Very cool feature for ControlNet that lets you transfer a style.
HOW TO SUPPORT MY CHANNEL
-Support me by joining my Patreon: / enigmatic_e
_________________________________________________________________________
SOCIAL MEDIA
-Join my discord: / discord
-Instagram: / enigmatic_e
-Tik Tok: / enigmatic_e
-Twitter: / 8bit_e
- Business Contact: esolomedia@gmail.com
_________________________________________________________________________
Details about Adapters
TencentARC/T2I-Adapter: T2I-Adapter (github.com)
Models
huggingface.co/TencentARC/T2I...
EbSynth + SD
• Stable Diffusion + EbS...
Install SD
• Installing Stable Diff...
Install ControlNet
• New Stable Diffusion E...

Entertainment

Published: 2 Aug 2024

Comments: 83
@ixiTimmyixi · a year ago
I can't wait to apply this to my AI animations. This is a huge game changer. Using less in the text prompt area is a step forward for us. Having two images as the only driving factors should help a ton with cohesion/consistency in animation.
@enigmatic_e · a year ago
I agree. Definitely makes it easier in some aspects.
@snckyy · a year ago
Incredible amount of useful information in this video. Thank YOU!!!!!!
@enigmatic_e · a year ago
No problem 👍🏽
@digital_magic · a year ago
Great video :-) Thanks for sharing
@ramilgr7467 · a year ago
Thank you! Very interesting!
@BeatoxYT · a year ago
Thanks for sharing this! Very cool that they’ve added this style option. Excited for your next video on connecting it with eb synth. I’ll watch that next and see what I can do as well. But damn, these Davinci Deflicker/Dirt removal render times are killing me haha
@enigmatic_e · a year ago
I feel you on the deflicker. Sometimes stacking too many is not a good idea 😂
@BeatoxYT · a year ago
@@enigmatic_e have you found a good compromise? I tried just one and it wasn’t great. So I stuck with the 3 you stacked after the dirt remover. But 24 hours for a 30 second clip is unsustainable lol
@enigmatic_e · a year ago
@@BeatoxYT no not yet. I’m sure another faster alternative will come out soon.
@clenzen9930 · a year ago
Guidance start is about *when* it starts to take effect.
@CompositingAcademy · a year ago
Really cool, thanks for sharing! I wonder what would happen if you put 3D wireframes in the ControlNet lines instead of the generated ones; it could be very temporally stable.
@androidgamerxc · a year ago
@2:47 Thank you so much for that, I was thinking of reinstalling ControlNet just because of that.
@BoringType · 9 months ago
Thank you very much
@koto9x · a year ago
Ur a legend
@JeffFengcn · a year ago
Hi sir, thanks for making these good videos on style transfer. I have a question: is there a way to change a person's outfit based on an input pattern picture, using style transfer and inpainting? Thanks in advance
@sidewaysdesign · a year ago
Thanks for another informative video. This style transfer feature already makes Photoshop’s Style Transfer neural filter look sad by comparison. It’s clear that Stable Diffusion’s open-source status, enabling all of these new features, is leaving MidJourney and DALL-E in the dust.
@enigmatic_e · a year ago
Yea, it’s getting so good!
@HopsinThaGoat · a year ago
Ahhh yeah
@zachkrausnick5030 · a year ago
Great video! Trying to update my xformers, it seems to have installed a later version of PyTorch that no longer supports CUDA, and the version of xformers you used is no longer available. How do I fix this?
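A pair of pip commands like the following usually untangles a torch/xformers mismatch. This is only a sketch: it assumes an NVIDIA GPU and the CUDA 11.8 wheel index, so swap the index URL for whatever matches your driver, and run it inside the webui's venv.

```shell
# Reinstall a CUDA-enabled PyTorch build from the official wheel index
# (cu118 here is an assumption -- pick the index matching your CUDA version),
# then let pip resolve an xformers version built against that torch.
pip install --force-reinstall torch --index-url https://download.pytorch.org/whl/cu118
pip install -U xformers
```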
@GS195 · 7 months ago
It turned Barret into President Shinra 😂
@J.l198 · a year ago
I need help: when I generate, the result is way different from the actual image I'm using.
@christophervillatoro3253 · a year ago
Hey, I was going to tell you how to get After Effects to pull in multiple PNG sequences and auto-crossfade them. You have to pull them in and make each EbSynth out-folder its own sequence. Then right-click all the sequences and create a new composition; in the menu there is an option to crossfade all the imported sequences. Specify with the EbSynth settings. Voila!
@enigmatic_e · a year ago
Ahh ok! I will have to try this! Thank you for the info!!
@SHsaiko · a year ago
Great video man! I've been learning so much from your vids. It might be a rookie question, but I got stuck on the third ControlNet model when you choose clip_vision under the preprocessor; I don't seem to have that option. Is it because I have to use a certain version of SD? Thanks!
@enigmatic_e · a year ago
Thanks! Have you updated everything? Like SD and ControlNet?
@SHsaiko · a year ago
@@enigmatic_e oops. that solved it! thanks for the help! looking forward to your next vid man, great work :D
@mikhaillavrov8275 · a year ago
@@SHsaiko Please describe exactly what you did? I have updated all requirements but there is still no clip_vision in the left drop-down menu.
@judgeworks3687 · a year ago
Love your clear instructions. I'm following along but my system seems to stall on the HED and clip_vision ControlNets. Any tips for when this happens? I keep restarting. I'm trying the same steps, but first running only one ControlNet at a time to see if it works, then adding each ControlNet after it successfully runs. So far the HED is definitely slow to run. After this I'll try clip_vision/T2I style by itself (as one ControlNet tab).
@enigmatic_e · a year ago
What does your height and width look like? I ran into a similar problem and had to reduce the size to make certain parameters work. Might be that it can’t handle it
@judgeworks3687 · a year ago
@@enigmatic_e 512x512. I found this video (the guy mentioned some issues and how he got them fixed); I will watch it later, and I attached the link below in case it's of interest. The ControlNet HED tab seems to be an issue. Is there a reason you use 3 tabs of ControlNet? I'm testing one tab of clip_vision/T2I alone. It's still running. ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-tXaQAkOgezQ.html
@judgeworks3687 · a year ago
@@enigmatic_e In which of your videos do you show how to add git pull into the code? I think it was your video? I need to access my webui-user.bat to add something to the code, but I can't recall how to do that. Thanks if you have a link to the video where you showed that.
@enigmatic_e · a year ago
@@judgeworks3687 I think this one: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-qmnXBx3PcuM.html
@judgeworks3687 · a year ago
@@enigmatic_e yes this was it. Great video. I ended up uninstalling SD and re-installing from the video you sent me. Thank you!
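For anyone else searching: the usual trick is a one-line edit to webui-user.bat so A1111 updates itself on every launch. A sketch of the stock file with that line added; the empty `set` values are the A1111 defaults, not requirements.

```shell
@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=

REM Pull the latest A1111 code before starting the webui.
git pull

call webui.bat
```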
@User-pq2yn · a year ago
The color adapter works for me, but the style adapter does not. The Guidance Start value doesn't change anything. The result is the same as when ControlNet is turned off. Please tell me how to fix this? Thank you!
@miguelarce6489 · a year ago
Same happened to me. Did you figure it out?
@User-pq2yn · a year ago
@@miguelarce6489 The total number of tokens in the prompt and negative prompt should not exceed 75.
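A1111 itself shows the exact count in the corner of each prompt box (the `x/75` counter, which uses the CLIP BPE tokenizer). A rough offline sanity check can be sketched in Python; note that word/punctuation splitting undercounts compared to real BPE, so treat the result as a lower bound, and `rough_token_count`/`within_limit` are made-up helper names for illustration.

```python
import re

def rough_token_count(text: str) -> int:
    # Crude proxy for CLIP tokenization: count word and punctuation
    # chunks. Real BPE can split one word into several tokens, so
    # this is a lower bound on the true count.
    return len(re.findall(r"[A-Za-z0-9]+|[^\sA-Za-z0-9]", text))

def within_limit(prompt: str, negative_prompt: str, limit: int = 75) -> bool:
    # The commenter's tip: prompt + negative prompt combined should
    # stay at or under 75 tokens for the style adapter to take effect.
    return rough_token_count(prompt) + rough_token_count(negative_prompt) <= limit
```

When the check fails, trim boilerplate quality tags from the negative prompt first; they eat tokens fastest.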
@erdbeerbus · a year ago
This is really a cool way to get it... thank you! Did you explain how to bring a whole image sequence into Comfy to get your great 0:20 result? Thx in advance!
@enigmatic_e · a year ago
No I haven’t. I still need to get into comfy, still haven’t tried it yet.
@tonon_AI · a year ago
Any tips on how to build this with ComfyUI?
@gloxmusic74 · a year ago
Nice find bro!! Yeah, consistency is still a problem with video. I find lowering the denoising strength helps, but then you lose the style... it's a double-edged sword ⚔️
@enigmatic_e · a year ago
True, it’s always the struggle.
@Fravije · 8 months ago
Hello. What about style transfer in images? I'm looking for information about this but haven't found anything. For example, I want to make a series of images of animals. I have a photo of a tiger, a pencil drawing of a horse, a pencil drawing of a bull (but by a different artist), an ink drawing of a wolf, a watercolor drawing of a cheetah... and I want to transform them so that all these images are done in the same style, like as if they were painted by the same artist. Is there any product that can help achieve this goal?
@enigmatic_e · 8 months ago
Unfortunately this tutorial is outdated now. I haven't messed around with style transfer lately, so I don't know what's a good alternative at the moment.
@iamYork_ · a year ago
Looks like Gen-1 will have competition…
@PlayerGamesOtaku · a year ago
Hi, I have created more than 70 images with Stable Diffusion, and I would like to know how I can turn these photos into a moving animation. Could you help me?
@enigmatic_e · a year ago
Other than using Premiere Pro or any other editing software, I know there are websites that can turn your image sequences into videos. I'm not sure which one is a good choice though; I'll have to look into it.
@PlayerGamesOtaku · a year ago
@@enigmatic_e if you create a tutorial, or find the sites you mentioned before, let me know :)
@enigmatic_e · a year ago
@@PlayerGamesOtaku Working on it right now, actually
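One offline option that doesn't need a website is ffmpeg, which stitches a numbered PNG sequence into a video in one command. A sketch assuming frames named frame_0001.png, frame_0002.png, and so on (hypothetical names; adjust the pattern and frame rate to your files):

```shell
# 12 fps input, H.264 output; yuv420p keeps the file playable in
# browsers and most editors.
ffmpeg -framerate 12 -i frame_%04d.png -c:v libx264 -pix_fmt yuv420p animation.mp4
```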
@OsakaHarker · a year ago
Have you looked at the new Ebsynth_Utility extension to A1111?
@enigmatic_e · a year ago
Wait what??
@BeatoxYT · a year ago
@@enigmatic_e new enigmatic eb synth utility video incoming
@melchiorao9759 · a year ago
@@enigmatic_e Automates most of the process.
@RonnieMirands · a year ago
I'm not getting great results like you out of the box; I have to play a lot with the sliders before the style starts showing. Wondering what I'm missing here lol
@RonnieMirands · a year ago
I followed the instructions from the Aitrepreneur channel and it worked for me.
@enigmatic_e · a year ago
Happy you figured it out.
@ErmilinaLight · 3 months ago
Thank you! What should we choose as Control Type? All? Also, I noticed that generating an image with txt2img ControlNet from a given image takes a veeeeery long time, though my machine is decent. Do you have the same?
@enigmatic_e · 3 months ago
i believe there should be a box you can check that says "Upload independent control image"
@ErmilinaLight · 3 months ago
@@enigmatic_e THANK YOU!!!!
@theairchitect · a year ago
I tried this new ControlNet extension and got no style in the generated result. I removed all prompts (using img2img with 3 ControlNets active: canny + HED + t2iadapter with the clip_vision preprocessor). During generation this error appears: "warning: StyleAdapter and cfg/guess mode may not works due to non-batch-cond inference", and the generated result comes out with the style not applied =( Frustrating... I tried many denoising strengths in img2img and many weights on the ControlNet instances without success; the style is not applied to the final generated result =( I tried enabling "Enable CFG-Based guidance" in the ControlNet settings too, and it's still not working =( Anyone else getting this same issue?
@J.l198 · a year ago
I need help, I'm having the same issue: when I generate, it just generates a random image...
@beatemero6718 · 11 months ago
Why did you provide a link for the style adapter, but not for the clip_vision preprocessor?
@user-ec5hh2eq9v · 4 months ago
Hi! I'm really asking for help, I'm desperate :( The clip_vision preprocessor is not displayed (automatic1111), and I can't find where to download it. What am I doing wrong?
@clenzen9930 · a year ago
I made a post about making sure you deal with the yaml files, but I think it got deleted because it linked to Reddit. Anyway, there's some work to be done if you haven't.
@enigmatic_e · a year ago
Is it from the Stable Diffusion reddit?
@mayasouthmoor3339 · a year ago
Where do you even get clip_vision from?
@enigmatic_e · a year ago
If it's not in the link I provided, it might appear when you update everything.
@j_shelby_damnwird · a year ago
If I run more than one ControlNet tab I get the CUDA out-of-memory error (8GB VRAM GPU). Any suggestions?
@enigmatic_e · a year ago
Have you tried checking the Low VRAM option in ControlNet?
@j_shelby_damnwird · a year ago
@@enigmatic_e Thank you for responding. Yes, to no avail :-(
@enigmatic_e · a year ago
@@j_shelby_damnwird Try lowering the dimensions, that might help.
@j_shelby_damnwird · a year ago
@@enigmatic_e Will give it a go. Currently trying to output 1024 x 768; maybe 768 x 512 will do the trick. I really need to grab one of those fancy new GPUs :-/
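Besides the per-unit Low VRAM checkbox, A1111 has launch-level flags for small GPUs: `--medvram` and `--lowvram` are real command-line options. A sketch of setting them in webui-user.bat (treat the exact combination as a starting point, not a prescription):

```shell
REM In webui-user.bat: --medvram splits the model between VRAM and RAM;
REM --lowvram is more aggressive (slower, but fits smaller cards).
set COMMANDLINE_ARGS=--medvram --xformers
```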
@dexter0010 · a year ago
I don't have the clip_vision preprocessor. Where do I download it?
@enigmatic_e · a year ago
Did you update?
@ragdollmaster15 · a year ago
@@enigmatic_e I also can't find it. I did git pull inside the ControlNet folder and reinstalled it twice, and I still can't find it.
@ohyeah9999 · a year ago
Can this make videos, and is it free??? I tried Disco Diffusion, that's like a trial.
@dreamayy8360 · a year ago
Shows "where to download it" with a list of .pth files, then shows his folder where he's got safetensors and yaml files. Great tutorial... just making stuff up and not actually showing where or how you installed anything.
@K-A_Z_A-K_S_URALA · a year ago
It doesn't work!
@user-nc2hs4rp7l · a year ago
EbSynth + SD link?
@enigmatic_e · a year ago
My bad, just updated the link: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-47HpHOLkIDo.html