
Style Transfer Using ComfyUI - No Training Required! 

Nerdy Rodent
19K views

Visual style prompting aims to produce a diverse range of images while maintaining specific style elements and nuances. During the denoising process, they keep the query from the original features while swapping the keys and values with those from the reference features in the late self-attention layers.
Their approach enables visual style prompting without any fine-tuning, ensuring that generated images maintain a faithful style.
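The key/value swap described above can be sketched in a few lines. This is a hedged toy illustration, not the authors' implementation: `q_content`, `k_style`, and `v_style` are stand-ins for the query/key/value projections of one late self-attention layer.

```python
import numpy as np

def style_swapped_attention(q_content, k_style, v_style):
    """Toy sketch of the swap: keep the query from the content pass,
    but attend over the style reference's keys and values, so style
    statistics flow in while the content layout is preserved."""
    scale = q_content.shape[-1] ** -0.5
    logits = q_content @ k_style.T * scale
    # numerically stable softmax over the style tokens
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v_style
```

In the real model this replaces the usual self-attention (where q, k, and v all come from the same features) only in the late layers, which is what keeps content from the original while pulling style from the reference.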
My personal favourite so far - and yes, it works in ComfyUI too ;)
Want to help support the channel? Get workflows and more!
/ nerdyrodent
Links:
github.com/nav...
github.com/Exp... - WIP
github.com/Exp... - “legacy” (working) version
== More Stable Diffusion Stuff! ==
Install ComfyUI - • Install Stable Diffusi...
ComfyUI Workflow Creation Essentials For Beginners - • ComfyUI Workflow Creat...
Make Images QUICKLY with an LCM LoRA! - • LCM LoRA = Speedy Stab...
How do I create an animated SD avatar? - • Create your own animat...
Video-to-Video AI using AnimateDiff - • How To Use AnimateDiff...
Consistent Characters in ANY pose with ONE Image! - • Reposer = Consistent S...
Installing Anaconda for MS Windows Beginners - • Anaconda - Python Inst...

Published: 4 Oct 2024

Comments: 79
@jimdelsol1941 · 6 months ago
That one is fantastic!
@NerdyRodent · 6 months ago
Yeah, they did really well!
@ultimategolfarchives4746 · 6 months ago
Earlier, I installed the nodes but didn't get around to trying them out. Now, you're making me regret not giving them a go! 😂😂
@Steve.Jobless · 6 months ago
Dude, this is what I've been waiting for since Style Aligned came out.
@AustralienUFO · 6 months ago
This is what I've been waiting for since DeepDream dropped
@kariannecrysler640 · 6 months ago
My Nerdy friend 🤘🥰 seed starting this week for my salad garden 😁
@NerdyRodent · 6 months ago
😀
@Main267 · 6 months ago
5:30 Have you seen Marigold depth yet? It's so super crisp and clean for most of the images I threw at it. Only downside is that whatever the base image is it will work best at 768x768, but you can rescale it back up to the base image size after Marigold does its magic.
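The rescaling trick this comment describes can be sketched like so. It is purely illustrative: `depth_fn` is a hypothetical stand-in for the Marigold depth estimator, and 768x768 is the sweet spot the commenter mentions.

```python
from PIL import Image

def depth_via_768(img, depth_fn):
    # Marigold reportedly works best near 768x768: downscale first,
    # run the depth estimator, then rescale the map back to the
    # original image size afterwards.
    w, h = img.size
    small = img.resize((768, 768), Image.LANCZOS)
    depth = depth_fn(small)  # depth_fn: hypothetical Marigold call
    return depth.resize((w, h), Image.LANCZOS)
```

In ComfyUI the same idea is two resize nodes bracketing the depth node.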
@attashemk8985 · 6 months ago
Looks better than IPAdapter, cool. Sometimes you don't have a dozen photos of something made from clouds to train a style.
@andyone7616 · 6 months ago
Is there a version for Automatic1111?
@GamingDaveUK · 6 months ago
A couple of years ago there was a website that let you upload an image and apply its style to another image, so you could upload a plate of spaghetti and then a photo of your mate, and you had a mate made of spaghetti... this reminds me of that. Gonna have to add it to ComfyUI (and fully watch this video) on my day off :)
@unknownuser3000 · 6 months ago
This looks incredible... if I don't have to train for 100s of hours...
@GfcgamerOrgon · 6 months ago
Nerdy Rodent is great!
@swannschilling474 · 4 months ago
Damn this went under the Radar!! Gotta test it!! 😊
@craizyai · 6 months ago
Hi! Please upload the ControlNet Depth example. The ExponentialML GitHub has taken it down :(
@mr.entezaee · 6 months ago
I could not recreate this workflow from the video. Please make it free if possible.
@edwardwilliams2564 · 6 months ago
If I were to guess, I'd say that the workflow not working as well with the 1.5 version was due to the model used for the style transfer not being trained on 512x512 images.
@ronnykhalil · 6 months ago
good jeebus there goes my evening!
@bilalalam1 · 6 months ago
Automatic1111 Forge?
@NerdyRodent · 6 months ago
Give it a few days - it's brand new! XD
@havemoney · 6 months ago
@@NerdyRodent Will wait!
@plexatic5558 · 4 months ago
Hey there, so I was also confused. For me it didn't work at all when I installed it, so I dug into the code and fixed it. I also added some new settings. The code was merged a while ago, so definitely give it another shot! If you do, note that there are 3 blocks; each block can use the attention swapping, and each block can be configured to skip the swapping for the first n layers inside it (analogous to the paper). This is cool because it lets you control a bit better whether there should be a little content leakage, and whether the style should be a bit stronger or weaker. Let me know if you have any issues or suggestions for changes!
@NerdyRodent · 4 months ago
Nice!
@ProcXelA · 4 months ago
didnt work. fix it agane!
@mufeedco · 6 months ago
Great video. Thank you.
@androidgamerxc · 6 months ago
What about Automatic1111?
@dogvandog · 6 months ago
I think something broke with the ComfyUI extension 2 days ago, because this is just not working.
@unknownuser3000 · 6 months ago
Not for Automatic1111?
@lmlsergiolml · 6 months ago
Super cool technique! Can someone explain to me where to start? There is so much info, and it's a bit overwhelming for me
@NerdyRodent · 6 months ago
Check the links in the video description!
@Pending22 · 6 months ago
Top content as always! 👍 Thx
@AnnisNaeemOfficial · 6 months ago
Thanks. I just tried it and am not getting the same results as you. Not even close. Images look mutilated... I've double, triple checked my work and reviewed the github. Seems to me like this is only working in extremely specific scenarios?
@hamtsammich · 6 months ago
I'm having a hard time getting my head around comfyui. I'm sure it's not all that hard, but I've grown accustomed to the command line, or automatic1111.
@bladechild2449 · 6 months ago
I tried the ComfyUI workflow from the GitHub page and it didn't seem to do much at all, until I realized it mostly seems very reliant on piggybacking off the prompts, and gets very confused with anything beyond the basics. If your reference image is vector art and you put in a person's name, it won't take the style at all and just gives a photo of the person.
@contrarian8870 · 6 months ago
@Nerdy Rodent Great stuff. Request: on Patreon, can you release a version with a Canny Controlnet added to the depth Controlnet? I'm not yet at the stage of being able to do this myself...
@NerdyRodent · 6 months ago
Sure, I’ll add a canny one too!
@contrarian8870 · 6 months ago
@@NerdyRodent Thank you!
@contrarian8870 · 6 months ago
@@NerdyRodent Wait, I didn't mean replace Depth with Canny (I can do that) :) I meant: adding a Canny Controlnet on top of the Depth Controlnet within the same workflow, so that both are active. That's the part I can't do yet: chaining two Controlnets in one workflow.
@NerdyRodent · 6 months ago
@@contrarian8870 Oh, for two (or more) ControlNets you can just chain them together, so the two outputs from ControlNet 1 are the inputs to ControlNet 2, e.g. ControlNet 1 -> ControlNet 2 -> etc.
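As a toy model of that chaining (purely illustrative; ComfyUI's real conditioning objects are more involved than this), each "Apply ControlNet" step takes the previous positive/negative conditioning and returns new ones, so the nodes compose naturally:

```python
def apply_controlnet(positive, negative, control_image, strength):
    # Stand-in for ComfyUI's "Apply ControlNet" node: append this
    # control hint to both conditioning lists and pass them along.
    hint = (control_image, strength)
    return positive + [hint], negative + [hint]

# Chain: the two outputs of ControlNet 1 feed ControlNet 2.
pos, neg = apply_controlnet([], [], "depth_map", 0.7)
pos, neg = apply_controlnet(pos, neg, "canny_edges", 0.5)
# pos now carries both hints, so depth and canny are active together.
```

The names `depth_map` and `canny_edges` are placeholders for the preprocessed control images; in the real graph you would also wire in the ControlNet model for each step.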
@contrarian8870 · 6 months ago
@@NerdyRodent OK, thanks.
@Rachelcenter1 · 2 months ago
can you build a workflow that has this style reference in it, for Video to Animation with unsampling?
@DanielThiele · 6 months ago
Do you have a workflow tutorial, or are you interested in making one, that also generates orthogonal views / model sheets from the initial sketch? I know there are things like CharTurner, but so far it always works based on text input only. I assume for you it's super easy. I'm still a noob with ComfyUI.
@MushroomFleet · 6 months ago
1:41 "it's a Gundam" :)
@kex0 · 6 months ago
Which is a robot.
@Paulo-ut1li · 6 months ago
Not working so well on Comfy yet :(
@pmtrek · 5 months ago
What extensions did you use for the BLIP nodes, please? I have installed both comfy_clip_blip_node and ComfyUI_Pic2Story, but neither shows up like yours :/
@blacksage81 · 6 months ago
Hm, I can use this to force my vehicle design generations into sketches for Vizcom, which may give me cleaner results to take into TripoSR, which may give me good 3D reference models. My body is ready.
@twilightfilms9436 · 6 months ago
Would it work with batch sequencing for video? How about consistency?
@Omfghellokitty · 6 months ago
The import keeps failing, and when I try to install the requirements, triton or whatever fails.
@nioki6449 · 6 months ago
After installation I got "module for custom nodes due to the lack of NODE CLASS MAPPINGS." Can somebody help with that?
@steinscamus8037 · 5 months ago
Cool, is there an A1111 version?
@NerdyRodent · 5 months ago
Hopefully we’ll see something in the coming months!
@waurbenyeger · 6 months ago
I've installed the extension using the URL from Git like I've done for every other extension, but I'm not seeing anything new in the interface. I'm also using Forge... is this only available on the HF website right now, or? I'm lost. Where is this supposed to pop up when you install it?
@DemShion · 6 months ago
Can't seem to get this to work with SDXL; can anyone confirm that it is still working after the updates?
@MrPrasanna1993 · 6 months ago
How much VRAM does it require?
@SF8008 · 1 month ago
Can this work on Vid2vid?
@SasukeGER · 5 months ago
Do you have this workflow somewhere? :O
@NerdyRodent · 5 months ago
Sure! You can grab this one and more at www.patreon.com/NerdyRodent !
@DemShion · 6 months ago
Does this only work with 512x512?
@NerdyRodent · 6 months ago
Nope!
@DemShion · 6 months ago
@@NerdyRodent Then I must be doing something wrong: when I use a reference image with any dimensions other than 512x512, I get an image identical to the one I would get without visual style prompting. The idea is extremely cool, and the example results in both your video and the paper are amazing, but for some reason it seems to be a very obscure feature; in the communities I'm part of, most people haven't even heard of it and can't offer help troubleshooting.
@NerdyRodent · 6 months ago
@@DemShion my guess would be that perhaps you need to update everything?
@mr.entezaee · 6 months ago
How do I install the missing node types? ImageFromBatch
@mr.entezaee · 6 months ago
Essential nodes that are weirdly missing from ComfyUI core.
@mr.entezaee · 6 months ago
ImageFromBatch Nodes that have failed
@RahulGupta1981 · 6 months ago
How are your 3 conditions automatically getting picked in Apply Visual Style Prompting? In my case it's always taking the reference image prompt as the positive condition for the style prompt and renders only fire :) However, it's a pretty good one.
@toothpastesushi5664 · 6 months ago
Doesn't work for most cases.
@ultimategolfarchives4746 · 6 months ago
Same for me... we need to prompt it extremely well to get good results.
@toothpastesushi5664 · 6 months ago
@@ultimategolfarchives4746 I don't think prompting is the problem; it's that it is only seldom able to separate style from subject matter. It works perfectly for origami (as long as you put in one animal and ask for another animal), but in most other cases it won't work (after all, it seems to be based on a hack in latent space; were it to work correctly, it would be a major breakthrough and big news by now).
@icedzinnia · 6 months ago
👍
@Rachelcenter1 · 2 months ago
Can you post the workflow?
@LouisGedo · 6 months ago
👋
@BondBacon · 6 months ago
Do facts and logic still destroy carnists?
@LilShepherdBoy · 6 months ago
Jesus Christ loves you 💙
@kariannecrysler640 · 6 months ago
You speak for gods? How special you are.
@lambgoat2421 · 6 months ago
@@kariannecrysler640 I mean isn't that kind of Jesus' whole thing?
@LilShepherdBoy · 6 months ago
"For God so loved the world that he gave his one and only Son, that whoever believes in him shall not perish but have eternal life."
@ProcXelA · 4 months ago
didnt work at all
@NerdyRodent · 4 months ago
You’ll need to install the version used in the video (image, not latent in) as the developer later updated the node… but also broke it.