
ComfyUI - ReVision! Combine Multiple Images into something new with ReVision! 

Scott Detweiler
54K subscribers · 48K views

If you caught the stability.ai Discord livestream yesterday, you got the chance to see Comfy introduce this workflow to Amli and myself. The idea is that you can take multiple images, have the CLIP vision model reverse-engineer them, and then use those embeddings to create something new! You can do this with photos, MidJourney images, DreamStudio output, or your own local AI art imagery. This is a simple workflow for ComfyUI, and you do not need any custom nodes to make it happen. However, you will need the latest CLIP model, which I link below.
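For anyone wiring this up by hand, here is a rough sketch of what such a graph can look like in ComfyUI's API (JSON) format. This is an illustration, not the exact workflow from the video: the node class names (CheckpointLoaderSimple, CLIPVisionLoader, CLIPVisionEncode, unCLIPConditioning, ConditioningZeroOut) are standard ComfyUI nodes, but the file names, strength values, and wiring are assumptions.

```python
import json

# Sketch of a ReVision-style graph in ComfyUI API format. Links are
# ["node_id", output_index]. KSampler/VAEDecode/SaveImage omitted for brevity.
prompt = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "2": {"class_type": "CLIPVisionLoader",
          "inputs": {"clip_name": "clip_vision_g.safetensors"}},
    "3": {"class_type": "LoadImage", "inputs": {"image": "first.png"}},
    "4": {"class_type": "LoadImage", "inputs": {"image": "second.png"}},
    "5": {"class_type": "CLIPVisionEncode",
          "inputs": {"clip_vision": ["2", 0], "image": ["3", 0]}},
    "6": {"class_type": "CLIPVisionEncode",
          "inputs": {"clip_vision": ["2", 0], "image": ["4", 0]}},
    "7": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": ""}},
    "8": {"class_type": "ConditioningZeroOut",
          "inputs": {"conditioning": ["7", 0]}},
    # Chain both image embeddings onto the (zeroed) text conditioning;
    # strengths are illustrative, not values from the video.
    "9": {"class_type": "unCLIPConditioning",
          "inputs": {"conditioning": ["8", 0], "clip_vision_output": ["5", 0],
                     "strength": 0.7, "noise_augmentation": 0.0}},
    "10": {"class_type": "unCLIPConditioning",
           "inputs": {"conditioning": ["9", 0], "clip_vision_output": ["6", 0],
                      "strength": 0.7, "noise_augmentation": 0.0}},
}
payload = json.dumps({"prompt": prompt})
# POST payload to http://127.0.0.1:8188/prompt to queue it (server not shown).
print(len(prompt))  # → 10
```

You can also build the same graph in the UI and use Save (API Format) to get this JSON directly.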
#stablediffusion #comfyui #revision #sdxl
If you are confused, check out the Comfy SDXL graph basics here: • SDXL ComfyUI Stability...
Grab the CLIP model here from huggingface (OFFICIAL):
huggingface.co/laion/CLIP-ViT...
Stability Official Discord: / discord
#stablediffusion #sdxl #comfyui #img2img
Grab some of the custom nodes from civit.ai: civitai.com/tag/comfyui
Grab the SDXL model from here (OFFICIAL): (bonus LoRA also here)
huggingface.co/stabilityai/st...
The refiner is also available here (OFFICIAL):
huggingface.co/stabilityai/st...
Additional VAE (only needed if you plan not to use the built-in version)
huggingface.co/stabilityai/sd...

Published: 18 Aug 2023

Comments: 149
@jagsdesign 10 months ago
Hello Scott, super awesome video and simple, cool explanations that make this workflow delicious to the eye and viewer! Thanks a lot for exploring this in a more educative way so one can keep thinking and improvise the workflow for individual needs! REVITOLOGY explained cool!
@AIMusicExperiment 10 months ago
This is GENIUS! I have been lerping latents to try to accomplish this sort of thing, but this works so much better! Every one of your videos proves to be super valuable; you are an asset to the community! By the way, your little sidetrack about ComfyUI being intended as a backend would have been a great time to shout out the great work that your co-worker "McMonkey" is doing with Stable Swarm!
@sedetweiler 10 months ago
So true! I just messaged him today with some questions I had. I will do a video on that for sure!
@My123Tutorials 9 months ago
I really like your style of explanation. At first I was intimidated by ComfyUI, but you've done such a good job explaining that I can actually handle this beast of a UI now. So thanks! :)
@sedetweiler 9 months ago
Great to hear! Thank you so much!
@francoisneko 10 months ago
Awesome tutorial! I like that you keep it simple and straight to the point while teaching advanced techniques. Will be sure to check your next videos.
@sedetweiler 10 months ago
Thank you!
@hanskloss726 10 months ago
Just brilliant! Yesterday I spent about two hours, without success, trying to resolve the links between nodes in another "tutorial" on YouTube in which the author shows nothing, with the nodes placed so that nothing was visible. Thank you very much for your work on this and your other tutorials, Scott!
@sedetweiler 10 months ago
Glad it helped!
@appolonius4108 10 months ago
another great video and easy to follow, nice work!
@sedetweiler 10 months ago
Glad you liked it!
@TomekSw 10 months ago
Great video. Thank you!
@sedetweiler 10 months ago
Glad you liked it!
@Neuraesthetic 6 months ago
You are amazing; learning so much from you.
@marschantescorcio1778 10 months ago
Duuuude! You're throwing out SDXL + Comfy heavy-hitters like there's no tomorrow. Very inspirational; more please! Tomorrow I'm biting the bullet and getting it all installed, thanks to you.
@sedetweiler 10 months ago
Welcome to the dark side!
@marjolein_pas 10 months ago
This is absolutely genius. I was able to combine this workflow with ControlNet, prompting, and some fine-tuning, and I'm finally able to create the image in the pose I want, with the style I want! Thank you so much.
@melondezign 8 months ago
Hello, would you share your workflow with the ControlNet nodes added? I had a bit of a rough time trying to add them.
@marjolein_pas 8 months ago
@@melondezign I've replied to your mail. Greets
@melondezign 8 months ago
Yes, thank you @@marjolein_pas !
@SamBeera 7 months ago
Hi, I would much appreciate it if you could share this workflow with ControlNet. Thank you.
@marjolein_pas 7 months ago
How can I reach you? @@SamBeera
@DanielPartzsch 10 months ago
So good ❤
@sedetweiler 10 months ago
Thanks!
@nio804 10 months ago
For people who want a nicer UI for ComfyUI, ComfyBox is pretty nice. I've pretty much built the basic features of A1111 in it, and then some; I have an HR fix that uses the Tile ControlNet instead, and the workflows use AITemplate wherever feasible for a frankly ridiculous speedup in gen time. The graph is a mess, though. But it works.
@sedetweiler 10 months ago
Okay, I checked it out totally expecting to roll my eyes, but WOW dude! You have something pretty fantastic here! If you can add in the images as thumbnails for inputs where an imageload is involved, you have an amazing thing here! I left a comment as a FR on your git. Nice work, and I will do a video on this soon!
@robbdeeze 10 months ago
Is there a Google Colab folder for ComfyBox?
@bordignonjunior 5 months ago
Great content !!!!!!!!!!!
@ATLJB86 10 months ago
I love things that challenge the creative mind!
@sedetweiler 10 months ago
Indeed!
@balduron97 5 months ago
Thanks so much for your tutorials! I'm stuck at this episode, generating so many beautiful and funny pictures :D I just got 2 thumbs up - I'm sure I can generate a picture with more than the 10 thumbs up you earn :D
@melondezign 8 months ago
This is really fantastic, thank you for this workflow. In the end, though, I miss the integration of ControlNet; I tried in different ways but ended up with a lot of errors... Would you have some hints for adding ControlNet to the process? Thanks again!
@DanielPartzsch 9 months ago
Great, thanks for this. Does this only work for SDXL models, though? When trying 1.5 models with this setup I've had no luck so far getting it to work properly.
@GabiVegas-gz7tb 6 months ago
Big like, thanks 🙂
@sedetweiler 6 months ago
Thanks for the visit
@Calbefraques 10 months ago
This gets the kernels within the images, it's a kind of essence compression
@sedetweiler 10 months ago
Yes, that is a nice way to say it.
@ghostsofdetroitunderground6367 5 months ago
What exactly do you download to get CLIP Vision to work? What file am I looking for to install? Thanks.
@grahamastor4194 3 months ago
Just found your channel and the ComfyUI playlist, working my way through them now. How can I get a text version out of CLIP Vision to see what it gets out of the images? Many thanks.
@aminbehravan 9 months ago
Amazing and simple, thank you very much. Please make a tutorial on how I can put the output into a sequence of images so I can animate them. 🥇🥇
@darkesco 10 months ago
Awesome video. The file listed was the pytorch model, not clip_vision, and I put it in ComfyUI\ComfyUI_windows_portable\ComfyUI\models\clip_vision. I'm still new, so just learning rn.
@hleet 9 months ago
Very well explained, thank you. By the way, you didn't use the special SDXLClipTextEncode, so it's not mandatory if you don't want to use the refiner?
@sedetweiler 9 months ago
You don't get the benefit of the full clip conditioning or the 2 clip models, but it's fine for demonstration.
@rsunghun 10 months ago
Thank you for the video. Where is your patrons page? It's not in the description.
@sedetweiler 10 months ago
It is part of YouTube membership; I don't use Patreon. I might switch back, as this is a complete pain, but having it on one site is nice. ru-vid.com/show-UC9kC4zCxE-i-g4GnB3KhWpAjoin
@dagkjetsa8486 2 months ago
Great video Scott! Thanks! In case anyone is listening though: when I set things up like in this video, everything works at first, but then, at some point all my images are just B&W. Any idea as to why this is happening?
@coryscott85 10 months ago
Scott, is there a way to save custom nodes? We change the typical 512x512 parameters to 1024x1024 quite often... I'm sure there are even better examples, but it would be pretty sweet if you could save that as a custom node with 1024 as the default parameters so you don't have to repeat that step every time you work with SDXL.
@sedetweiler 10 months ago
Yes, there are methods to set defaults. That and some other node housecleaning tips are coming soon!
@LeKhang98 9 months ago
You can use Nested Node Builder to save them as a new node and then load them when you want, also unnest them if needed. Very convenient.
@0A01amir 10 months ago
Ah, like Photoshop AI. Hope my PC can handle it, at least with SD 1.5. Amazing tut like always.
@sedetweiler 10 months ago
Thank ya!
@ItsCjhoneycomb 10 months ago
Thanks for the lesson... but why not just save and share it as a workflow?
@sedetweiler 10 months ago
I did, but that is in the member posts for sponsors. Gotta feed those that help feed the family! ;-)
@ThoughtFission 10 months ago
Really great stuff Scott. A couple of questions for you. Is there a university level course or program (with exams etc) that allows some sort of certification for this stuff? I really want to switch to this full time but I think some formal training would be useful. And I guess that leads to a second question; where do you look for jobs in this field?
@sedetweiler 10 months ago
Naa, this is all new to the world! It's a bit bleeding edge for them.
@ThoughtFission 10 months ago
@@sedetweiler lol, ok, thanks
@jhj6810 21 days ago
I have an important question: why doesn't an empty positive prompt do the same thing as ConditioningZeroOut?
@AlexDisciple 3 months ago
Hey Scott, thanks for this. I'm getting an error: "Error occurred when executing KSampler: The size of tensor a (1024) must match the size of tensor b (1280) at non-singleton dimension 1". I did use an Image Resize node after my two Load Image nodes to make sure they are both 1024x1024, but no dice. What do you think that could be?
@creartives 10 months ago
I'm just getting random junk, like most people say in the comments. Also, the clip file in the video is different from the one in the link. Maybe that is the problem, or I am doing something wrong; no idea. Anyway, it's a great video. Thanks for sharing.
@cunningfilms8728 9 months ago
How would your node tree look for doing image sequences/batch images in ComfyUI?
@abeerik5591 10 months ago
I am getting no result, just a blank solid color. What could be the reason? Attaching a screenshot below; please help.
@alexanderalexandrov3972 10 months ago
Hi, thanks for the awesome content. What is Save (API Format)? How does it work? May I have a link? 🙂
@sedetweiler 10 months ago
that is coming soon!
@logankidd4184 10 months ago
where is your clip vision model located? I'm in models/clip_vision and having no luck
@logankidd4184 10 months ago
Got it working: it needs to be in the extension's models/clip_vision folder, not the main webui models folder, if anyone else has the same problem.
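Several comments in this thread converge on the same fix; as a rough sketch (paths assumed from the thread, adjust to your own install), placing the downloaded CLIP vision file looks like this:

```python
from pathlib import Path
import shutil

# Assumed paths based on the comments above; adjust to your install location.
comfy_root = Path("ComfyUI_windows_portable") / "ComfyUI"
clip_vision_dir = comfy_root / "models" / "clip_vision"
downloaded = Path("open_clip_pytorch_model.bin")  # do NOT extract the .bin

clip_vision_dir.mkdir(parents=True, exist_ok=True)
if downloaded.exists():
    # Move the whole file as-is into the clip_vision folder.
    shutil.move(str(downloaded), clip_vision_dir / downloaded.name)
print(clip_vision_dir)
```

Restart ComfyUI afterwards so the CLIPVisionLoader node picks up the file.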
@junejane6655 8 months ago
Hello, let me ask you a question. The prompt doesn't seem to have any effect on this ReVision operation; is there any way the prompt can be involved in the image combination?
@sedetweiler 8 months ago
Not using this method. However, the IPadapter might be the tool you want. I did a video on that recently as well.
@junejane6655 8 months ago
@@sedetweiler thank you so much. I love your voice.
@sedetweiler 8 months ago
Aww, thank you!
@xq_le1t0r97 8 months ago
Could you use XY grids with those 2 parameters? that would be cool
@sedetweiler 8 months ago
Hmm, perhaps! I will have to play with it.
@kalisticmodiani2613 10 months ago
you could probably use the negative prompt to get the opposite of an image :D
@sedetweiler 10 months ago
Always worth a try!
@ysy69 8 months ago
Would you know where I can download the safetensors version of the CLIP-ViT-bigG-14-laion2B-39B-b160k model like the one you have? Also, is there a node that can output the actual caption generated by CLIP?
@sedetweiler 8 months ago
I don't believe there is a safetensors version.
@ysy69 8 months ago
@@sedetweiler really? it says clip_vision_g.safetensors in your node
@nexusbible1111 9 months ago
Man, I wish I could get this to work like yours. Mine just produces garbage. I gave it an angel and a stained-glass window and it produces a very pretty sunset. Ordinarily I get really good images, but reading another image seems to be too much for it.
@PodRED 9 months ago
Are you using an XL checkpoint? You might get garbage if you're trying this with 1.5
@m4dbutt3r 10 months ago
@sedetweiler side question: how do you get Reroute to auto-populate the slot name with the name of the incoming signal? Reroute doesn't make anything clearer if it's not labeled, and it takes an annoying amount of time to right-click the right circle and type in a slot name.
@sedetweiler 10 months ago
It's probably related to the manager. It has some additional features like the name on the node, etc.
@jdsguam 10 months ago
I cannot find any way of auto-naming the reroute node. The Manager didn't help me.
@PodRED 9 months ago
One of the custom node packs has a labelled reroute if all else fails. I think it's pythongosssss's nodes, which you can get from the manager. It also includes a lot of other awesome features.
8 months ago
@@jdsguam I just found out how: right-click on the Reroute node and choose "Show Type By Default", and next time the name will be added automatically.
@WhatsThisStickyStuff 3 months ago
I'm not sure why, but for some reason when I do this it doesn't make any change to my image.
@thomasanderson9351 10 months ago
Is it possible to turn the conditioning back into a prompt?
@sedetweiler 10 months ago
Interesting question! I will have to see what is involved.
@PodRED 9 months ago
Is there a compelling reason to grab openclip and the clipvision-g model?
@sedetweiler 9 months ago
Yeah, we will use them anytime we want to "query" an image we are uploading into the workflow and using it as a prompt.
@PodRED 9 months ago
@@sedetweiler Sorry, I meant: is there a reason to have both rather than just one?
@robbdeeze 10 months ago
Where can I download the photos so I can load the graphs in Comfy?
@sedetweiler 10 months ago
They are posts in RU-vid for channel members at the Sponsor level or higher.
@kpr2 6 months ago
Weird. Started from scratch twice now and triple-checked everything, yet I'm not getting the same sort of results. My images are generating, but they don't seem to be pulling from or blending the source images, just making something completely new. I'll keep tinkering...
@falk.e 6 months ago
I have the same issue. It doesn't use the source images, just the model and prompts.
@kpr2 6 months ago
@@falk.e It's been a minute since I looked at that workflow, but I know I figured it out eventually :D I'll have a look shortly and let you know if I can remember just what it was I did to solve it. I've learned a lot fiddling with it over the last week or so :)
@imgoingtobuygoogle 9 months ago
How do you use the CLIP model? I downloaded it and it's just a bunch of data. I can't figure out what I'm supposed to do with the files. (Not really a programmer and new to this, btw.)
@sedetweiler 9 months ago
CLIP was trained to recognize the relationship between photos and words (on a very basic level). You will never look at these files directly.
@imgoingtobuygoogle 9 months ago
@@sedetweiler Right, but I don't have a single file in the download; it's just a folder with tons of files labeled "data". The other ones (the SDXL files) are a single file with a .safetensors extension. Edit: I figured out what the issue was. I extracted the files from the .bin file instead of just leaving it alone... If I just paste the .bin file directly into the proper folder, it works fine. Hopefully this helps someone who makes the same mistake in the future!
@patheticcoder4081 6 months ago
Is it normal to get the error "RuntimeError: The size of tensor a (1024) must match the size of tensor b (1280) at non-singleton dimension 1" when using another clip vision model?
@urekmazino2086 5 months ago
You need to resize the source clip
@blacktilebluewall 3 months ago
@@urekmazino2086 What is that?
@ReheatedDonut 3 months ago
Does anyone know a solution similar to this for SD1.5? Thanks. *cries in NVidia 1050*
@yodelinggoethe1147 4 months ago
I know nothing about this stuff, but I'm trying to piece it all together with videos like these. I'm doing everything like you, but I'm getting an error: "Error occurred when executing KSampler: The size of tensor a (768) must match the size of tensor b (1280) at non-singleton dimension 1". Can anybody help me with that?
@blacktilebluewall 3 months ago
Yeah, I got the same error.
@ollefors3686 2 months ago
@@blacktilebluewall My solution was to install clip_vision_g.safetensors from the Manager and choose it in Load CLIP Vision.
@LouisGedo 10 months ago
👋
@sedetweiler 10 months ago
:-)
@thatguy9528 1 month ago
I have it set up exactly and I get garbage. I don't know what I'm doing wrong.
@Enricii 10 months ago
Unfortunately I'm getting out of memory error because of that clip vision file :(
@sedetweiler 10 months ago
You can download a smaller version of it, but I am not sure it is official.
@abdallahalswaiti 10 months ago
hi could you please add the nodes map for your videos 🥺
@sedetweiler 10 months ago
They are added for the Sponsors of the channel in the youtube posts.
@andu896 1 month ago
You mentioned you use Revision, but nowhere in the video do I see revision.
@gordonbrinkmann 10 months ago
I'm sorry I'm completely lost... which of those files under the CLIP link do I need to download and where do I put them?
@sedetweiler 10 months ago
It's the link in the description for the clip model. Put it in the clip_vision model folder.
@gordonbrinkmann 10 months ago
@@sedetweiler Yes, I've seen the link... but all the files? I usually just download checkpoints, where it's always a single file. So all the bin files in this case?
@cosmicstuff44 10 months ago
@@gordonbrinkmann I'm wondering the same thing... UPDATE: just put the whole .bin file in the models/clip_vision folder; that worked here. When launching ComfyUI after that, it loaded a lot of stuff. After re-creating the workflow Scott first shows us, the Queue Prompt button did not work. I saved the workflow, closed the browser and the CMD window, came back in, and it works fab!
@claffert 10 months ago
@@sedetweiler In the video you used the model "clip_vision_g.safetensors", though the description link points to "open_clip_pytorch_model.bin". I found the first one myself, and both seem to work. Is there a reason for the change?
@TR-707 10 months ago
This is random af for me and has NOTHING to do with the input images, so it feels really lame to use. My setup is like yours. It feels like XL is JUST for making the generic AI art that you see everywhere now. (But thank you for the tutorial and awesome video + pacing! Subbed.) Even the included workflows don't work at all; I start with a picture of a human and end up with a duck in a pond that has zero connection to the initial pic. Update: it's the Fooocus KSampler that was breaking everything, a custom node that replaces certain functions even if it is not used... wow.
@MrGingerSir 10 months ago
I'm also getting extremely random images from this. exactly the same setup, completely updated on all fronts... just random stuff like a dog or cars when putting in portraits of people, and nothing close to the style of the images. Did you figure this out, or did you give up?
@sedetweiler 10 months ago
I would focus on the strength. Working closer to .6 and .7 seems to work better than aggressive values in there.
@MrGingerSir 10 months ago
@@sedetweiler it had nothing to do with that. for some reason the Fooocus custom node doesn't mix with this, even if you're not using the node in the workflow. It has to be completely uninstalled for this to work.
@fredpourlesintimes 9 months ago
Doesn't work at all.
@Bartetmedia 10 months ago
😴
@sedetweiler 10 months ago
Didn't like it?
@tannerochel 10 months ago
I get errors when using "clip_vision_g.safetensors", and "open_clip_pytorch_model.bin" generates mostly random junk, only using similar colors from the source images. Not sure what I'm doing wrong; same results when using revision-image_mixing_example.json from HuggingFace.
@TheSmurfboard 10 months ago
Same here; only the manual prompt works. Also wondering what the difference between those two files is, considering the huge size difference.
@sedetweiler 10 months ago
Use the link to get the correct clip from 🤗. You probably have the wrong clip model.
@JLocust 10 months ago
I had the same issue with random junk. It's not the CLIP vision model; you need to update your version of ComfyUI.
@TheSmurfboard 10 months ago
Yep! Thanks @@JLocust
@PaulFidika 10 months ago
@@JLocust I had the same issue; clip_vision_g loading errors or just outputting random nonsense images. Updating ComfyUI fixed it, thanks.
@beveresmoor 10 months ago
Sadly, I got an error message instead 😂 It said something about a size mismatch for vision_model.embeddings.position_embedding.weight, and it is very long too.
@MitrichDX 10 months ago
Where do you get "ConditioningZeroOut"?
@sedetweiler 10 months ago
It should be in the standard nodes; I just search for it.
@MitrichDX 10 months ago
@@sedetweiler I don't have one. Moreover, there is an Image Resize node I also had at a certain point in time, but then it disappeared somewhere and I can't figure out how to get it back... Can you make a lesson on how to restore standard nodes?
@sedetweiler 10 months ago
I would just do a reinstall. It's probably good if things are wacky.