
ComfyUI IPAdapter V2 style transfer workflow automation  

PixelEasel · 7K subscribers
8K views

ComfyUI IPAdapter update
@latentvision - huge thanks to Matteo, the creator!!!
In this video we'll see how to transfer a style from one image to other images with the help of IPAdapter V2. Since we want the style mainly on the background, we'll also work with masks. In addition, we'll see how the entire workflow can be automated.
#comfyui #stablediffusion #ipadapter #mask #automation
follow me @pixeleasel
workflow: drive.google.c...
juggernaut model: civitai.com/mo...
IPAdapter GitHub: github.com/cub...

Published: 27 Sep 2024
Comments: 52
@knightride9635 · 5 months ago
Subscribed! Nice workflow, thank you
@PixelEasel · 5 months ago
thanks!
@cuocsongtuoidep91 · 3 months ago
Nice tutorial, thank you.
@PixelEasel · 3 months ago
nice comment!
@ronnykhalil · 5 months ago
thank you! super helpful and just what I was looking for
@PixelEasel · 5 months ago
🥳 nice to know! thanks for the comment!
@pto2k · 5 months ago
Great workflow idea, thanks. I had the same error with BatchCLIPSeg that others mentioned. I found it can be replaced with the CLIPSeg Masking node (WAS Node Suite) plus the ToBinaryMask node (ImpactPack); that combination functions the same, I think.
@PixelEasel · 5 months ago
thanks a lot! I'll check it out
@jorgvollmer5971 · 5 months ago
Same here, it works well with CLIPSeg Masking.
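For readers who want the same workaround outside ComfyUI: a minimal sketch, assuming the Hugging Face "CIDAS/clipseg-rd64-refined" checkpoint. The input filename, the "background" prompt, the 0.5 threshold, and the 832x1216 target size are illustrative assumptions, not from the video.

```python
import torch
import torch.nn.functional as F
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

image = Image.open("input.png").convert("RGB")  # hypothetical input image
inputs = processor(text=["background"], images=[image], return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # CLIPSeg segments at a fixed 352x352

# CLIPSeg Masking step: turn logits into a soft probability mask
probs = torch.sigmoid(logits).reshape(1, 1, 352, 352)

# ToBinaryMask step: hard-threshold the soft mask
binary = (probs > 0.5).float()

# upscale back to the generation resolution before using it as a mask
mask = F.interpolate(binary, size=(1216, 832), mode="nearest")[0, 0]
```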
@FeyaElena · 5 months ago
Thanks for the informative video! I am getting: "There was an error while executing BatchCLIPSeg: The input and output should have the same number of spatial dimensions, but the input received a tensor with spatial dimensions [1, 352, 352] and the output dimension is (832, 1216). Please provide the input tensor in the format (N, C, d1, d2, ..., dK) and the output dimension in the format (o1, o2, ..., oK)." Can you tell me what I need to fix?
@PixelEasel · 5 months ago
did you change anything? upload an image?
@jomiller7332 · 5 months ago
Yep, I get the same message. It must be something with BatchCLIPSeg, because without that node it's fine.
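The message is reproducible outside the node: torch.nn.functional.interpolate requires a 4D (N, C, H, W) input when given a two-dimensional target size, and BatchCLIPSeg is evidently passing a 3D tensor. A small sketch of the mismatch and of the missing-dimension fix, with the shapes taken from the errors quoted above:

```python
import torch
import torch.nn.functional as F

mask = torch.rand(1, 352, 352)  # 3D, as in the error message
target = (832, 1216)            # the generation resolution

# A 3D input is read as (N, C, L) with a single spatial dimension, so a
# two-dimensional target size raises the ValueError quoted above.
try:
    F.interpolate(mask, size=target, mode="bilinear", align_corners=False)
except ValueError as err:
    print(err)

# Adding the missing batch dimension makes the shapes agree.
fixed = F.interpolate(mask.unsqueeze(0), size=target,
                      mode="bilinear", align_corners=False)
print(fixed.shape)  # torch.Size([1, 1, 832, 1216])
```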
@3dpixelhouse · 5 months ago
Fantastic workflow! But... where can I find the BatchCLIPSeg node? The Manager doesn't find it.
@PixelEasel · 5 months ago
thanks! here's the link github.com/kijai/ComfyUI-KJNodes
@ZergRadio · 5 months ago
At 1:52, where do I get this 1.5 "model XL.safetensors" file, and where do I put it? When I have the Load CLIP Vision node up and use the pull-down menu, there is no "model XL"
@PixelEasel · 5 months ago
you need to put the CLIP Vision models in this folder: ComfyUI\models\clip_vision, and here you can find all the needed models
@ZergRadio · 5 months ago
@@PixelEasel Thanks for the reply. I have 4 CLIP models in the right folder. But my other question is: where do I get the model called "model.safetensors"? I do not have a model by that name in the folder. So where can I download the specific models you show at 1:52? I see there are two models named "model XL.safetensors" and "model.safetensors", and I do not have them in my folder.
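The pull-down simply lists whatever files sit in ComfyUI\models\clip_vision, so "model.safetensors" in the video is most likely a renamed download. A hypothetical sanity check, assuming the two files the ComfyUI_IPAdapter_plus README suggests (your filenames may differ):

```python
from pathlib import Path

# Hypothetical check that the CLIP Vision weights are where ComfyUI looks.
clip_vision_dir = Path("ComfyUI/models/clip_vision")
expected = [
    "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors",     # SD1.5 and most SDXL adapters
    "CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors",  # SDXL ViT-bigG adapters
]
for name in expected:
    state = "found" if (clip_vision_dir / name).exists() else "missing"
    print(f"{state}: {name}")
```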
@AlexDisciple · 5 months ago
Another question: what does "stop_at_clip_layer = -1" do?
@PixelEasel · 5 months ago
it sets the last CLIP layer used for the diffusion process; in other words, -1 doesn't skip any CLIP layer. it's a parameter usually provided by the creator of the checkpoint
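A rough illustration of what the parameter indexes, using the raw CLIP text encoder: -1 takes the final hidden layer, -2 the one before it (what other UIs call "clip skip 2"). The prompt and the SD1.5-class encoder here are illustrative assumptions:

```python
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

tokens = tokenizer("a cat in watercolor style", return_tensors="pt")
with torch.no_grad():
    out = encoder(**tokens, output_hidden_states=True)

stop_at_clip_layer = -1  # the CLIPSetLastLayer value: -1 = use the last layer
embeds = out.hidden_states[stop_at_clip_layer]
print(embeds.shape)      # (1, seq_len, 768)
```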
@AlexDisciple · 5 months ago
Thanks for this. I get the same error with BatchCLIPSeg: "Input and output must have the same number of spatial dimensions, but got input with spatial dimensions of [352] and output size of (832, 1216). Please provide input tensor in (N, C, d1, d2, ..., dK) format and output size in (o1, o2, ..., oK) format."
@PixelEasel · 5 months ago
I've received a few more reports from people who were unable to get this node working. I still haven't found an answer, unfortunately
@Gifttv204 · 5 months ago
Please make a video on a background colour-matching workflow ❤
@PixelEasel · 5 months ago
what do you mean? how to match colors to a new background?
@Gifttv204 · 5 months ago
@@PixelEasel yes
@PixelEasel · 5 months ago
I will...
@RuinDweller · 2 months ago
"Error occurred when executing IPAdapterTiled: Error(s) in loading state_dict for Resampler: size mismatch for proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1024])." :( I've been fighting with this for a week now, can you please help me? I've gotten all the way to the end of the workflow, but I get that error at the "IPAdapter Tiled" node.
@PixelEasel · 2 months ago
try deleting the node from the workflow and bringing it back... sometimes that solves the problem
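If re-adding the node doesn't help: the shapes in the message usually point to a mismatched CLIP Vision / IPAdapter pair; the loaded model was built around 1024-dim image embeddings (ViT-H) while the checkpoint stores a 1280x1280 projection (ViT-bigG). A sketch that merely reproduces the error, with both shapes taken from the message above:

```python
import torch

# current model: proj_in built for 1024-dim image embeddings -> weight (1280, 1024)
proj_in = torch.nn.Linear(1024, 1280)

# checkpoint on disk: trained on 1280-dim embeddings -> weight (1280, 1280)
checkpoint = {"weight": torch.zeros(1280, 1280), "bias": torch.zeros(1280)}

try:
    proj_in.load_state_dict(checkpoint)
except RuntimeError as err:
    print(err)  # size mismatch for weight: ([1280, 1280]) vs ([1280, 1024])
```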
@JoyHub_TV · 5 months ago
In the "IPAdapter Tiled" node I don't have the weight option "Style Transfer (SDXL)", but I have "style transfer" and "style transfer strong". How do I get the "Style Transfer (SDXL)" option?
@PixelEasel · 5 months ago
it's probably because you have the latest version, which is good. you can try both
@naderreda8523 · 5 months ago
unfortunately I got the same error, "Please provide input tensor in (N, C, d1, d2, ..., dK)". any updates?
@PixelEasel · 5 months ago
which node is turning red?
@AlistairKarim · 5 months ago
Short and on point, a really awesome tutorial. Thanks for sharing!
@PixelEasel · 5 months ago
thanks for commenting!!
@35wangfeng · 5 months ago
Awesome work! Thanks for sharing!
@PixelEasel · 5 months ago
thanks for commenting!
@DJVARAO · 5 months ago
Awesome tutorial! Thank you!😁
@PixelEasel · 5 months ago
thanks! awesome comment 😊
@typogram4297 · 5 months ago
Great tutorial. Thank you.
@PixelEasel · 5 months ago
thx!!!
@pedroquintanilla · 5 months ago
Problem:

Error occurred when executing BatchCLIPSeg: Input and output must have the same number of spatial dimensions, but got input with spatial dimensions of [1, 352, 352] and output size of (832, 1216). Please provide input tensor in (N, C, d1, d2, ..., dK) format and output size in (o1, o2, ..., oK) format.

File "F:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "F:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "F:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "F:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-KJNodes\nodes.py", line 2257, in segment_image
    resized_tensor = F.interpolate(tensor, size=(height, width), mode='bilinear', align_corners=False)
File "F:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\functional.py", line 3934, in interpolate
    raise ValueError(
@PixelEasel · 5 months ago
did you change something?
@ojciecvaader9279 · 5 months ago
I have the same problem. What changed: I'm not sure I have the correct CLIP Vision model. I updated IPAdapter lately, and those CLIP Vision models became obsolete
@PixelEasel · 5 months ago
so maybe try to reinstall..
@ideasinspiration8231 · 5 months ago
Thanks for this amazing tutorial! I am running into an error. Following your workflow, once I run it I get an error on BatchCLIPSeg: "Error occurred when executing BatchCLIPSeg: Input and output must have the same number of spatial dimensions, but got input with spatial dimensions of [1, 352, 352] and output size of (832, 1216). Please provide input tensor in (N, C, d1, d2, ..., dK) format and output size in (o1, o2, ..., oK) format." Any idea how to fix it?
@PixelEasel · 5 months ago
thanks! you need to check the size of the image you connect
@ideasinspiration8231 · 5 months ago
@@PixelEasel I am using the workflow I downloaded from you, and the size is the one you set in the workflow.
@pedroquintanilla · 5 months ago
@@PixelEasel The same problem
@PixelEasel · 5 months ago
weird. I'll check if I can figure out why
@hmmrm · 5 months ago
Thanks for sharing!
@PixelEasel · 5 months ago
thanks for commenting 😊