
CONSISTENT VID2VID WITH ANIMATEDIFF AND COMFYUI 

enigmatic_e
35K subscribers · 34K views

Get 4 FREE MONTHS of NordVPN: nordvpn.com/enigmatic
Topaz Labs Affiliate: topazlabs.com/ref/2377/
ComfyUI and AnimateDiff Tutorial on consistency in VID2VID.
HOW TO SUPPORT MY CHANNEL
-Support me by joining my Patreon: / enigmatic_e
_________________________________________________________________________
SOCIAL MEDIA
-Join my discord: / discord
-Twitch: / 8bit_e
-Instagram: / enigmatic_e
-Tik Tok: / enigmatic_e
-Twitter: / 8bit_e
- Business Contact: esolomedia@gmail.com
________________________________________________________________________
My PC Specs
GPU: RTX 4090
CPU: 13th Gen Intel(R) Core(TM) i9-13900KF
MEMORY: CORSAIR VENGEANCE 64 GB
Stabilized Models: huggingface.co/manshoety/AD_S...
My Workflow:
drive.google.com/file/d/1Ph3S...
IP-ADAPTER MODELS:
huggingface.co/h94/IP-Adapter...
CLIP VISION MODELS:
huggingface.co/openai/clip-vi...
huggingface.co/comfyanonymous...
Folders
The CLIP Vision model goes in the ComfyUI/models/clip_vision folder
The IPAdapter model goes in the ComfyUI/custom_nodes/comfyui_ipadapter_plus/models folder
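A quick way to sanity-check that placement, as a sketch (it assumes a default ComfyUI layout; adjust the root path to your install):

```python
# Minimal placement check (paths assume a default ComfyUI install;
# adjust `root` if your ComfyUI lives elsewhere).
from pathlib import Path

root = Path("ComfyUI")
spots = {
    "CLIP Vision model": root / "models" / "clip_vision",
    "IPAdapter models":  root / "custom_nodes" / "comfyui_ipadapter_plus" / "models",
}
for name, folder in spots.items():
    files = sorted(p.name for p in folder.iterdir()) if folder.exists() else []
    print(f"{name}: {folder} -> {files or 'nothing found'}")
```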
0:00 Intro
0:36 Nord VPN
1:16 Workflow Setup
5:16 FreeU
6:59 IPAdapter
10:59 Face Restore
11:45 Controlnets
13:45 Generate
17:21 Upscaling

Entertainment

Published: 20 Jun 2024
Comments: 134

@enigmatic_e · 7 months ago
Get 4 FREE MONTHS of NordVPN: nordvpn.com/enigmatic
@BrandonFoy · 7 months ago
Thanks for making the time to explain and share your workflow, dude! Super appreciate it!
@FCCEO · 5 months ago
Dude! This is exactly what I have been looking for! I love the way you explain things and the stuff you are covering. Thank you so much for sharing this valuable info. Subscribed right away!
@reyniss · 5 months ago
Great stuff, super helpful, finally got to where I wanted in ComfyUI and vid2vid thanks to you!
@RealitySlipTV · 7 months ago
Results look great. So many programs, so little time. Workflow looks nice. Looks like I'll need to deep dive on this at some point.
@conniehe2912 · 3 months ago
Wow, great workflow! Thanks for sharing!
@wpahp · 6 months ago
When I try to open your workflow I get a bunch of missing nodes; how do I install/add those on a Mac? :/ Thanks. The missing ones: ControlNetLoaderAdvanced, CheckpointLoaderSimpleWithNoiseSelect, OpenposePreprocessor, VHS_VideoCombine, LeReS-DepthMapPreprocessor, ADE_AnimateDiffLoaderV1Advanced, IPAdapterModelLoader, IPAdapterApply, PrepImageForClipVision, HEDPreprocessor, FaceRestoreModelLoader, FaceRestoreCFWithModel, VHS_LoadVideo, Integer
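For anyone else hitting this: the usual route is ComfyUI Manager's "Install Missing Custom Nodes" button; the manual equivalent is cloning the node packs into custom_nodes and restarting ComfyUI. A sketch of the manual route (the repo list is my best guess at the packs behind those node names; verify before cloning):

```python
# Manual install sketch (same on macOS/Linux/Windows): clone the custom
# node packs into ComfyUI/custom_nodes, then restart ComfyUI.
# The repo URLs are assumptions based on the node names above.
import subprocess

REPOS = [
    "https://github.com/Fannovel16/comfyui_controlnet_aux",        # OpenPose/HED/LeReS preprocessors
    "https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved",  # AnimateDiff loader nodes
    "https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite",     # VHS_LoadVideo / VHS_VideoCombine
    "https://github.com/cubiq/ComfyUI_IPAdapter_plus",             # IPAdapter nodes
]
for url in REPOS:
    subprocess.run(["git", "clone", url], cwd="ComfyUI/custom_nodes", check=True)
```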
@user-nk4ov2xh4h · 7 months ago
Dude, you’re the best! Glad to see you have more sponsors and advertisers :) All the best to you and your channel 💪
@enigmatic_e · 7 months ago
🙏🏽🙏🏽🙏🏽
@simonzapata1636 · 7 months ago
Your videos are so helpful. Thank you for sharing your knowledge with us. Gracias!
@enigmatic_e · 7 months ago
De nada 👍🏽
@petersvideofile · 7 months ago
Awesome video, thanks so much!
@mynameisChesto · 6 months ago
I cannot wait to start playing around with this. Putting together a PC build for this reason.
@StillnessMoving · 7 months ago
Hell yeah, this is amazing!
@ysy69 · 4 months ago
Great tutorial, thank you
@Distop-IA · 6 months ago
amazing stuff!
@GoodArt · 6 months ago
Thanks dude, you rule. Best tute out.
@Lorentz_Factor · 6 months ago
You can also skip the LoRAs by selecting them and pressing Ctrl+B; it sends the signal through without the LoRA Loader executing its step.
@FullOfHabits · 3 months ago
i love you. thank you so much
@digital_magic · 7 months ago
Awesome, great video... learned a lot 🙂
@graylife_ · 7 months ago
great work man, thank you
@enigmatic_e · 7 months ago
👍🏽 No problem.
@JoeMultimedia · 7 months ago
Amazing, thanks a lot.
@enigmatic_e · 7 months ago
No problem!
@VairalKE · 2 months ago
I liked MDMZ ... till I found this channel. Lovely work. Keep it up.
@user-uv4qe2zs1z · 5 months ago
thank you
@andyguo554 · 7 months ago
Great video! Could you also share the input video and images? Thanks a lot.
@user-jl4ps7qw4p · 7 months ago
amazing
@esferitasoy · 6 months ago
thx
@MultiBraner · 6 months ago
subscribed
@mauriciogianelli1573 · 7 months ago
Is there a way to see a frame of the KSampler's progress before it finishes? I mean, in A1111 you could open the output folder and see the batch's progress before it finished. Thanks!
@keepitshort4208 · 5 months ago
What's the learning curve for ComfyUI? Or is there someone you'd recommend who teaches ComfyUI?
@risasgrabadas3663 · 6 months ago
Which folder do the FaceRestoreModelLoader node's models go in?
@abovethevoid653 · 7 months ago
In the video, your OpenPose preprocessor (titled "DWPose Estimation") has more options than in the workflow, where it's called "OpenPose Pose Recognition" and doesn't have the bbox_detector and pose_estimator options. Did you get that preprocessor from a custom node?
@enigmatic_e · 7 months ago
Does the workflow not have DWPose?
@abovethevoid653 · 7 months ago
@@enigmatic_e It does, but it's not the same node I think. The one in the video has more options.
@enigmatic_e · 7 months ago
@@abovethevoid653 Mmm, not sure why it's different. I wonder if it's an updated or outdated version, maybe.
@nicocro00 · 5 months ago
Where do you run your workflows? Do you use your own desktop, and with what GPUs? Or services like RunPod, etc.?
@enigmatic_e · 5 months ago
I use my own GPU.
@joselitogonzalezgeraldo3286 · 6 months ago
@themightyflog · 5 months ago
Part 2 please.
@Sergatx · 7 months ago
Chingon. I'm noticing more and more people diving into ComfyUI.
@enigmatic_e · 7 months ago
Yeah, I feel it's the best thing at the moment.
@maxpaynestory · 6 months ago
Only rich people with expensive GPUs are diving into it.
@SadPanda449 · 6 months ago
Where do you get the safetensors model for CLIP Vision on your IPAdapter? I can't seem to get it to work. Thanks for this video! It's helped a ton.
@enigmatic_e · 6 months ago
I just included it in the description. Sorry about that.
@SadPanda449 · 6 months ago
Ahhh! You're the best. Thank you!@@enigmatic_e
@SadPanda449 · 6 months ago
@@enigmatic_e Have you gotten this before with IPAdapter? I'm thinking my issue isn't CLIP Vision related now, but thank you so much for adding the file to the description!
Error occurred when executing IPAdapterApply: Error(s) in loading state_dict for ImageProjModel: size mismatch for proj.weight: copying a param with shape torch.Size([3072, 1024]) from checkpoint, the shape in current model is torch.Size([3072, 1280]).
@louprgb5711 · 6 months ago
@@SadPanda449 Hey, I've got the same problem, did you find the solution? Thanks
@Kontaktfilms · 4 months ago
@@enigmatic_e I'm getting the same error as SadPanda...
Error(s) in loading state_dict for ImageProjModel: size mismatch for proj.weight: copying a param with shape torch.Size([3072, 1024]) from checkpoint, the shape in current model is torch.Size([3072, 768]).
Any way to fix this EniE?
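The shape mismatch in this thread is the classic IPAdapter/CLIP Vision pairing problem: the second dimension of proj.weight is the image-encoder embedding size the adapter was trained for (768 = ViT-L, 1024 = ViT-H, 1280 = ViT-bigG). A diagnostic sketch (the file path and the flat "image_proj.proj.weight" key are assumptions based on the common safetensors layout):

```python
# Check which CLIP Vision encoder an IPAdapter checkpoint expects.
# Assumptions: safetensors layout with a flat "image_proj.proj.weight"
# key, and the usual 768/1024/1280 embedding-size mapping.
from safetensors.torch import load_file

ENCODER_BY_DIM = {768: "CLIP ViT-L/14", 1024: "CLIP ViT-H/14", 1280: "CLIP ViT-bigG/14"}

sd = load_file("ip-adapter_sd15.safetensors")  # hypothetical path
dim = sd["image_proj.proj.weight"].shape[1]
print(f"Adapter expects a {ENCODER_BY_DIM.get(dim, 'unknown')} image encoder (embed dim {dim}).")
```

In the errors above, the checkpoint wants 1024 (ViT-H) while the loaded CLIP Vision model produces 1280 or 768, so swapping in the ViT-H image encoder the adapter was trained with is the usual fix.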
@zensack7310 · 6 months ago
Hello, thanks for the video. I have been fighting with this for several days. I removed the background of the character, leaving a black background, then I created HED and OpenPose maps; both were perfectly backgroundless. I also added ip2p. When creating the video, the character appears perfect, but the background is dark with stripes and lights that have nothing to do with the prompt. I want it outdoors in sunlight, but it is dark like a dark room. (If I bypass AnimateDiff, it makes exactly the image in the prompt I'm writing.)
@enigmatic_e · 6 months ago
You will always have stuff generated in the background. You need to remove the background with some external software, using a mask or rotoscoping.
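If you'd rather script that masking step, here's a sketch using the rembg package on a folder of extracted frames (the folder names are placeholders; dedicated roto tools will still give cleaner edges than this):

```python
# Per-frame background removal sketch (assumes the `rembg` package and
# a folder of extracted PNG frames; folder names are illustrative).
from pathlib import Path
from PIL import Image
from rembg import remove

src, dst = Path("frames_in"), Path("frames_out")
dst.mkdir(exist_ok=True)
for f in sorted(src.glob("*.png")):
    cut = remove(Image.open(f))  # returns RGBA with background removed
    cut.save(dst / f.name)
```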
@theindiephotographs · 7 months ago
Best in Bizz
@enigmatic_e · 7 months ago
I appreciate that. 🙏🏽🙏🏽
@zensack7310 · 6 months ago
By the way, I see that everyone sets the width and height in external nodes? I change the values directly in the upscale node; does that alter anything?
@enigmatic_e · 6 months ago
I would just try it the way you have it set up and see how it looks.
@fanyang2492 · 1 month ago
Could you explain why you set the resolution to 704 in the HED Soft-Edge Lines node?
@enigmatic_e · 1 month ago
I might have been playing around with parameters; it makes the HED resolution higher, but I can't remember if it makes much of a difference.
@lei.1.6 · 6 months ago
Hey, I get this error and I've been trying my best to troubleshoot, to no avail:
Error occurred when executing KSampler: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
@sassy-penguin · 7 months ago
Quick note - I installed the IPAdapter models directly from Comfy, and it put them in the wrong folder. I found the folder and moved the contents over, then it worked. Overall - phenomenal work, I am running the flow right now. Does anyone know what CRF does? It doesn't seem to be affecting anything.
@enigmatic_e · 7 months ago
🙏🏽
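On the CRF question above: CRF is ffmpeg's constant-rate-factor quality setting, which the Video Combine node appears to pass through for h264/h265 output formats (an assumption about the node's internals; it would explain why it does nothing for GIF output). Lower CRF means higher quality and bigger files. A rough stand-alone equivalent, with illustrative paths and fps:

```python
# Roughly what an h264 export with a CRF setting does, via ffmpeg.
# Paths, fps, and the CRF value are illustrative.
import subprocess

subprocess.run([
    "ffmpeg", "-framerate", "24", "-i", "frames_out/%05d.png",
    "-c:v", "libx264",
    "-crf", "19",            # 0 = lossless, ~18-23 = visually good, higher = smaller/worse
    "-pix_fmt", "yuv420p",   # broad player compatibility
    "output.mp4",
], check=True)
```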
@batuhansardas3651 · 1 month ago
Thanks for the tutorial, but how can I find "Apply IPAdapter"? I tried to load it but couldn't find it.
@What-If12 · 1 month ago
You can replace the "Apply IPAdapter" node with the "IPAdapter Advanced" node.
@danielo9827 · 6 months ago
I've found that when you're trying to replace a subject with a specific style (vid2vid), using the ip2p ControlNet helps with the style transfer, whether it's from IPAdapter or a LoRA. I have a question about something I haven't tried yet: it would seem that you can use IPAdapter to influence an image's background. Would that be possible in vid2vid?
@user-sd1yy7rn7g · 6 months ago
Is there a way to process 2-3 minute videos? Anything more than 150-200 frames crashes my ComfyUI. Is there a way to do it in batches, maybe? I'm already using a low aspect ratio and everything.
@enigmatic_e · 6 months ago
Have you tried using LCM?
@the_one_and_carpool · 5 months ago
Set the load image cap to 150 on the first run; then on later runs keep the cap and skip the first 150 frames, then the first 300, etc. Or break your images up into multiple folders and run each folder. A sketch of this is below.
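That batching scheme in code, for planning the runs (frame_load_cap and skip_first_frames are the Load Video (VHS) node inputs; the totals are made-up examples):

```python
# Plan vid2vid batches: cap each run at BATCH frames and skip past the
# frames already rendered. Values mirror the VHS Load Video node inputs.
TOTAL_FRAMES = 3000  # e.g. ~2 minutes at 25 fps (example value)
BATCH = 150

for run, start in enumerate(range(0, TOTAL_FRAMES, BATCH), start=1):
    print(f"run {run}: frame_load_cap={BATCH}, skip_first_frames={start}")
```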
@Disco_Tek · 6 months ago
Anyone know a ControlNet to prevent color shift on something like clothing with vid2vid?
@Freezasama · 6 months ago
Which model is that in the Load CLIP Vision node? "model.safetensors"!? Where do you get it?
@enigmatic_e · 6 months ago
You could probably Google it. I think that's how I found a few, by searching for CLIP models for ComfyUI or something like that.
@MisterCozyMelodies · 28 days ago
There's a problem these days: when you update IPAdapter, this workflow doesn't work anymore. Do you know how to fix it, or where to get a new workflow with the updated IPAdapter?
@MisterCozyMelodies · 28 days ago
Never mind!! I followed some of the comments here and found the answer. Works great, nice tutorial, thanks a lot.
@enigmatic_e · 27 days ago
I’ll try to upload an updated version today
@williambal9392 · 4 months ago
I have this error: Error occurred when executing IPAdapterApplyFaceID: InsightFace: No face detected. Any solutions, please? :)
@wagmi614 · 1 month ago
Any new workflow?
@Gardener7 · 3 months ago
Does AnimateDiff work with SDXL sizes?
@enigmatic_e · 2 months ago
It does, but SDXL doesn't give the best results at the moment.
@kleber1983 · 6 months ago
My AnimateDiff loader is not working; it doesn't recognize the mm-Stabilized_high.pth that is in the proper folder...
@enigmatic_e · 6 months ago
There might be another folder you have to put it in. Do you have more than one AnimateDiff folder in your custom_nodes folder?
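For reference, a sketch that lists the two folders AnimateDiff motion modules have commonly lived in, depending on the node-pack version (assumed paths; adjust the root to your install):

```python
# List motion-module candidates in the folders AnimateDiff nodes
# commonly read from (assumed paths; adjust `root` to your install,
# and extend the glob if your modules are .ckpt/.safetensors).
from pathlib import Path

root = Path("ComfyUI")
candidates = [
    root / "custom_nodes" / "ComfyUI-AnimateDiff-Evolved" / "models",
    root / "models" / "animatediff_models",
]
for folder in candidates:
    found = sorted(p.name for p in folder.glob("*.pth")) if folder.exists() else "missing"
    print(folder, "->", found)
```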
@leo.leon__ · 6 months ago
How was the 3D video you are uploading created?
@enigmatic_e · 6 months ago
www.mixamo.com/
@leo.leon__ · 6 months ago
@@enigmatic_e thanks
@macadonards1100 · 4 months ago
Will this work with 11 GB of VRAM?
@knicement · 7 months ago
What PC Specs do you use?
@enigmatic_e · 7 months ago
I have an RTX 4090. Sorry I will make sure to put that information on my videos from now on. Thank you!
@knicement · 7 months ago
@@enigmatic_e thank you
@hartdr8074 · 6 months ago
Could you adjust your video's volume to be even lower, so that only ants can hear it? Thanks
@enigmatic_e · 6 months ago
I'm recording at a standard volume for video. I try to peak at -6 dB, averaging -12 dB.
@shyvanatop4777 · 7 months ago
I am so confused, I am getting this error:
Error occurred when executing IPAdapterApply: Error(s) in loading state_dict for ImageProjModel: size mismatch for proj.weight: copying a param with shape torch.Size([3072, 1024]) from checkpoint, the shape in current model is torch.Size([3072, 768]).
Any idea on how to fix this?
@enigmatic_e · 7 months ago
Hmm, are you missing the model for IPAdapter?
@shyvanatop4777 · 7 months ago
@@enigmatic_e I had the wrong model! It's solved now. Ty
@RickyMarchant · 6 months ago
I have the same issue, and I have the model shown in the video. Do you think the CLIP model is causing this? I can't find that one, so I am using CLIP-G.
@risasgrabadas3663 · 6 months ago
I have the same problem for both models, investigating...
@luciogiolli · 6 months ago
same here
@choboruin · 3 months ago
Swear u gotta be a genius to understand this stuff lol
@zorilov_ai · 7 months ago
nice, thanks.
@enigmatic_e · 7 months ago
👍🏽👍🏽
@aarvndh5419 · 7 months ago
Can I do this in Stable Diffusion?
@enigmatic_e · 7 months ago
Do you mean in Automatic1111, the WebUI? If so, I would say it's more limited than ComfyUI. ComfyUI allows for way more customization.
@aarvndh5419 · 7 months ago
@@enigmatic_e Okay, I'll try ComfyUI.
@attentiondeficitdisorder · 7 months ago
That UI isn't looking so comfy anymore. How the hell are people keeping track of all these nodes 0.0
@enigmatic_e · 7 months ago
😂😂 I guess it takes some getting used to.
@attentiondeficitdisorder · 7 months ago
Node editor spaghetti is my kryptonite, I commend anyone able to keep track. You also can't argue with the results. Probably the best consistency I have seen yet. Good stuff!@@enigmatic_e
@AIPixelFusion · 7 months ago
The workflow sharing and reuse is comfy AF tho!!
@sebaccimaster · 6 months ago
It's called continuous learning. If it sounds like hard work, it's because it is 😅…
@blender_wiki · 6 months ago
It's not even that big a workflow 🤷🏿‍♀️
@theairchitect · 7 months ago
as young people say... first! _o/ 😅
@enigmatic_e · 7 months ago
🎉🎉🎉
@portl3582 · 2 months ago
When it gets to ControlNet, it seems that the DWPose Estimation node is not available? I also get this message:
Error occurred when executing LeReS-DepthMapPreprocessor: LeresDetector.from_pretrained() missing 1 required positional argument: 'pretrained_model_or_path'
File "C:\----COMFY-UI-APPS+FILES\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
File "C:\----COMFY-UI-APPS+FILES\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
File "C:\----COMFY-UI-APPS+FILES\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
File "C:\----COMFY-UI-APPS+FILES\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\leres.py", line 21, in execute
    model = LeresDetector.from_pretrained().to(model_management.get_torch_device())
@musyc1009 · 4 months ago
Anyone got an error at the KSampler part?
Error occurred when executing KSampler: Unknown context_schedule 'uniform'.
File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1375, in sample
File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1345, in common_ksampler
File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 346, in motion_sample
File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\utils_model.py", line 360, in wrapped_function
File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 100, in sample
File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 713, in sample
File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 618, in sample
File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 557, in sample
File "D:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 154, in sample_euler_ancestral
File "D:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
File "D:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 281, in forward
File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 271, in forward
File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 268, in apply_model
File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 385, in evolved_sampling_function
File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 461, in sliding_calc_cond_uncond_batch
File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\context.py", line 296, in get_context_windows
    raise ValueError(f"Unknown context_schedule '{opts.context_schedule}'.")
@tasticad58 · 4 months ago
I've got the same error (both on macOS and Windows). Have you found how to solve it, by any chance..?
@jonathanbeaton6984 · 3 months ago
Same here! Any luck figuring it out?
@enigmatic_e · 3 months ago
Fixed it and updated the link, check the description.
@musyc1009 · 3 months ago
Thanks for fixing it bro! @@enigmatic_e