
ComfyUI: Master Morphing Videos with Plug-and-Play AnimateDiff Workflow (Tutorial) 

Abe aTech
753 subscribers
16K views

Push your creative boundaries with ComfyUI using a free plug-and-play workflow! Generate captivating loops, eye-catching intros, and more! This free and powerful tool is perfect for creators of all levels.
Chapters:
00:00 Sample Morphing Videos
01:15 Downloads
02:09 Folder locations
02:14 Workflow Overview
04:10 Generating first Morph
04:40 Running the Workflow
04:47 Quick bonus tips
06:35 Supercharge the Workflow
08:58 Getting more variation in batches
10:31 Scaling up
10:59 Scaling up with model
11:35 This is pretty cool
I'll show you how to make morphing videos and use images to create stunning animations and videos.
You'll also learn how to use text prompts to morph between anything you can imagine!
Plus, there are some valuable tips and tricks to streamline the ComfyUI morphing video workflow and save time while creating your own mind-bending visuals.
#########
Links:
########
Workflow: Morpheus (modified workflow for text to image to video)
openart.ai/workflows/abeatech...
Tutorial for Batch Generating Text to Image using external text file:
• ComfyUI: Batch Generat...
Workflow: ipiv's Morph - img2vid AnimateDiff LCM:
civitai.com/models/372584?mod...
Note: See 02:09 of the video for model folder locations; a download/placement sketch also follows this links list.
AnimateDiff:
huggingface.co/wangfuyun/Anim...
VAE:
huggingface.co/stabilityai/sd...
AnimateLCM LORA:
huggingface.co/wangfuyun/Anim...
CLIP Vision Model ViT-H:
Download and rename to CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors:
huggingface.co/h94/IP-Adapter...
CLIP Vision Model ViT-G:
Download and rename to CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors:
huggingface.co/h94/IP-Adapter...
IPAdapter Model:
huggingface.co/h94/IP-Adapter...
ControlNet (QRCode):
huggingface.co/monster-labs/c...
Motion Animations for AnimateDiff: civitai.com/posts/2011230
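
Since several of the links above are truncated and the two CLIP Vision files must be renamed after download, here is a minimal sketch of fetching and placing them with huggingface_hub. The repo file paths and the models/clip_vision destination are assumptions based on common ComfyUI setups; cross-check each model card and 02:09 of the video before relying on them.

```python
# Sketch: download the renamed CLIP Vision encoders into a ComfyUI tree.
# Assumption: the h94/IP-Adapter repo ships both encoders as generic
# model.safetensors files, hence the renames the video asks for.
import shutil
from pathlib import Path

from huggingface_hub import hf_hub_download

COMFY = Path("ComfyUI")  # adjust to your install root

def fetch(repo_id: str, filename: str, subdir: str, rename_to: str) -> Path:
    """Download one file from the Hub, then copy it under ComfyUI/models/<subdir>."""
    cached = hf_hub_download(repo_id=repo_id, filename=filename)
    dest_dir = COMFY / "models" / subdir
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / rename_to
    shutil.copy(cached, dest)
    return dest

# ViT-H encoder:
fetch("h94/IP-Adapter", "models/image_encoder/model.safetensors",
      "clip_vision", "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors")

# ViT-bigG encoder, same idea:
fetch("h94/IP-Adapter", "sdxl_models/image_encoder/model.safetensors",
      "clip_vision", "CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors")
```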
################
Music: Bensound.com/royalty-free-music
License code: LU8J6ZAOXHXNOAI4

Science

Published: 19 Jun 2024

Comments: 98
@ted328 · 2 months ago
Literally the answer to my prayers, have been looking for exactly this for MONTHS
@alessandrogiusti1949 · a month ago
After following many tutorials, you're the only one getting me to the results in a very clear way. Thank you so much!
@SylvainSangla · a month ago
Thanks a lot for sharing this, very precise and complete guide! 🥰 Cheers from France!
@AlvaroFCelis · a month ago
Thank you so much! Very clear and organized. Subbed.
@MSigh · a month ago
Excellent! 👍👍👍
@popo-fd3fr · a month ago
Thanks man. I just subscribed.
@mcqx4 · 2 months ago
Nice tutorial, thanks!
@abeatech · 2 months ago
Glad it was helpful!
@velvetjones8634 · 2 months ago
Very helpful, thanks!
@abeatech · 2 months ago
Glad it was helpful!
@TechWithHabbz · 2 months ago
You're about to blow up, bro. Keep it going. Btw, I was subscriber #48 😁
@abeatech · 2 months ago
Thanks for the sub!
@zarone9270 · a month ago
Thx Abe!
@SF8008 · a month ago
Amazing! Thanks a lot for this!!! Btw, which nodes do I need to disable in order to get back to the original flow (the one based only on input images and not on prompts)?
@MACH_SDQ · a month ago
Goooooood
@gorkemtekdal · 2 months ago
Great video! I want to ask: can we use an init image for this workflow like we do in Deforum? I need the video to start with a specific image on the first frame, then change through the prompts. Do you know how that's possible in ComfyUI / AnimateDiff? Thank you!
@abeatech · 2 months ago
I haven't personally used Deforum, but it sounds like the same concept. This workflow uses 4 init images at different points during the 96 frames to guide the animation. The IPAdapter and ControlNet nodes do most of the heavy lifting, so prompts aren't really needed, but I've used them to fine-tune outputs. I'd encourage you to try it out and see if it gives you the results you're looking for.
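To make that timing concrete: with 4 guide images over 96 frames, each image anchors roughly a 24-frame segment and the fade masks crossfade between neighbours. Below is a minimal sketch of such a schedule in Python; evenly spaced keyframes and linear fades are my assumptions for illustration, and ipiv's actual mask nodes may use different curves and overlaps.

```python
# Sketch: linear crossfade weights for 4 guide images over 96 frames.
# This mimics the *idea* behind the workflow's batch fade masks,
# not the exact node graph.
NUM_FRAMES, NUM_IMAGES = 96, 4
SEGMENT = NUM_FRAMES // NUM_IMAGES  # 24 frames per guide image

def weights(frame: int) -> list[float]:
    """Per-image guidance weight at a given frame (weights sum to 1)."""
    pos = frame / SEGMENT               # position measured in segments
    w = [0.0] * NUM_IMAGES
    i = min(int(pos), NUM_IMAGES - 1)   # current segment index
    t = pos - i                         # progress inside the segment
    if i + 1 < NUM_IMAGES:
        w[i] = 1.0 - t                  # fade out current image...
        w[i + 1] = t                    # ...while fading in the next
    else:
        w[i] = 1.0                      # hold the last image
    return w

for f in (0, 12, 24, 48, 95):
    print(f, [round(x, 2) for x in weights(f)])
```

If the masks are authored for a fixed 96 frames, this also hints at why simply raising the total frame count doesn't automatically stretch the transitions: the mask schedule has to be rebuilt for the new length.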
@MariusBLid · 2 months ago
Great stuff man! Thank you 😀 What are your specs, btw? I only have 8GB VRAM.
@paluruba · a month ago
Thank you for this video! Any idea what to do when the videos are blurry?
@jesseybijl2104 · a month ago
Same here, any answer?
@Injaznito1 · a month ago
NICE! I tried it and it works great. Thanx for the tut! Question though: I tried changing the 96 to a larger number so the changes between pictures take a bit longer, but I don't see any difference. Is there something I'm missing? Thanx!
@user-yo8pw8wd3z · a month ago
Good video. Where can I find the link to the additional video masks? I don't see it in the description.
@hoptoad · 5 days ago
This is great! Do you know if there is a way to "batch" many variations, where you give each of the four guidance images a folder and it runs through and does a new animation with different source images multiple times?
@BrianDressel · a month ago
Excellent walkthrough of this, thanks.
@rowanwhile · 2 months ago
Brilliant video. Thanks so much for sharing your knowledge.
@cabb_ · a month ago
ipiv did an incredible job with this workflow! Thanks for the tutorial.
@pro_rock1910 · a month ago
❤‍🔥❤‍🔥❤‍🔥
@petertucker455 · 15 days ago
Hi Abe, I found the final animation output is wildly different in style & aesthetic from the initial input images. Any tips for retaining the overall style? Also, have you got this workflow to work with SDXL?
@aslgg8114 · 2 months ago
What should I do to make the reference image persistent?
@Halfgawd_Halfdevil · a month ago
Managed to get this running. It does okay, but I'm not seeing much influence from the ControlNet motion video input. Any way to make that more apparent? I've also noticed a Shutterstock overlay near the bottom of the clip. It's translucent but noticeable, and kind of ruins everything. Any way to eliminate that artifact?
@Caret-ws1wo · 29 days ago
Hey, my animations come out super blurry and are nowhere near as clear as yours. I can barely make out the monkey, it's just a bunch of moving brown lol. Is there a reason for this?
@amunlevy2721 · a month ago
Getting errors that nodes are missing even after installing IP Adapter Plus... Missing nodes: IPAdapterBatch and IPAdapterUnifiedLoader.
@white_friend · 4 days ago
Try 'Update All' in the Manager menu.
@TheNexusRealm · a month ago
Cool, how long did it take you?
@evgenika2013 · 14 days ago
Everything is great, but I get a blurry result on my horizontal artwork. Any suggestion what to check?
@wagmi614 · a month ago
Could one add some kind of IPAdapter to add your own face to the transform?
@ComfyCott · a month ago
Dude, I loved this video! You explain things very well, and I love how you explain in detail as you build out strings of nodes! Subbed!
@SapiensVirtus · 11 days ago
Hi! Beginner's question: if I run software like ComfyUI locally, does that mean all the AI art, music, and works I generate are free to use for commercial purposes? Or am I violating copyright terms? I'm searching for more info about this but I get confused. Thanks in advance.
@MichaelL-mq4uw · 2 months ago
Why do you need ControlNet at all? Can it be skipped, morphing without any mask?
@GiancarloBombardieri · 11 days ago
It worked fine, but now it throws an error at Load Video Path. Is there an update?
@unemployed9665 · 13 days ago
How can I get a progress bar at the top of the screen like yours? Do I have to reinstall all of ComfyUI for this workflow? I installed crystools but the progress bar doesn't appear at the top :/ Thank you for your video, you are a god!
@CoqueTornado · 2 months ago
Great tutorial. I'm wondering... how much VRAM does this setup need?
@abeatech · 2 months ago
I've heard of people running this successfully on as little as 8GB VRAM, but you'll probably need to turn off the frame interpolation. You can also try running this in the cloud at OpenArt (but your checkpoint options might be limited): openart.ai/workflows/abeatech/tutorial-morpheus---morphing-videos-using-text-or-images-txt2img2vid/fOrrmsUtKEcBfopPrMXi
@CoqueTornado · 2 months ago
@@abeatech Thank you!! Will try the two suggestions! Congrats on the channel!
@kwondiddy · a month ago
I'm getting errors when trying to run... a few items that say "value not in list: ckpt_name:", "value not in list: lora_name", and "value not in list: vae_name:". I'm certain I put all the downloads in the correct folders and named everything appropriately... Any thoughts?
@damird9635 · 3 days ago
Working, but when I select "plus high strength" I get a CLIP Vision error. What am I missing? I downloaded everything... Is ViT-G the problem for some reason?
@Ai_Gen_mayyit · a month ago
Error occurred when executing VHS_LoadVideoPath: module 'cv2' has no attribute 'VideoCapture'
@chinyewcomics · 20 days ago
Hi, does anybody know how to add more images to create a longer video?
@produccionesvoid · 23 days ago
When I use Manager → Install Missing Nodes it doesn't finish, and it says: "To apply the installed/updated/disabled/enabled custom node, please RESTART ComfyUI. And refresh browser..." What can I do about that?
@tetianaf5172 · a month ago
Hi! I get this error all the time: "Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)". Though I use a 1.5 checkpoint. Please help.
@brockpenner1 · a month ago
ComfyUI threw an error in the VRAM Debug node of Frame Interpolation: "Error occurred when executing VRAM_Debug: VRAM_Debug.VRAMdebug() got an unexpected keyword argument 'image_passthrough'". Any help would be appreciated!
@saundersnp · a month ago
I've encountered this error: "Error occurred when executing RIFE VFI: Tensor type unknown to einops"
@TinyLLMDemos · a month ago
Where do I get your input images?
@user-vm1ul3ck6f · 2 months ago
Help! I encountered this error while running it: "Error occurred when executing IPAdapterUnifiedLoader: module 'comfy.model_base' has no attribute 'SDXL_instructpix2pix'"
@abeatech · 2 months ago
Sounds like it could be a couple of things: a) you might be trying to use an SDXL checkpoint, in which case try using SD1.5 (the AnimateDiff model in the workflow only works with SD1.5); or b) an issue with your IPAdapter node. You can try making sure the IPAdapter model is downloaded and in the right folder, or reinstalling the ComfyUI_IPAdapter_plus node (delete the custom node folder and reinstall from Manager or GitHub).
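As a quick way to check point (a) before digging into the nodes, you can peek at a checkpoint's tensor names. A rough heuristic sketch follows; the key prefixes are assumptions based on common SD1.5/SDXL checkpoint layouts, not something stated in the video.

```python
# Rough heuristic: SDXL checkpoints carry a second text encoder under
# "conditioner.embedders.1", while SD1.5 uses a single "cond_stage_model".
from safetensors import safe_open

def guess_arch(path: str) -> str:
    """Guess whether a .safetensors checkpoint is SD1.5 or SDXL."""
    with safe_open(path, framework="pt", device="cpu") as f:
        keys = list(f.keys())
    if any(k.startswith("conditioner.embedders.1") for k in keys):
        return "SDXL"
    if any(k.startswith("cond_stage_model.") for k in keys):
        return "SD1.5 (or another single-CLIP SD variant)"
    return "unknown"

# Hypothetical path; point this at your own checkpoint.
print(guess_arch("ComfyUI/models/checkpoints/my_model.safetensors"))
```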
@frankiematassa1689 · a month ago
Error occurred when executing IPAdapterBatch: "Error(s) in loading state_dict for ImageProjModel: size mismatch for proj.weight: copying a param with shape torch.Size([3072, 1280]) from checkpoint, the shape in current model is torch.Size([3072, 1024])." I followed this video exactly and am only using SD 1.5 checkpoints. I cannot find anywhere how to fix this.
@AlexDisciple · 24 days ago
Thanks for this. Do you know what could be causing this error: "Error occurred when executing KSampler: Given groups=1, weight of size [320, 5, 3, 3], expected input[16, 4, 64, 36] to have 5 channels, but got 4 channels instead"
@AlexDisciple · 24 days ago
I figured out the problem: I was using the wrong ControlNet. I'm having a different issue though, where my initial output is very "noisy", as if there was latent noise all over it. Is it important for the source images to be in the same aspect ratio as the output?
@AlexDisciple · 23 days ago
OK, found the solution here too: I was using a photorealistic model, which somehow the workflow doesn't seem to like. Switching to Juggernaut fixed it.
@Ai_Gen_mayyit · a month ago
Error occurred when executing VHS_LoadVideoPath: module 'cv2' has no attribute 'VideoCapture' (your video timestamp: 04:20)
@axxslr8862 · a month ago
In my ComfyUI there is no Manager option... help please.
@ESLCSDivyasagar · a month ago
Search YouTube for how to install it.
@ImTheMan725 · a month ago
Why can't you morph 20/50 pictures?
@AI-Efast · a month ago
Why is my generated animation very different from the reference images?
@cohlsendk · a month ago
Is there a way to increase the frames/batch size for FadeMask? Everything over 96 is messing up the FadeMask -.-''
@cohlsendk · a month ago
Got it :D
@WalkerW2O · a month ago
Hi Abe aTech, very informative, and I like your work very much.
@yakiryyy · 2 months ago
Hey! I've managed to get this working, but I was under the impression this workflow would animate between the given reference images. The results I get are pretty different from the reference images. Am I wrong in my assumption?
@abeatech · 2 months ago
You're right: it uses the reference images (4 frames vs 96 total frames) as a starting point and generates additional frames, but the results should still be in the same ballpark. If you're getting drastically different results, it might be a mix of your subject + SD1.5 model. I've had the best results by using a similar type of model (photograph, realism, anime, etc.) for both the image generation and the animation.
@AI-Efast · a month ago
@@abeatech Is there any way to make the result more like the reference images?
@TinyLLMDemos · a month ago
How do I kick it off?
@devoiddesign · a month ago
Hi! Any suggestion for a missing IPAdapter? I'm confused because I didn't get an error telling me to install or update anything, and I have all of the IPAdapter nodes installed... The process stopped on the "IPAdapter Unified Loader" node:

!!! Exception during processing!!! IPAdapter model not found.
Traceback (most recent call last):
  File "/workspace/ComfyUI/execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "/workspace/ComfyUI/execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "/workspace/ComfyUI/execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "/workspace/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/IPAdapterPlus.py", line 453, in load_models
    raise Exception("IPAdapter model not found.")
Exception: IPAdapter model not found.
@tilkitilkitam · a month ago
Same problem.
@tilkitilkitam · a month ago
ip-adapter_sd15_vit-G.safetensors: install this from the Manager.
@devoiddesign · a month ago
@@tilkitilkitam Thank you for responding. I already had the model installed, but it was not seeing it. I ended up restarting Comfy completely after I updated everything from the Manager, instead of only doing a hard refresh, and that fixed it.
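For anyone hitting the same "IPAdapter model not found" exception, the sketch below compares your models/ipadapter folder against file names the unified loader commonly looks for. The expected-name list is an assumption drawn from the ComfyUI_IPAdapter_plus README, so verify it against that repo.

```python
# Quick check: which of the commonly expected IPAdapter files are present?
# File names below are assumptions; confirm them in the
# ComfyUI_IPAdapter_plus README before relying on this list.
from pathlib import Path

IPADAPTER_DIR = Path("ComfyUI/models/ipadapter")
EXPECTED = [
    "ip-adapter_sd15.safetensors",
    "ip-adapter-plus_sd15.safetensors",
    "ip-adapter-plus-face_sd15.safetensors",
    "ip-adapter_sd15_vit-G.safetensors",  # the one mentioned above
]

for name in EXPECTED:
    status = "OK     " if (IPADAPTER_DIR / name).exists() else "MISSING"
    print(f"{status} {name}")
```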
@creed4788 · a month ago
VRAM required?
@Adrianvideoedits · a month ago
16GB for upscaled.
@creed4788 · a month ago
@@Adrianvideoedits Could you make the videos first, then close and load the upscaler to improve the quality? Or does it all have to be together, so it can't be done in 2 different workflows?
@Adrianvideoedits · a month ago
@@creed4788 I don't see why not. But upscaling itself takes the most VRAM, so you would have to find an upscaler for lower-VRAM cards.
@Adrianvideoedits · a month ago
You didn't explain the most important part, which is how to run the same batch with and without upscaling. It generates new batches every time you queue the prompt, so the preview batch is a waste of time. I like the idea though.
@7xIkm · 14 days ago
Idk, maybe a seed? Efficiency nodes?
@rooqueen6259 · a month ago
Has anyone run into loading of the 2 new models stopping at 0%? I also had a case where loading the 3 new models reached 9% and went no further. What is the problem? :c
@ErysonRodriguez · 2 months ago
Noob question: why are my results so different from my input images?
@ErysonRodriguez · 2 months ago
I mean, the images I loaded produce a different output instead of transitioning.
@abeatech · 2 months ago
The results will not be exactly the same, but they should still be in the same ballpark. If you're getting drastically different results, it might be a mix of your subject + SD1.5 model. I've had the best results by using a similar type of model (photograph, realism, anime, etc.) for both the image generation and the animation. It's also worth double-checking that you have the VAE and LCM LoRA selected in the settings module.
@anthonydelange4128 · 3 hours ago
its morbing time...
@3djramiclone · a month ago
This is not for beginners, put that in the description, mate.
@kaikaikikit · a month ago
What are you crying about... go find a beginner class if it's too hard to understand...
@zems_bongo · 25 days ago
I don't understand why it doesn't work for me; I get this type of message:

Error occurred when executing CheckpointLoaderSimple: 'NoneType' object has no attribute 'lower'
  File "/home/ubuntu/ComfyUI/execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "/home/ubuntu/ComfyUI/execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "/home/ubuntu/ComfyUI/execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "/home/ubuntu/ComfyUI/nodes.py", line 516, in load_checkpoint
    out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
  File "/home/ubuntu/ComfyUI/comfy/sd.py", line 446, in load_checkpoint_guess_config
    sd = comfy.utils.load_torch_file(ckpt_path)
  File "/home/ubuntu/ComfyUI/comfy/utils.py", line 13, in load_torch_file
    if ckpt.lower().endswith(".safetensors"):
@miukatou · 27 days ago
I'm sorry, I need help; I'm a complete beginner. I can't find any SD 1.5 model. Where do I download it? Also, I cannot find an ipadapter folder in my model path. Do I need to create a folder named ipadapter myself? 🥲🥲
@user-vm1ul3ck6f · 2 months ago
Help! I encountered this error while running it: "Error occurred when executing IPAdapterUnifiedLoader: module 'comfy.model_base' has no attribute 'SDXL_instructpix2pix'"
@abeatech · 2 months ago
Sounds like it could be a couple of things: a) you might be trying to use an SDXL checkpoint, in which case try using SD1.5 (the AnimateDiff model in the workflow only works with SD1.5); or b) an issue with your IPAdapter node. You can try making sure the IPAdapter model is downloaded and in the right folder, or reinstalling the ComfyUI_IPAdapter_plus node (delete the custom node folder and reinstall from Manager or GitHub).
@Halfgawd_Halfdevil · a month ago
@@abeatech It says in the note to install it in the clip_vision folder, but that's not it: none of the preloaded models are there, and the newly installed one doesn't appear in the dropdown selector. If it's not that folder, where are you supposed to install it? And if the node is bad, why is it used in the workflow in the first place? Shouldn't it just use the IPAdapter Plus node?