
AnimateDiff + Instant Lora: the ultimate method for video animations in ComfyUI (img2img, vid2vid, txt2vid)

35K views · 816 likes

Combine AnimateDiff and the Instant Lora method for stunning results in ComfyUI. Easy to learn and try.
#animatediff #comfyui #stablediffusion
============================================================
💪 Support this channel with a Super Thanks or a ko-fi! ko-fi.com/koalanation
☕ Amazing ComfyUI workflows: tinyurl.com/y9v2776r
🚨 Use Runpod and access powerful GPUs for best ComfyUI experience at a fraction of the price. tinyurl.com/58x2bpp5 🤗
☁️ Starting in ComfyUI? Run it on the cloud without installation, very easy! ☁️
👉 RunDiffusion: tinyurl.com/ypp84xjp 👉15% off first month with code 'koala15'
👉 ThinkDiffusion: tinyurl.com/4nh2yyen
🤑🤑🤑 FREE! Check my runnable workflows in OpenArt.ai: tinyurl.com/2twcmvya
============================================================
In this video I will show you how to install all the nodes and models required for AnimateDiff and the Instant Lora method with IP Adapters in ComfyUI. Below is the complete list of requirements.
In the video you will learn how to use the method with a simple example. With the addition of ControlNet, this method is AMAZING! Use your creativity to make something amazing!
Please, support this channel with a ko-fi!
ko-fi.com/koalanation
Run ComfyUI with:
RTX 3090 24GB: tinyurl.com/27x45jy4
This video is inspired by the many videos being recommended about AnimateDiff and Instant Lora. However, I was not able to find a video that combines these two approaches, so I decided to make this one. The recognition should go to the original creators of the separate methods:
AnimateDiff (original): tinyurl.com/4hfvv34r
AnimateDiff Evolved: tinyurl.com/yrwz576p
Aloe Vera's Instant Lora method (no training): tinyurl.com/yc4wmstr
Workflow for this video: tinyurl.com/3terxszu
Basic requirements:
ComfyUI: tinyurl.com/24srsvb3
ComfyUI Manager: tinyurl.com/ycvm4e29
Vast.ai: tinyurl.com/5n972ran
WAS node Suite: tinyurl.com/2ajuh2mx
OpenPose frames: tinyurl.com/y23cfnk4
Girl Exercising: tinyurl.com/3dsrkzy9
GeminiX Mix (civit.ai): tinyurl.com/2ssudwhf
GeminiX Mix (huggingface): tinyurl.com/nfxphz48
ControlNet v1.1: tinyurl.com/je85785u
Aloe Vera's Instant Lora (no training) requirements:
Method: tinyurl.com/yc4wmstr
IP Adapter: tinyurl.com/3x3f2rfw
ComfyUI Impact pack: tinyurl.com/4jsmf8va
ComfyUI Inspire pack: tinyurl.com/2wkzezxm
IP Adapter bin model: tinyurl.com/2p8ykxf6
IP Adapter clipvision encoder: tinyurl.com/2wrtvnx4
AnimateDiff (Evolved) requirements:
AnimateDiff Evolved: tinyurl.com/yrwz576p
Advanced ControlNet custom node: tinyurl.com/yc3szuuf
VideoHelper Suite: tinyurl.com/47hka2nn
ControlNet Auxiliary preprocessors: tinyurl.com/3j3p6bjw
AnimateDiff models (mm and mloras): tinyurl.com/mw6jwzpk
AnimateDiff stabilized models: tinyurl.com/mr42m5hp
AnimateDiff finetune models: tinyurl.com/mud8cr69
AnimateDiff temporalDiff: tinyurl.com/yc8tnnas
Tracklist:
00:00 Intro
00:11 Requirements (basic, Instant Lora and AnimateDiff)
00:33 AnimateDiff and Instant Lora [no training] methods
00:50 Base images and models
01:31 Installation of Custom Nodes needed (via Manager)
02:52 Installation of models for AnimateDiff and Instant Lora (IP Adapter) and ControlNet (OpenPose)
04:07 Using AnimateDiff + OpenPose template in your workflow
04:34 AnimateDiff workflow setup and testing
06:54 Implementing the Instant Lora method [no training] with IP Adapters in the AnimateDiff workflow
08:33 Adding FaceDetailer to improve the face resolution
09:57 Running all the frames/poses of the ComfyUI workflow
10:27 The final result
10:47 Outro
My other tutorials:
ComfyUI animation tutorial: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-Spc_F57FmK8.html
Vast.ai: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-B9va_h1olkk.html
TrackAnything: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-HoTnTxlwdEw.html
Videos: Pexels
Music: RU-vid Music Library
Edited with Canva, Runway.ml and ClipChamp
Subscribe to Koala Nation Channel: cutt.ly/OZF0UhT
© 2023 Koala Nation
#comfyui #animatediff #sd

Science

Published: 24 Oct 2023

Comments: 107
@proyectodigital7915 · 8 months ago
Great, thanks, I'm going to follow this as soon as I get my vacation. God bless
@koalanation · 8 months ago
Have fun!
@bizarreadventurejojos5379 · 8 months ago
Amazing video
@AlexanderVinogradov-nd9xx · 8 months ago
Cool!!! So simple and logical!!! Man, you're my hero!!!))))
@koalanation · 8 months ago
Thanks! I am happy you like it.
@AlexanderVinogradov-nd9xx · 8 months ago
@@koalanation But I am stuck here: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-Ka4ENd63VBo.htmlfeature=shared&t=277 Everything is OK, but I don't understand how to use OpenPose. In which folder should I put the OpenPose files, and what format should they be?
@koalanation · 8 months ago
Copy the OpenPose images into ComfyUI/input/openpose_animation.
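In practice, that copy step could look like this minimal Python sketch (the source path and the .png extension are assumptions; only the input/openpose_animation destination comes from the reply above):

import shutil
from pathlib import Path

src = Path("downloads/openpose_frames")         # wherever you extracted the pose frames (assumed)
dst = Path("ComfyUI/input/openpose_animation")  # subfolder the workflow points to
dst.mkdir(parents=True, exist_ok=True)
for frame in sorted(src.glob("*.png")):         # sorted keeps the frame order stable
    shutil.copy(frame, dst / frame.name)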
@TheTruthIsGonnaHurt · 5 months ago
liked and subscribed ❤❤
@user-zc9eh1qn5s · 8 months ago
You are the BEST😘
@PaboBabo · 8 months ago
wow!
@rtberbary0101 · 8 months ago
Hey! I'm pretty new to ComfyUI and this video is epic! Is there a chance we can get the file to load it directly? I found that to be the best way for me to understand what is missing, what is not working, and so on.
@koalanation · 8 months ago
I will. I want to check a couple of sites where it is better to share the workflows. I am not happy with comfy.icu and want to see whether to put it on civit.ai or make a GitHub repo.
@koalanation · 8 months ago
Hi, check out what I uploaded to civit.ai and see if you can load it from the files there: tinyurl.com/3terxszu
@TeamPhlegmatisch · 7 months ago
Can you also do something without running? Or are you bound to that posing model? What if I want to make a full animated video, including different objects and scenarios, which do not involve running?
@koalanation · 7 months ago
Yes, you can experiment with other objects and scenarios as you want! I used the OpenPose ControlNet as an example, to add some dynamism to the animation and be able to control the movements of the subject.
@happytoilet1 · 7 months ago
Many thanks for your tutorial. One question: after the checkpoint model, shouldn't we connect it to a LoRA loader and then connect to AnimateDiff? Why do we use an IP Adapter instead of a LoRA node? Thank you in advance. @@koalanation
@koalanation · 7 months ago
The IP Adapter performs the function of the LoRA loader: instead of a full model, we do it with one image. You do not need the LoRA loader, but you can still use one if you want to apply a certain style. As with LoRAs, you can apply different weights for the different styles.
@Go_Siry · 8 months ago
Hello, thank you very much for the video! I only have one problem: it takes me about 2 hours to make a normal video. I am using it in StableSwarm, and I have 32 GB RAM and a 6 GB GTX 1060. Is that why?
@koalanation · 8 months ago
Hi! Both AnimateDiff and the IP Adapter consume quite a bit of VRAM... 6 GB is on the low end, so it is not surprising it takes some time. Start with a few frames (set a cap in the Video/Image upload), and when everything is set, let the render work while you are doing something else.
@AI.ImaGen · 8 months ago
😀 I was looking for that... Looks very good, and thanks for sharing this tutorial. I will try tonight. I have only known AnimateDiff for 3 days and I think it is the new AI challenge. For sure, in a few years we will be able to make a full movie alone and at home... Is the ComfyUI workflow shared somewhere?
@koalanation · 8 months ago
You are welcome! And thanks to you! Things move very fast in AI. One year ago this was far from possible... who knows where we will be one year from now... I still have to upload the workflow, sorry. The base is the one shown in the AnimateDiff repo; you just need to add the Lora part, which is pretty easy if you don't want to wait.
@koalanation · 8 months ago
Check out if you can get it from here: tinyurl.com/3terxszu
@nufh · 8 months ago
I saw the Jupyter interface; which platform are you using? If I want to run it locally, what PC specs do I need?
@koalanation · 8 months ago
I am using a VM from vast.ai, typically with an RTX 3090 or 4090 with 16 or 24 GB VRAM. Instances launch in JupyterLab on Linux (not sure which version).
@nufh · 8 months ago
@@koalanation Compared with RunPod, which one is better?
@koalanation · 8 months ago
Vast.ai is cheaper, but you need to reinstall every time after you kill your instance. I have not tested RunPod: it is slightly more expensive, but in principle it has persistent storage (at a reasonable price).
@Mindset2Work · 8 months ago
Hey, is it better to use TemporalDiff, in particular for background consistency?
@koalanation · 8 months ago
I think TemporalDiff improves the consistency of the background, but I cannot find settings that provide a really consistent background, at least with this workflow and these video frames. I did manage to get a nicer background by inpainting the lady out of the original frames (in a video editor), running the workflow with only the background (adjusting the prompt to not show a person, and with a reduced number of frames), interpolating the resulting frames (to obtain the same number of frames as the original animation), and finally blending the background and the running lady (also in a video editor). I am checking how it would look with inpainting/masks and/or depth maps. If I obtain interesting results I may make another video explaining it.
@Mindset2Work · 8 months ago
@@koalanation Have you tried to use TemporalNet?
@koalanation · 8 months ago
@@Mindset2Work I have a video with TemporalNet as the ControlNet, with version 1. I will try it out as well to see if it improves the background consistency.
@Dave102693 · 3 months ago
Is there a way just to extract poses so they can be used in 3D software? You know, like Mixamo?
@koalanation · 3 months ago
In the ControlNet Preprocessors there is a Save Pose Keypoints node that maybe does what you are looking for... but I am not familiar with 3D software.
@IntiArtDesigns · 8 months ago
Amazing. Can I download the completed workflow?
@koalanation · 8 months ago
Check out if you can get it from here: tinyurl.com/3terxszu
@mehradbayat9665 · 6 months ago
Can you explain what you mean when you mention copying the files inside the input folder of ComfyUI @ 1:03? Where exactly am I pasting the files that I downloaded?
@koalanation · 6 months ago
In the folder where you have installed ComfyUI there is a subfolder called 'input'. Copy the files there.
@mehradbayat9665 · 6 months ago
Is this applicable for masks as well? @@koalanation
@koalanation · 5 months ago
If you mean the OpenPose skeletons, you copy them into 'input > openpose_full' or any subfolder you create. This subfolder name will appear later in the Load Images (Upload) node you are going to use for the ControlNet.
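In other words, the expected layout looks roughly like this (the file names are placeholders; only the input folder and the openpose_full subfolder name come from the replies above):

ComfyUI/
  input/
    openpose_full/
      pose_0001.png
      pose_0002.png
      ...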
@_WhatsUp_bro_ · 8 months ago
Hello sir... How much GPU VRAM and RAM do you use in your computer for this AI editing?
@koalanation · 8 months ago
Good question! I am normally using a VM from vast.ai with 24 GB of VRAM. For most projects 12 or 16 GB are good enough, but AnimateDiff uses quite a lot, especially with a lot of frames. I cannot give you a recommended number, but some 'power' is needed for videos/animations.
@user-yb5es8qm3k · 7 months ago
In the Load Images (Upload) node, why can I only set the image load cap to 1? Setting it to 4 or 16 reports an error.
@koalanation · 7 months ago
Where is the error? In the Load Images (Upload) node? Try using 0, which tells it to process all the frames (any number you have). Also check that the directory points to an existing one.
@_WhatsUp_bro_ · 8 months ago
Background consistency????
@koalanation · 8 months ago
Good point, sir. You can have a separate workflow for the background by masking/inpainting the Lora character, and then rotoscoping/blending in a video editor. I did not want to show it in this video, to keep it simple. From my previous video with masking, I realized it is not always easy to explain. Maybe I will make a second part, or an inpainting tutorial.
@_WhatsUp_bro_ · 8 months ago
@@koalanation If you do this, it will be great. Have a great day, sir.
@Kisuke686 · 5 months ago
I never cease to be amazed that the tool is named ComfyUI while being anything but comfy to use.
@koalanation · 5 months ago
🤣🤣🤣
@Daduxio · 5 months ago
Did you try A1111? 😂😂
@Kisuke686 · 5 months ago
@@Daduxio using it daily, much comfier 👀
@Daduxio · 5 months ago
@@Kisuke686 I've been using A1111 for some time, and now Comfy looks cleaner and easier to me!
@user-pc7ef5sb6x · 8 months ago
When I load the GIF into FaceDetailer and into Combine, it outputs all the frames individually and not in GIF format.
@user-pc7ef5sb6x · 8 months ago
Never mind, I figured it out 🤣
@koalanation · 8 months ago
Good you found the answer! 😃 You need to convert the images from batch to list, otherwise the node will not work.
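As a sketch, the chain that reply describes would look roughly like this (node names as they appear in the Video Helper Suite and Impact Pack listed in the requirements; the exact wiring in the video may differ):

Load Images (Upload) -> ... -> Image Batch to Image List -> FaceDetailer -> Image List to Image Batch -> Video Combine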
@aitutor. · 8 months ago
Hi, may I ask if you are using an AI voice in this tutorial?
@koalanation · 8 months ago
Yes, sure. It is one of the voices from Clipchamp. Not sure if it is AI or not... but it does the job, I think.
@jiananlin · 8 months ago
I got the error 'ClipVisionModel' object has no attribute 'processor' from the IPAdapter node.
@koalanation · 8 months ago
Try using the clipvision model advised in the Instant Lora method: tinyurl.com/2wrtvnx4 If it does not work, try uninstalling the IP Adapter node and installing the IP Adapter Plus node (from cubiq) instead.
@NatWF · 6 months ago
Hello! How do I prevent the face from flickering?
@koalanation · 6 months ago
For this video I used the normal FaceDetailer, as it was the best option at the time. There is a new version for AnimateDiff now, which should be better. Otherwise, check out the method I use in my other video, which I think gives better face upscaling: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-gDUeqCErjt4.htmlsi=EyXRskjkwE_w_smJ
@rubenrodenburg4478 · 3 months ago
I don't know how to get the GeminiX Mix file into the program; I don't know which folder I'm supposed to put it in.
@koalanation · 3 months ago
Download the model (or any you like) from civit.ai. Copy the file into the models/checkpoints folder in your ComfyUI installation folder. Then, when you run ComfyUI, it should appear in the dropdown of the Load Checkpoint node.
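The resulting layout would be roughly as follows (the checkpoint file name is a placeholder for whatever you downloaded):

ComfyUI/
  models/
    checkpoints/
      geminixMix_v10.safetensors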
@rubenrodenburg4478 · 3 months ago
Thanks, it worked @@koalanation
@senkodan · 4 months ago
The KSampler is so fast for you! Mine is like 50 seconds per iteration. How can I optimise it?
@koalanation · 4 months ago
Well, in the video I speed up the rendering so you do not have to wait... In general, try smaller resolutions, then upscale, and minimise the number of ControlNets... but in the end, animations take a lot of rendering time.
@boratsk2052 · 7 months ago
Hi, I cannot find the node Image Batch to Image List, please help me.
@koalanation · 7 months ago
ImpactPack > Util > Image Batch to Image List
@boratsk2052 · 7 months ago
@@koalanation When I reinstalled the Manager, I got the message to install all missing nodes. I installed them and now I can add the node.
@koalanation · 7 months ago
@@boratsk2052 Top! Good that you can use it! Enjoy!
@user-bg7jq2vr2o · 7 months ago
I cannot find the 'Image List to Image Batch' node.
@koalanation · 7 months ago
ImpactPack > Operation > Image List to Image Batch
@ehsankholghi · 4 months ago
How can I load a video instead of the poses? I mean vid2vid in this workflow.
@koalanation · 4 months ago
The poses are used for the ControlNet. You can also do vid2vid by using a Load Video node, then a VAE Encode, and a more or less low denoise in the KSampler. Take into account that the features of the original video will remain to some extent. You can also try depth maps; they work quite nicely with AnimateDiff.
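Roughly, the vid2vid variant of the chain would be (node names from core ComfyUI and the Video Helper Suite; the denoise value is a starting-point assumption, not a setting from the video):

Load Video (Upload) -> VAE Encode -> KSampler (denoise ~0.5 instead of 1.0) -> VAE Decode -> Video Combine

The lower the denoise, the more of the original video survives; at 1.0 the source frames are fully re-noised and only the conditioning (prompt/ControlNet) guides the result.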
@ehsankholghi · 4 months ago
@@koalanation I'm very new to ComfyUI. Can you explain how I can switch your workflow to vid2vid?
@ehsankholghi · 4 months ago
@@koalanation Does ComfyUI have a limitation for rendering? I want to render vid2vid with 1000 frames (1000 PNG files) and I got this error: numpy.core._exceptions._ArrayMemoryError: Unable to allocate 6.43 GiB for an array with shape (976, 1024, 576, 3) and data type float32
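That allocation failure is simple arithmetic rather than a hard ComfyUI limit: the workflow tries to hold every decoded frame in one float32 array, so memory grows linearly with the frame count. A quick check of the numbers in the error (plain Python, nothing ComfyUI-specific):

frames, h, w, c = 976, 1024, 576, 3
gib = frames * h * w * c * 4 / 2**30   # float32 = 4 bytes per value
print(f"{gib:.2f} GiB")                # -> 6.43 GiB, matching the error

Splitting the run into smaller batches (say 100-200 frames each) and joining the clips afterwards keeps each allocation manageable.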
@whereincebu · 6 months ago
I hope you can read this one: I always get an error when prompting: Error occurred when executing IPAdapter: Currently, AutocastCPU only support Bfloat16 as the autocast_cpu_dtype
@koalanation · 6 months ago
Sorry, I have never seen this error message... the only thing I can recommend is to keep ComfyUI and all the nodes up to date.
@Antxnio · 5 months ago
I'm having the same issue; IP Adapter is not available for installation :( @@koalanation
@niko_g_o · 5 months ago
Same error :( Is there any solution?
@ANGEL-fg4hv · 7 months ago
What if I already have AnimateDiff in Auto1111?
@koalanation · 7 months ago
I think there are extensions for AnimateDiff (I just googled and found tinyurl.com/3ezcwdc3), so making animations should be OK. But the trick here is to combine it with the Instant Lora method, and I do not know how that can be done in Automatic1111. In the end, one of the strengths of ComfyUI is the flexibility to play with single elements and combine them, so you can do the type of tricks shown in the video.
@sairampv1 · 7 months ago
What is the minimum VRAM you think we need to run this?
@koalanation · 7 months ago
I think I read somewhere around 10 GB, but I guess depending on the settings 8 GB may be possible, and higher is better.
@sairampv1 · 7 months ago
@@koalanation Damn, I think I can't run it then (using a laptop 3050 with 4 GB VRAM). I will try to make it work somehow and give instructions if possible.
@alexlee4157 · 6 months ago
@@sairampv1 I run a simple AnimateDiff on a 3060 4 GB, but just 16 frames at 512x512 resolution, no upscaling.
@sairampv1 · 6 months ago
@@alexlee4157 Great, I lost track of this comment as I was busy with something else. Thanks to you, I will try the settings you recommended 🥰
@minamo4012 · 8 months ago
Is a 2080 good enough for this work, or is a 3080 the minimum?
@koalanation · 8 months ago
I do not know... I have been using a 3080 and a 4080 in a virtual machine. But I guess it is worth trying. Good luck, and let us know if it works!
@minamo4012 · 8 months ago
@@koalanation Oh, I'm just going to buy a new PC, but a 4080 is not needed for my normal work, so I'm thinking of a 3080. Is a 3080 enough?
@koalanation · 8 months ago
I guess it is OK... but I am using VMs, so I do not know what the difference will be.
@minamo4012 · 8 months ago
@@koalanation OK, thanks!
@mareck6946 · 8 months ago
You want as much VRAM as possible for higher resolutions, more ControlNets, etc.
@CerbyBite · 8 months ago
IP Adapter doesn't work for me... It is installed, the bin file is in the model folder of the node, clip vision is installed, and I restarted ComfyUI, but I can't find IPAdapter in the search. IP Adapter Plus doesn't seem to work either. Anyone know a fix?
@CerbyBite · 8 months ago
Never mind: after reinstalling again and again, AnimateDiff had to be reinstalled, because for some reason it was gone, and then IP Adapter appeared in ComfyUI.
@koalanation · 8 months ago
If you install the IP Adapter Plus node, the IP Adapter bin file needs to be copied from the IP Adapter models folder into the models folder of IP Adapter Plus... I think it is best to copy/follow the links from Aloe Vera's method page on civit.ai.
@CerbyBite · 8 months ago
@@koalanation Did that. I think I had to update the Impact Pack, but it kind of didn't update, so I had to reinstall that too. Now I have to figure out why KSampler gets a "None" class passed to "to", or something like that. I'm going to try with a fresh workflow.
@WalkerW2O · 3 months ago
Hi @koalanation @CerbyBite, I am having a problem with IPAdapter at 7:27 in the tutorial: "Error occured when executing IPAdaptor", "Clipvision forward() got an unexpected keyword argument 'Output_hidden_status'". Do you know what happened, or how I can solve this problem? Thanks in advance.
@fdn3435 · 5 months ago
Hi, "Load IPAdapter" is not working O_O Any suggestions?
@koalanation · 4 months ago
Update ComfyUI and/or the node. It is working in current workflows.
@_WhatsUp_bro_ · 8 months ago
Hello sir... Please upload a tutorial about Stable WarpFusion + no flickering.
@koalanation · 8 months ago
Thanks for the suggestion. I like the results from WarpFusion. I need to play a little bit more with it to see if I can make something nice to show.
@_WhatsUp_bro_ · 8 months ago
@@koalanation OK. Thanks for the reply.
@francsharma7276 · 6 months ago
Bro, can we animate a video without a human?
@koalanation · 6 months ago
Yes, you can! But you have to figure out how to control the animation. With people or humanoid figures (e.g. a robot) it is easier to do vid2vid with OpenPose. For other subjects, you need to use edge detectors (canny, HED, lineart) or depth/normal maps. Everything is possible, but it may require different techniques.
@rajageethana5063 · 8 months ago
Me watching this with my Intel i3 😅
@koalanation · 8 months ago
Yes, AnimateDiff asks for power... Anyway, for sporadic use, consider rental sites to practice and learn. Myself, I have no powerful computer and use a VM from vast.ai; RunPod is also cheap. ThinkDiffusion or RunDiffusion are very easy to set up, but more expensive... still cheaper than buying a new computer with a high-end GPU (unless you really plan to use it a lot).
@kingpicolo111 · 2 months ago
Can I do this on my GTX 1650 6 GB GPU?
@koalanation · 2 months ago
Not sure. Some time ago I think I responded that the requirements were around 10 GB, but lower could be possible. Try to limit the number of frames and the resolution, and do not start stacking ControlNets either. I would also advise you to check how to use AnimateDiff with LCM. I have a couple of videos about using it, but limit the difficulty of the animation, as I tend to show fairly complex workflows. Cheers!