
Easy Image to Video with AnimateDiff (in ComfyUI)  

Koala Nation
2.9K subscribers
32K views

Published: 2 Oct 2024

Comments: 99
@koalanation 4 months ago
If you prefer it without the AI voice, check out the version with the original voice (in Spanish): ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-GsOTnGeXCCg.html
@tech3653 2 months ago
Any tutorial for easy offline voice translation using AI?
@Kratos30000 2 months ago
Which GPU do you need for these kinds of animations?
@DanielThiele 1 month ago
Honestly, it's really good for an AI voice. No hablo español, señor. Muchas gracias. :D
@koalanation 1 month ago
@@DanielThiele 🤣🤣🤣
@chrisfletcher2646 24 days ago
I've been frustrated with AI voiceovers for a lot of the stuff I watch, but now that I know you did this in a non-native language it makes sense, and I'm grateful as well. Muchas muchas gracias!!!!!!!
@sohamkokate5794 8 days ago
Hi! I'm getting "Given groups=1, weight of size [320, 5, 3, 3], expected input[16, 4, 96, 64] to have 5 channels, but got 4 channels instead". Can you help?
@koalanation 7 days ago
Seems to be related to one of the ControlNet or AnimateDiff models. Try changing or bypassing the ControlNet and AnimateDiff nodes one by one and see if the workflow runs. Once you have found where the issue is, check that the model is correct.
@VanessaSmith-Vain88 4 months ago
Can you set up the whole thing for us to use?
@lildrill 1 month ago
🤣😂
@BeratLjumani 12 days ago
Mostly OK tutorial, but the main issue I have is that it may need a part 1 showing all the files you downloaded for the checkpoint loader, LoRA loader model, etc., because if you don't have that stuff you're just left scrambling on Civitai to try to find the same files you used, and that's annoying and confusing.
@koalanation 11 days ago
Got it. My assumption is that the viewer already knows this, as I have shown how it is done (for other models) in other videos. But I see that is not true for everyone. However, showing it every time may become repetitive... I may make a short video showing how it is done and refer to it from other videos.
@boo3321 4 months ago
Very easy tutorial, it only took me HOURS to do. I'm curious how to make people walk or move with ComfyUI.
@koalanation 4 months ago
Well, I cut quite a bit to show only the main steps, otherwise the video is mostly rendering... For moving people, ControlNet with a reference video of someone walking is probably the way. It should also be possible with MotionDirector, I believe, but I need to find the time to try it and see if the results are 👍
@lechu89 16 days ago
Hi! I'm getting "IPAdapterUnifiedLoader - ClipVision model not found." Can you help?
@koalanation 15 days ago
Hi! This node was supposed to simplify combining the IP adapter and ClipVision models, but on some systems it seems to give more problems than solutions. My advice would be to use the IP adapter model loader and the ClipVision model loader separately, and connect them (and the model) independently to the IP Adapter node.
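To make that wiring concrete, below is a minimal Python sketch of an API-format workflow fragment. The node class names (IPAdapterModelLoader, CLIPVisionLoader, IPAdapterAdvanced) and the model file names are assumptions based on the ComfyUI_IPAdapter_plus repo and may differ between versions; treat it as a sketch, not the exact setup from the video.

    # Hypothetical API-format fragment: load the IP adapter and ClipVision models
    # with separate loaders and connect them straight to the apply node,
    # bypassing the Unified Loader entirely.
    ipadapter_fragment = {
        "10": {"class_type": "CheckpointLoaderSimple",
               "inputs": {"ckpt_name": "sd15_checkpoint.safetensors"}},                  # assumed file name
        "11": {"class_type": "IPAdapterModelLoader",
               "inputs": {"ipadapter_file": "ip-adapter-plus_sd15.safetensors"}},        # assumed file name
        "12": {"class_type": "CLIPVisionLoader",
               "inputs": {"clip_name": "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors"}},  # assumed file name
        "13": {"class_type": "IPAdapterAdvanced",   # the "apply" node
               "inputs": {"model": ["10", 0],
                          "ipadapter": ["11", 0],
                          "clip_vision": ["12", 0],
                          "image": ["20", 0],       # "20" = hypothetical image-loader node
                          "weight": 0.8}},
    }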
@LuigiEspositoGraphic 1 month ago
It works well, but the details are much lower than in the original image. How can I fix that?
@koalanation 26 days ago
You may want to increase the tile settings, but yeah, the method does change things. Also try using a checkpoint corresponding to the style of the original image (realistic, cartoon, anime...) to get better adherence. Those are some ideas... obviously, image upscaling or a second AnimateDiff pass may also help.
@VanessaSmith-Vain88 4 months ago
Yeah, that was really easy, piece of cake 🤣
@koalanation 4 months ago
Yes, it was 🤣
@ManuelViedo 3 months ago
"easy"
@koalanation 3 months ago
🤣🤣🤣
@user-Cyrine 3 months ago
Love your videos so much! Can you make a tutorial video on FlexClip’s AI tools? Really looking forward to that!
@koalanation 3 months ago
Thanks for the idea!
@tianhayamizu8815 7 days ago
Hello, I often watch your videos to learn. Could you explain how to create a long animation, like one over 10 seconds, using AnimateDiff? Thank you!
@koalanation 7 days ago
With AnimateDiff with context options it is possible to make animations as long as your machine can handle. If you use, for example, 8 fps, you need 80 frames for 10 seconds. The number of frames is defined by the batch size in an Empty Latent (or equivalent) connected to the KSampler.
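As a rough illustration of that calculation, below is a minimal Python sketch that sets the frame count as the batch size in an API-format workflow export. The file name animatediff_workflow_api.json and the use of the EmptyLatentImage node are assumptions, not something shown in the video; adapt them to your own workflow.

    import json

    fps = 8
    duration_s = 10
    frames = fps * duration_s  # 80 frames for a 10-second clip at 8 fps

    # Load an API-format workflow exported from ComfyUI (hypothetical file name).
    with open("animatediff_workflow_api.json") as f:
        workflow = json.load(f)

    # Set the batch size of every EmptyLatentImage node to the frame count;
    # this is what defines how many frames the KSampler will produce.
    for node in workflow.values():
        if node.get("class_type") == "EmptyLatentImage":
            node["inputs"]["batch_size"] = frames

    with open("animatediff_workflow_api.json", "w") as f:
        json.dump(workflow, f, indent=2)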
@tianhayamizu8815 6 days ago
@@koalanation I'm a beginner and really don't understand how to go about it, especially since I have no experience in animation. If you have the time, could you possibly create a tutorial video on how to make a short film using Animatediff? I would really appreciate it and am very eager to learn more. Thank you so much!
@tianhayamizu8815 6 days ago
@@koalanation Could I ask, in case my computer isn't powerful enough, would it be possible to generate a few seconds of video at a time and then use other software to stitch them together into a short film that's a few minutes long?
@koalanation 6 days ago
@@tianhayamizu8815 You can always make short clips and then stitch them together with a regular video editor.
@koalanation 6 days ago
This is maybe too detailed for you, but give it a try: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-mQxg7srKjxI.htmlsi=X5XLYro5PQ1pfOYE
@estebanmoraga3126 3 months ago
Thanks for the tutorial! Question: Is it possible to feed Comfy a reference video for it to animate the image using said video as reference? Like, say I have an image of a character and I give Comfy a video of someone skateboarding, is there a method with which I could get Comfy to animate the character skateboarding based on the video? Cheers and thanks in advance!
@koalanation 3 months ago
Yes! You can use a reference video and use ControlNets such as OpenPose, depth, lineart, etc., to guide the composition of each frame. There are many videos and tutorials about it.
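For a rough idea of what that looks like in an API-format workflow, here is a minimal Python sketch. The class names ControlNetLoader and ControlNetApplyAdvanced are core ComfyUI nodes, OpenposePreprocessor is assumed to come from the comfyui_controlnet_aux custom nodes, and the model file and node ids are assumptions rather than values from the video.

    # Hypothetical fragment: run an OpenPose preprocessor on the reference-video
    # frames and feed the result into a ControlNet that conditions the sampler.
    controlnet_fragment = {
        "30": {"class_type": "OpenposePreprocessor",   # from comfyui_controlnet_aux (assumed)
               "inputs": {"image": ["40", 0]}},        # "40" = hypothetical video-frames loader
        "31": {"class_type": "ControlNetLoader",
               "inputs": {"control_net_name": "control_v11p_sd15_openpose.pth"}},  # assumed file name
        "32": {"class_type": "ControlNetApplyAdvanced",
               "inputs": {"positive": ["50", 0],       # "50"/"51" = hypothetical prompt encoders
                          "negative": ["51", 0],
                          "control_net": ["31", 0],
                          "image": ["30", 0],
                          "strength": 0.8,
                          "start_percent": 0.0,
                          "end_percent": 1.0}},
    }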
@estebanmoraga3126 3 months ago
@@koalanation Thanks for replying! The most I've been able to find are tutorials on animating a reference image using prompts, or generating a video using another video as reference, also using prompts. I have yet to find one where they animate a reference image based on a reference video; guess I just have to look harder, though!
@koalanation 3 months ago
Check out: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-XO5eNJ1X2rI.html. Take into account this is rather complex, with all the samplers and so on. Here: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-Ka4ENd63VBo.html, I think it is clearer, but take into account that the IP Adapter node no longer works like in the video.
@dschony 2 months ago
It was a little problematic to install all these modules and nodes. The WebUI crashed and I had to update it, recover the venv and also reinstall ComfyUI's dependencies... it took hours. Not for newbies.
@dschony 2 months ago
OK, I found that, compared to the time generation takes, the little time needed to fix the environment is nothing. But I like the tutorial ;)
@VolodymyrKarpov 1 month ago
@@dschony Hey :) Which GPU do you use, and how much time does it take to generate a 1-2 second video?
@dschony 1 month ago
@@VolodymyrKarpov GPU: NVIDIA GeForce RTX 3060 / 8 GB VRAM. It takes about 1 hour for 2 seconds at a frame rate of 30, or about 1 minute per frame. It depends on the models and nodes used, the settings (steps) and more.
@dschony 1 month ago
Well, I found that it's better not to use the Stable Diffusion WebUI with the ComfyUI extension, but to use a separate standalone installation of ComfyUI with its own environment.
@koalanation 1 month ago
Good that you could find a workaround. With all the updates and constant changes, custom nodes and models can sometimes be tricky in ComfyUI. Thanks for providing such good advice to others!
@Shahriar.H 1 month ago
I'm getting the error "ModelPatcherAndInjector.patch_model() got an unexpected keyword argument 'lowvram_model_memory'" on the KSampler before the VAE Encode node. How do I fix this? Edit: I'm using Stability Matrix to run ComfyUI, if that's relevant information.
@koalanation 26 days ago
Hi, there was an issue raised on the AnimateDiff GitHub; they said it should have been fixed. Try updating both the AnimateDiff Evolved nodes and ComfyUI. I do not know how that is done in Stability Matrix... in ComfyUI, I normally do it via the Manager.
@hamster_poodle 4 months ago
Hello! Does SparseCtrl work properly with AnimateDiff LCM, not just V3?
@koalanation 4 months ago
Hi! It works with the V3 LoRA adapter. I am not sure if that is the way it was intended, but it does something. I have tried to use the RGB sparse model but I cannot get it to work nicely... you can also switch to version 3 and fine-tune the results, but obviously generations will take longer.
@marcdevinci893 2 months ago
I followed along carefully and really want to get this going, but I'm getting a KSampler error: 'Given groups=1, weight of size [320, 5, 3, 3], expected input[32, 4, 96, 64] to have 5 channels, but got 4 channels instead'
@koalanation 2 months ago
Try changing or bypassing the ControlNet and AnimateDiff nodes one by one and see if the workflow runs.
@HOT4C1DR41N 3 months ago
I couldn't make it work :( I get this error every time: Error occurred when executing ADE_ApplyAnimateDiffModel: 'MotionModelPatcher' object has no attribute 'model_keys'
@koalanation 3 months ago
Seems odd... are you using AnimateLCM_t2v? Maybe try another model to see if it runs, or use the Gen 1 AnimateDiff Loader.
@katonbunshin5935 3 months ago
I have the same issue.
@koalanation 3 months ago
Use the model at civitai.com/models/452153/animatelcm and make sure the nodes and ComfyUI are up to date.
@katonbunshin5935 3 months ago
@@koalanation Oh... I wrote a solution here but I don't know why it was not added... In my situation, there was a problem when I was updating AnimateDiff from the Manager. To fix it, remove AnimateDiff from custom_nodes and get AnimateDiff from the repo, then place it in custom_nodes - that worked for me.
@koalanation 3 months ago
OK! I have not seen it either... anyway, sometimes these things happen during updates.
@jaydenvincent2007 3 months ago
When I click Queue Prompt it says "TypeError: this is undefined" and nothing happens. I have all the required nodes/models, and ComfyUI is updated and restarted. Can you please help?
@koalanation 3 months ago
Hi! I have never encountered this error... googling it points to an issue with the MixLab nodes... not sure if that would be your case. Maybe try disabling or uninstalling custom nodes to see if one of them is affecting ComfyUI.
@policani 2 months ago
Sparse Control Scribble is also difficult to search for. I have no results for all three words, and three results for Control Scribble.
@koalanation 2 months ago
The models are here: huggingface.co/guoyww/animatediff/tree/main
@Rachelcenter1 1 month ago
4:53 Those blur effects you put over your video make it hard to see what you're doing.
@koalanation 1 month ago
Yep, I got too enthusiastic with the video effects when editing... I promise not to overdo it next time.
@doctorrisk 1 month ago
Is it possible to input 2 pictures and have AI make a video transitioning from one to the other?
@koalanation 26 days ago
Yes, you can transition using masks. Check out my morphing and audioreactive videos to get an idea.
@Rachelcenter1 1 month ago
4:54 I got to this part of the tutorial, my workflow was at 88% on the KSampler, and then the word "reconnecting" came over the screen. Terminal output: [AnimateDiffEvo] - INFO - Using motion module AnimateLCM_sd15_t2v.ckpt:v2. Unloading models for lowram load. UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown. (I have a computer with 128 GB of RAM.)
@koalanation 1 month ago
Hi! It looks a bit odd to have 128 GB and stop at 88%... sometimes ComfyUI crashes when the CPU is overloaded... try testing with fewer or smaller frames to see if that is the case.
@Rachelcenter1 1 month ago
@@koalanation When a video loader box is present you can go to select_every_nth: if you put 1 it is going to generate every frame of the video; if you choose 2 it is going to generate every other frame of the video... Since you don't have that box, what is the equivalent in your workflow?
@Rachelcenter1 1 month ago
@@koalanation I tried 16 frames and all it gave me was an all-black box.
@vl7823 3 months ago
Hey, I'm getting the error "Could not allocate tensor with 828375040 bytes. There is not enough GPU video memory available!" I have an AMD RX 6800 XT with 16 GB VRAM, any workaround or fix? Thanks
@koalanation 3 months ago
Hey! Not sure what the messages mean on AMD, but maybe you can first try reducing the size of the latents and/or the batch size. Looks like some limitation with the VRAM.
@VolodymyrKarpov 1 month ago
Hey! Were you able to fix it and get it running?
@user-cb4jx8og2k 3 months ago
Great video, you skipped some steps but it is still detailed. Question: do we not need to change the text prompt for each randomized pic? Also, why did you use the Load Video (Path) node for an image?
@koalanation 3 months ago
Hi! In principle you do not need to change it, but you can, of course. Take into account that the 'tile' ControlNet is rather strong, so you cannot do big transformations. The Load Video node allows you to use http addresses, but the Load Image node does not (at least it did not work for me). That is why I use it for the randomized image.
@SiverStrkeO1 3 months ago
Great video! I'm new to all of this and I'm wondering if there is a way to keep the details. I'm trying to use a city skyline for image-to-video, and there, for example, a lot of windows are getting removed.
@koalanation 3 months ago
That seems difficult with this method if the windows are small. Reducing the scale factor may work. Otherwise, some trick with masks and ControlNets may work, but I have not really tried it with SparseCtrl.
@MarcusBankz68 3 months ago
I'm getting an error with IPAdapterUnifiedLoader; it says the ClipVision model was not found. I've downloaded a few versions and put them in my clip_vision folder, but I'm still getting the error. Is there a specific one for this node?
@koalanation 3 months ago
Sometimes the IP adapter setup is confusing... try using the IP adapter and ClipVision loaders separately (without the Unified Loader), following the instructions of the IP adapter repo. I like Plus and ViT-G. github.com/cubiq/ComfyUI_IPAdapter_plus?tab=readme-ov-file
@ForChiddlers 9 days ago
@@koalanation I got this IP Adapter ClipVision error as well. What can we do there, concretely? It seems that an IP Adapter has to be fed into the IPAdapter Unified Loader's left input parameter. But where does it come from? And why does it work without that on your machine?
@koalanation 9 days ago
@@ForChiddlers It only needs the model as input. The preset should load the IP adapter and ClipVision models, but the node sometimes messes up. In case of issues, it is better to use the ClipVision loader and the IPAdapter loader individually, and connect them directly to the IPAdapter apply node (without the Unified Loader).
@ristom1 2 months ago
Kickass video, man!!! I'm trying to learn cool AI like this for music visuals, this is 10/10 cool. I'm also going to do Blender renders as bases and use AI to make them trippy. Got any tutorials for video-to-video?
@koalanation 1 month ago
Check out the morphing and audioreactive videos. Using masks is more complicated but gives you more power to play with.
@ristom1 1 month ago
@@koalanation thank you!!!
@frankliamanass9948 4 months ago
It all worked and animates the image, but every time it comes out very bright and faded. Any suggestions on how to fix it?
@frankliamanass9948 4 months ago
It appears the results in the tutorial are also faded and over-brightened, but at the end when you show examples they look fine. Did you find a fix, or was it in your post-processing?
@koalanation 4 months ago
Depending on the source image, settings, etc., the image might be too dark or too bright, as you say. There are nodes that can correct that; I like Image Filter Adjustments. But I think it is better to use a regular video editor, it is faster and easier to use.
@joonienyc 4 months ago
Hey buddy, how did you copy the second KSampler with all its connections duplicated, at timeline 4:40?
@koalanation 4 months ago
Copy normally with Ctrl+C, then paste with Ctrl+Shift+V.
@joonienyc 4 months ago
@@koalanation ty my man
@SemorezX 29 days ago
Awesome work, thank you so much.
@kizentheslayer 2 months ago
Where do I save the AnimateLCM model to?
@koalanation 2 months ago
models/animatediff_models
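A quick way to check that the file landed in the right place, assuming a default ComfyUI folder layout and the AnimateLCM_sd15_t2v.ckpt file name mentioned elsewhere in this thread:

    from pathlib import Path

    # Path from the reply above, relative to the ComfyUI root folder.
    model_path = Path("ComfyUI/models/animatediff_models/AnimateLCM_sd15_t2v.ckpt")
    print("found" if model_path.exists() else f"missing: place the checkpoint at {model_path}")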
@YING180 3 months ago
Thank you for your video, it's very helpful.
@elifmiami 2 months ago
I was wondering how you got the node numbers to show on the boxes?
@koalanation 2 months ago
If you go to the Manager, in the left column you will see the 'Badge' option. There you can set the node number to appear over the node.
@elifmiami 2 months ago
@@koalanation Thank you!
@generalawareness101 2 months ago
Yeah, no to anything SD1.5.
@AB-wf8ek 2 months ago
Did SD1.5 hurt your feelings?
@bordignonjunior 4 months ago
Geez, this takes long to run. Which GPU do you have? Amazing tutorial!!!
@koalanation 3 months ago
Hi! Thanks! I am using an RTX 4090/3090 or an A5000 via RunPod, which generates the video rather fast. You can try decreasing the number of frames and also the resolution of the images. Try doing interpolation with 3 frames instead of 2.
@kargulo 1 month ago
I have a 4060 with 16 GB, and 50% is taking 15 min; that is the first generation, I hope the next one will be faster :)
@SeanOdyssey 1 month ago
Thank you!