I've been frustrated with AI voiceovers for a lot of the content I watch, but now that I know you made this in a non-native language it makes sense, and I'm grateful as well. Muchas muchas gracias!
Seems to be related to one of the ControlNet or AnimateDiff models. Try changing or bypassing the ControlNet and AnimateDiff nodes one by one and see if the workflow runs. Once you have found where the issue is, check that the model is correct.
Mostly OK tutorial, but the main issue I have is that this maybe needs a part 1 showing all the files you downloaded for the Load Checkpoint, LoRA Loader model, etc., because if you don't have that stuff you're just left scrambling on Civitai trying to find the same files you use, and that's annoying and confusing.
Got it. My assumption is that the viewer already knows it, as I have shown how it is done (for other models) in other videos. But I see that is not true for everyone. However, if I always show it, it may become repetitive... I may make a short video showing how it is done and refer to it from other videos.
Well, I cut quite a bit to show only the main steps, otherwise the video is mostly rendering... For moving people, ControlNet with a reference video of someone walking is probably the way to go. It should also be possible with Motion Director, I believe, but I need to find the time to try it and see if the results are 👍
Hi! This node was supposed to simplify combining the IP adapter and ClipVision models, but on some systems it seems to cause more problems than it solves. My advice would be to use the IPAdapter Model Loader and CLIPVision Loader separately, and connect them (and the model) independently to the IPAdapter node, roughly as in the sketch below.
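Not the exact workflow from the video, just a rough sketch of the wiring I mean, written as a ComfyUI API-format prompt fragment. The node class names ("IPAdapterModelLoader", "CLIPVisionLoader", "IPAdapterAdvanced"), node IDs and file names are assumptions and depend on your ComfyUI_IPAdapter_plus version; check the names your install exposes.

```python
# Rough sketch (assumed node/file names): IP adapter wired without the Unified Loader,
# expressed as a ComfyUI "API format" prompt fragment.
import json

prompt_fragment = {
    "10": {  # load the IP adapter model on its own
        "class_type": "IPAdapterModelLoader",
        "inputs": {"ipadapter_file": "ip-adapter-plus_sd15.safetensors"},
    },
    "11": {  # load the CLIP Vision model on its own
        "class_type": "CLIPVisionLoader",
        "inputs": {"clip_name": "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors"},
    },
    "12": {  # apply node: model, ipadapter, clip_vision and image connected directly
        "class_type": "IPAdapterAdvanced",
        "inputs": {
            "model": ["4", 0],         # output of your checkpoint/AnimateDiff loader
            "ipadapter": ["10", 0],    # from IPAdapterModelLoader
            "clip_vision": ["11", 0],  # from CLIPVisionLoader
            "image": ["13", 0],        # your reference image node
            "weight": 0.8,
            "start_at": 0.0,
            "end_at": 1.0,
        },
    },
}

print(json.dumps(prompt_fragment, indent=2))
```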
You may want to increase the tile settings, but yeah, the method does change things. Also try to use a checkpoint corresponding to the style of the original image (realistic, cartoon, anime...) to get better adherence. Those are some ideas... obviously, image upscaling or a second AnimateDiff pass may also help.
With AnimateDiff with context options it is possible to do animations as long as your machine can handle. If you do, for example, 8 fps for a 10-second clip, you need 80 frames. The number of frames is defined by the batch size of an Empty Latent (or equivalent) connected to the KSampler, as in the quick check below.
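Just to make the arithmetic explicit (plain Python; the 10-second duration is only an example):

```python
# The batch size of the Empty Latent Image node is the number of frames
# AnimateDiff will animate, so for a target clip length:
fps = 8              # frame rate you plan to combine the video at
duration_s = 10      # desired clip length in seconds (example value)
frames = fps * duration_s
print(frames)        # 80 -> use this as the batch_size of the empty latent
```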
@koalanation I'm a beginner and really don't understand how to go about it, especially since I have no experience in animation. If you have the time, could you possibly create a tutorial video on how to make a short film using AnimateDiff? I would really appreciate it and am very eager to learn more. Thank you so much!
@koalanation Could I ask, in case my computer isn't powerful enough, would it be possible to generate a few seconds of video at a time and then use other software to stitch them together into a short film that's a few minutes long?
Thanks for the tutorial! Question: is it possible to feed Comfy a reference video for it to animate an image using said video as reference? Like, say I have an image of a character, and I give Comfy a video of someone skateboarding, is there a method with which I could get Comfy to animate the character skateboarding based on the video? Cheers and thanks in advance!
Yes! You can use a reference video with ControlNets such as OpenPose, depth, lineart, etc. to guide the composition of each frame. There are many videos and tutorials about it.
@koalanation Thanks for replying! The most I've been able to find are tutorials on animating a reference image using prompts, or generating a video using another video as reference, also using prompts. I have yet to find one where they animate a reference image based on a reference video; guess I just have to look harder though!
Check out: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-XO5eNJ1X2rI.html. Take into account that this one is rather complex, with all the samplers and so on. Here: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-Ka4ENd63VBo.html, I think it is clearer, but take into account that the IP Adapter node no longer works like in the video.
It was a little problematic to install all these modules and nodes. The WebUI crashed and I had to update it, recover the venv and also reinstall ComfyUI's dependencies... it took hours. Not for newbies.
@VolodymyrKarpov GPU: NVIDIA GeForce RTX 3060 / 8 GB VRAM. It takes about 1 hour for 2 seconds at a frame rate of 30, or 1 minute per picture. It depends on the models and nodes used, on the settings (steps) and more.
Well, I found that it's better not to use the Stable Diffusion WebUI with the ComfyUI extension, but to use a separate standalone installation of ComfyUI with its own environment.
Good that you could find a workaround. Sometimes all the custom nodes and models can be tricky in ComfyUI, with all the updates and constant changes. Thanks for providing such good advice to others!
I'm getting the following error on the KSampler before the VAE Encode node: ModelPatcherAndInjector.patch_model() got an unexpected keyword argument 'lowvram_model_memory'. How do I fix this? Edit: I'm using Stability Matrix to run ComfyUI, if that's relevant information.
Hi, there was an issue raised in the AnimateDiff GitHub repo; they said it should have been fixed. Try updating both the AnimateDiff Evolved nodes and ComfyUI. I do not know how that is done in Stability Matrix... in ComfyUI, I normally do it via the Manager.
Hi! It works with the V3 LoRA adapter. I am not sure if that is the way it was intended, but it does something. I have tried to use the RGB SparseCtrl but I do not manage to get it to work nicely... you can also switch to version 3 and fine-tune the results, but obviously generations will take longer.
I carefully followed along and really want to get this going, but I'm getting a KSampler error: 'Given groups=1, weight of size [320, 5, 3, 3], expected input[32, 4, 96, 64] to have 5 channels, but got 4 channels instead'
I couldn't make it work :( I get this error every time: Error occurred when executing ADE_ApplyAnimateDiffModel: 'MotionModelPatcher' object has no attribute 'model_keys'
@koalanation Oh... I wrote the solution here but I don't know why it was not added... In my situation, there was a problem when I was updating AnimateDiff from the Manager. To fix it, remove AnimateDiff from custom_nodes and get AnimateDiff from the repo, then place it in custom_nodes. Works for me.
When I click Queue Prompt it says "TypeError: this is undefined" and nothing happens. I have all the required nodes/models, and ComfyUI is updated/restarted. Can you please help?
Hi! I have never encountered this error... googling it points to an issue with the MixLab nodes... not sure if that is your case. Maybe try disabling or uninstalling custom nodes to see if one of them is affecting ComfyUI.
4:54 I got to this part of the tutorial; my workflow was at 88% in the KSampler and then the word "reconnecting" came over the screen. Terminal output: [AnimateDiffEvo] - INFO - Using motion module AnimateLCM_sd15_t2v.ckpt:v2. Unloading models for lowram load. UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown warnings.warn('resource_tracker: There appear to be %d ' (I have a 128 GB RAM computer.)
Hi! It looks a bit odd to have 128 GB and stop at 88%... sometimes ComfyUI crashes when the CPU is overloaded... try testing with fewer or smaller frames to see if that is the case.
@koalanation When a video loader box is present you can go to select_every_nth: if you put 1, it's going to generate every frame of the video; if you choose 2, it's going to generate every other frame... Since you don't have that box, what is the equivalent in your workflow?
Hey, I'm getting this error: "Could not allocate tensor with 828375040 bytes. There is not enough GPU video memory available!" I have an AMD RX 6800 XT with 16 GB VRAM, any workaround or fix? Thanks
Hey! Not sure what the messages are with AMD, but maybe you can try first reducing the size of the latents and/or reducing the batch size. Looks like some limitation with the VRAM.
Great video, you skipped some steps but it's still detailed. Question: do we not need to change the text prompt for each randomized pic? Also, why did you use the Load Video (Path) node for an image?
Hi! In principle you do not need to change it, but you can, of course. Take into account that the 'tile' ControlNet is rather strong, so you cannot do big transformations. The Load Video node allows you to use http addresses, but the Load Image node does not (at least it did not work for me). That is why I use it for the randomized image. If you prefer the Load Image node, you could download the file first, as in the snippet below.
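A possible workaround (not shown in the video) if you want to stick with the Load Image node: fetch the web image into ComfyUI's input folder first. The URL and folder path here are placeholders you'd adjust to your own setup.

```python
# Hypothetical workaround: download a web image into ComfyUI's input folder so the
# regular Load Image node can pick it up (URL and path are placeholders).
import urllib.request
from pathlib import Path

url = "https://example.com/random_image.jpg"   # placeholder URL
input_dir = Path("ComfyUI/input")               # adjust to your install location
input_dir.mkdir(parents=True, exist_ok=True)
urllib.request.urlretrieve(url, input_dir / "random_image.jpg")
```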
Great video! I'm new to all this and I'm wondering if there is a way to keep the details. I'm trying to use a city skyline as img2video, and there, for example, a lot of windows are getting removed.
That seems difficult with this method if the windows are small. Reducing the scale factor may work. Otherwise, some trick with masks and ControlNets may work, but I have not really tried it with SparseCtrl.
I'm getting an error with IPAdapterUnifiedLoader; it says the ClipVision model was not found. I've downloaded a few versions and put them in my clip_vision folder but I'm still getting the error. Is there a specific one for this node?
Sometimes the IP adapter setup is confusing... try to load the IP adapter model and ClipVision separately (without using the Unified Loader), following the instructions in the IP adapter repo. I like plus and ViT-G. github.com/cubiq/ComfyUI_IPAdapter_plus?tab=readme-ov-file
@koalanation I got this IP Adapter ClipVision error as well. What can we do there, concretely? It seems that an IP Adapter has to be fed into the IPAdapter Unified Loader's left input parameter. But where does it come from? And why is it working without that on your machine?
@ForChiddlers It only needs the model as input. The preset should load the IP adapter and ClipVision models, but the node sometimes messes up. In case of issues, it is better to use the CLIPVision Loader and the IPAdapter Model Loader individually and connect them directly to the IPAdapter apply node (without the Unified Loader).
Kickass video, man!!! I'm trying to learn cool AI like this for music visuals, this is 10/10 cool. Gonna also do Blender renders as bases and use AI to make them trippy. Do you have any tutorials for video to video?
It appears the results in the tutorial are also faded and over-brightened, but at the end when you show examples they look fine. Did you find a fix, or was it in your post-processing?
Depending on the source image, settings, etc., the output might be too dark or too bright, as you say. There are nodes that can correct that; I like the Image Filter Adjustments node. But I think it is better to use a regular video editor; it is faster and easier.
Hi! Thanks! I am using an RTX 4090/3090 or A5000 via RunPod, which generates the video rather fast. You can try decreasing the number of frames and also the resolution of the images. Try doing interpolation with 3 frames instead of 2; see the quick calculation below.
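To illustrate the idea, reading "3 frames instead of 2" as the interpolation multiplier (e.g. in a frame interpolation node such as RIFE VFI); the numbers are illustrative, not taken from the video:

```python
# Rough back-of-the-envelope: generating fewer frames and interpolating more
# keeps the same output frame rate while cutting the expensive sampling work.
target_fps = 24
duration_s = 4

for multiplier in (2, 3):
    generated = target_fps * duration_s // multiplier   # frames the KSampler must produce
    print(f"multiplier {multiplier}: sample {generated} frames, "
          f"interpolate up to {generated * multiplier} at {target_fps} fps")
```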