THERE IS NO NEED TO DOWNLOAD THE FORKED EXTENSIONS ANYMORE, THE NATIVE ANIMATEDIFF & CONTROLNET WORK TOGETHER AGAIN! New Technique Showcasing The Prompt Travel Feature In AnimateDiff Here: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-gLxQEyaDBt8.html
You are the only channel that explains why the animation changes halfway into something different. I was going crazy; I had no idea how to fix it. Thank you!
Thanks for sharing the update on the ControlNet fix. Just to let you know, for transitions: prompt travel is a new feature added to the AnimateDiff WebUI extension, and it can now handle transitions between different poses seamlessly too.
After disabling sd-webui-controlnet and sd-webui-animatediff, I no longer see the AnimateDiff tab at the bottom. I installed the two other extensions, sd-webui-controlnet-animatediff and sd-webui-animatediff-for-ControlNet. Why doesn't the AnimateDiff tab show up? :(
Curious whether the "AnimateDiff v1.6.0 for ControlNet" tab refuses to show up for anyone else. I tried reinstalling both of them (AniDiff 1.6.0 and CtrlNet v1.4.0 AniDiff) multiple times, deselecting other extensions, disabling the original AniDiff and ControlNet, and restarting the cmd window and my PC. Only CtrlNet v1.4.0 AniDiff shows up.
For me it doesn't generate a video/GIF/frames... it just does what it always does and generates individual still frames. I have done everything in the video and everything looks the same/seems fine. Does anyone know what is wrong? I am using SD 1.5.
You may need to upgrade xformers, pip, and PyTorch. AnimateDiff should work on SD 1.5, but if you have been running older versions of SD previously, like me, some components may need updating.
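In case it helps, a minimal sketch of those upgrade commands, run with the WebUI's venv activated (the cu118 index URL is an assumption about your CUDA build; pick the one matching yours):

```
REM Activate the venv first (venv\Scripts\activate on Windows), then:
python -m pip install --upgrade pip
REM Upgrade PyTorch from the CUDA 11.8 wheel index (assumed CUDA build)
pip install --upgrade torch torchvision --index-url https://download.pytorch.org/whl/cu118
REM Upgrade xformers to a build matching the new torch
pip install --upgrade xformers
```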
Hello, I followed your video tutorial to use ControlNet for converting images to GIF. The generated GIF animation is not very pronounced; only the hair has a slight movement, and the overall picture is almost indistinguishable from the original. What could be the reason for this?
I followed every step, but when I try to use Tile/Blur for anime it takes waaaay too long to generate, like 1-2 hours for a 2-second GIF. When I use other options it only takes a couple of minutes. Why does this happen?
Thanks for your video! It's amazing how fast the 2-second video is generated. It takes me 7 minutes on a Mac Mini Pro with 16 GB of memory. Can you share your device's memory specs? If the length goes beyond 3 seconds, an MPS error occurs, so I am very frustrated. Thanks in advance!
When I use ControlNet with AnimateDiff, ControlNet is applied to every single frame, not just the first one, which removes any animation whatsoever and just makes a static video.
Great tutorial. Unfortunately, I'm getting:
einops.EinopsError: Error while processing rearrange-reduction pattern "(b f) d c -> (b d) f c". Input tensor shape: torch.Size([2, 4096, 320]). Additional info: {'f': 16}. Shape mismatch, can't divide axis of length 2 in chunks of 16
Any ideas on how to fix?
Hi @@ThoughtsFew, I fixed this error by updating Stable Diffusion. To update, add this line to your webui-user.bat between the set COMMANDLINE_ARGS line and the call webui.bat line: git pull. Unfortunately, now I'm getting the error: torch.cuda.OutOfMemoryError
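For anyone unsure where that line goes, a minimal sketch of the resulting webui-user.bat (the empty set lines are the stock template; keep whatever values yours already has):

```
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=

REM Update the WebUI code before every launch
git pull

call webui.bat
```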
Thank you! The update fixed this error for me. Now I'm stuck with black squares due to the NansException error. I have tried the --no-half-vae command line argument solutions, but no combination has worked. I'm assuming maybe I need to update torch to 2.1.2 :p @@JohnLafergola
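For reference, one commonly suggested combination for NansException looks like this in webui-user.bat (both flags are real A1111 options, but whether they fix any given setup is not guaranteed):

```
REM --no-half-vae keeps the VAE in full precision;
REM --disable-nan-check skips the check that raises NansException
set COMMANDLINE_ARGS=--no-half-vae --disable-nan-check
```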
Just updated CN today and AnimateDiff stopped working. This is an issue again. Any idea how to get the old version of CN back? The forked ones are not working for me. Thanks!
When I add an image to ControlNet, my video no longer animates... it just animates a still of the image I put in ControlNet. I've tried like 50 times and checked over everything... any clues?
Followed directions. I get this:
RuntimeError: CUDA error: invalid configuration argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
(Disabling xformers fixes it, but then it's a 45-minute generation time for your first example.)
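If anyone wants the clearer stack trace that error message mentions, a minimal sketch, assuming you launch through webui-user.bat (any other way of setting the environment variable works too):

```
REM Add before "call webui.bat": makes CUDA calls synchronous so the
REM reported stack trace points at the kernel that actually failed
set CUDA_LAUNCH_BLOCKING=1
```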
@@TheAITyrant Er... thanks for the help. After trying that, it gives me: RuntimeError: CUDA error: invalid configuration argument. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. (I'm about done trying, though xD... normal image generation works fine for me. I mostly do SDXL at 90 seconds an image; SD 1.5 images are like 10 seconds.)
Hi! Nice video! I also had this error, and your suggestion sounds like it's in the right direction, but that's just a safety and error-reporting measure; it doesn't do anything different. I had some good results when removing xformers. I'm on a 1080 Ti with 11 GB VRAM, updated SD, torch, and xformers, and disabled all extensions. @@TheAITyrant
Because this tutorial was bad and it will break your build, you may now have to delete your venv folder and let it reinstall, because otherwise you will continue to get errors.
This tutorial was a quick fix for an update that made AnimateDiff & Controlnet unusable. They have since been updated & now work together. There is no need for the forked extensions anymore.
Thanks for the tutorial. The image-to-image one doesn't work for me. It seems to merge both pictures, and then I only get a flickering animation.
The integration of ControlNet and AnimateDiff is awesome, but it has such a high failure rate. It crashes every 10 generations, or I just get colorful static. Being limited back down to 32 frames isn't great, and per the notes that isn't going to be updated anytime soon. I think I might just have to learn ComfyUI to get around this.
Can you elaborate on that? Or rather, simplify it so an idiot can understand? Do I no longer need the two extensions you recommended in this video? What do I install instead? I'm not even getting the two AnimateDiff-for-ControlNet and ControlNet-for-AnimateDiff options in my UI; I only have the standard ControlNet options, which is weird, because I disabled that in the extensions like you said to. I don't know what's going on. @@TheAITyrant
@@wholeness Never mind, I realized my git pull was not updating properly and didn't know what branch I was on, so I had to run "git checkout master" first, then git pull. Then it updated to SD version 1.6 correctly, and the extensions now work fine.
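For anyone stuck in the same unknown-branch state, the fix above in command form (run inside your stable-diffusion-webui folder; master is that repo's default branch):

```
REM From inside the stable-diffusion-webui directory
git checkout master
git pull
```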
Hey, I followed everything, and when I generate it gives me a grid image plus all the individual frames (they're not coherent for some reason), and it never generates a GIF... I can't get it to work. I've searched everywhere. Does anyone have an idea on how to fix that?
I had this issue as well: I kept getting a grid output in txt2vid/txt2gif while using photon_v1 as the checkpoint. No issues with img2vid/img2gif, though... it pops out a GIF no problemo. So the easy workaround for me was to generate the initial pic in txt2img, then send it to img2img to do img2vid.
One of these extensions refuses to show up in my UI. I have "ControlNet v1.1.410 for AnimateDiff" showing, but the "AnimateDiff 1.6 for ControlNet" menu is missing.
Never mind, I realized my git pull was not updating properly and didn't know what branch I was on, so I had to run "git checkout master" first, then git pull. Then it updated to SD version 1.6 correctly, and the extensions now work fine.
Not necessarily. With A1111, I've heard of people getting AnimateDiff to work with 8 GB. Worst-case scenario, you can use ComfyUI; I've heard you need a minimum of 5 GB there. Try passing the argument "--medvram" in your webui batch file if you are running into CUDA errors.
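A minimal sketch of where that argument goes, assuming you launch via webui-user.bat (--xformers is just an example of a flag you might already be using; keep whatever args you already pass):

```
REM In webui-user.bat: append --medvram to your existing arguments
set COMMANDLINE_ARGS=--xformers --medvram
```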
If I could give a recommendation: ComfyUI might be more stable in terms of breaking changes. On top of that, it is plug-and-play with all kinds of different functionality (animations/pics/upscale/ControlNet/etc.), and you can copy in and use complex graphs without knowing how to set them up. It is not as hard as some of the big graphs make it look; basic graphs only need 4 nodes or so.
Just outputting a PNG, no MP4 or GIF. Errors on the command line:
AttributeError: 'AnimateDiffProcess' object has no attribute 'text_cond'
(that line repeats while building)
Ends with:
AttributeError: 'NoneType' object has no attribute 'save_infotext_txt'
Thank you! For the transition, you put the last frame of the animation you made into the first ControlNet unit. Then, if you want to transition that animation into the next one, put the first frame of the animation you want to transition to into the second ControlNet unit. So when you generate, you are creating an animation that ends at the first GIF and leads into the second GIF! Hopefully that makes sense. Join the Tyrant Empire if you want to DM there as well.
I still don't know why this method works. I tried using two Tile ControlNet units, but the GIF still merges the two Tile images together; I haven't seen the transition result. @@TheAITyrant
So strange... it flat out does not function when I do these exact steps. I always get an error about missing keys, and then it just generates still frames and no GIF. Also, the updated extension doesn't even show up if you remove the old extension instead of just disabling it. I appreciate your tutorial, but it seems like AnimateDiff is very temperamental and refuses to work for several of us.
The original extension works perfectly with ControlNet now, so there is no need to use the forked extensions. Have you tried downloading it from here? github.com/continue-revolution/sd-webui-animatediff
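If installing through the Extensions tab fails, a minimal sketch of doing it manually (the folder name assumes a default A1111 layout):

```
REM Run from the folder containing stable-diffusion-webui
cd stable-diffusion-webui\extensions
git clone https://github.com/continue-revolution/sd-webui-animatediff
```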
True. It's kind of funny watching him copy the DavideAlidosi branch while giving a link to the thiswinex one... no wonder there are so many "doesn't work" comments, lmao. P.S. All of you should just switch to the Forge version...
@@AndyHTu You mean an RTX 3090? Yeah, I don't have that. Also, if I were to get one, is it possible to replace the GTX 1660 SUPER in my gaming desktop with an RTX 3090 without losing anything on the computer? I'm not paying thousands of bucks for a whole new computer just for the one part I need, unless it's a gaming laptop.
As a man with "little" hearing problems, I find the music you use really disturbing, so I ended up just muting you and reading the subs. So please, please, for the love of the worms, stop playing music :)
Nice tools and a nice way to produce videos and images. I am looking for a way to produce sprite sheets with Automatic1111 (and ControlNet if needed) so I can have a consistent character moving or doing other actions to use in other software. That would be good.
Hello, please, has it ever happened that the AnimateDiff for ControlNet tab won't show up for you? I followed every step and installed everything, but for some reason the tab stays invisible. I changed the settings to hide and show only the stuff you suggested, but the tab is still not visible. Please, any suggestion on what I could do?
You will need the --medvram argument, and even then it will take a while to generate at 512x512; don't even think about going higher. The ComfyUI version should work quite a bit better, though, as it's a lot easier on your GPU. I've used both versions, and the ComfyUI version is the more efficient and more stable one. I had a 2070 8 GB but recently upgraded to a 3080 10 GB, and I can barely run this in A1111. Hope this helps :)