
Video Generation w/AnimateDiff LCM, SD15 and Modelscope + any upscale! 

Stephan Tual
Subscribe · 4.7K subscribers
6K views

Let's explore how to make creative, photorealistic AI videos (including humans) using AnimateDiff LCM and SD15 in conjunction with the brand-new Modelscope nodes. Also includes an update on the video generator I'm building (SVD+AD+Modelscope) AND an old-video restoration tool!
Workflow: flowt.ai/community/universal-...
▬ TIMESTAMPS ▬▬▬▬▬▬▬▬▬▬▬▬
00:00 Introduction (straight from the workflow output)
00:30 New Modelscope nodes with SD1.5 input!
01:57 Installation and download (nodes and models)
03:40 Important LORA information (see links!)
04:00 Let's get noodling! (detailed T2V tutorial)
06:55 Video comparison of samplers and schedulers
07:20 Important prompt information specific to Modelscope
09:30 First outputs and best practices on generations
10:25 What's different in this edition?
11:02 TAS vs TCS
11:27 Adding AD LCM to Modelscope+SD15
13:57 AnimateDiff details: gen2 nodes and multival/dinkinit advice
16:10 LCM kSampler settings
17:10 First results of SD15+modelscope
17:55 Application to an anime ADLCM workflow with sampler cycling
19:30 Patriotic break
19:48 Workflow breakdown and using SUPIR in place of V2V
22:06 All possible stage2 up/downscales compared
23:00 Workflows aren't apps (seriously)
23:45 How to use the full workflow (hands-on)
26:03 Final Touches and best practices
26:56 SUPIR results and things to look out for
27:19 SDXL Lightning upscale results with CNs
29:52 Results of the SDXL lightning upscaler
30:31 Restoring a 20-year-old video!
32:01 Results with side by side comparison
32:52 Outro with more results on AI-generated videos
▬ LINKS REFERENCED ▬▬▬▬▬▬▬▬▬▬▬▬
Modelscope nodes: github.com/ExponentialML/Comf...
The LORA for modelscope, pre-prepped: mega.nz/file/JswlUAiT#qICwDLx...
AnimateDiff Evolved context window docs: github.com/Kosinkadink/ComfyU...
▬ SOCIALS/CONTACT/HIRE ▬▬▬▬▬▬▬▬▬▬▬▬
Discord: / discord
All socials: linktr.ee/stephantual
Hire Actual Aliens: www.ursium.ai/

Science

Published: 26 Jun 2024

Comments: 54
@altriox · 3 months ago
Really like your video format. Starting out by building a bare bones demonstration followed by a more complicated version really helps me connect the dots. Thanks for the video.
@stephantual · 3 months ago
Thank you. Means the world to me 👽
@svenvarg6913 · 3 months ago
Oeuff!!! This is moving so fast. My head is swivelling
@WhySoBroke · 3 months ago
Today is a wonderful day... new videos from my fav YT channels!! Amazing tutorial!!! ❤️🇲🇽❤️
@stephantual · 3 months ago
👽👽👽👽
@flisbonwlove · 3 months ago
Great work Stephan! Great explanations with a great sense of humor! You rock dude! 👽🖖
@ArianaBermudez · 4 days ago
Ahahahah, the owl tutorial joke killed me
@MattMosquito · 3 months ago
Stephan, incredible cutting edge workflow, and I found your delivery to be super engaging personally. Keep up the great work!
@stephantual · 3 months ago
Hey thank you so much! If I can improve anything let me know! 👍👽
@stevietee3878 · 3 months ago
Absolutely amazing work ! I thought I had learned quite a lot over the past year until I watched your video, I have so much more to catch up on.
@stephantual · 3 months ago
Glad to help! 👽👽👽
@johnlenoob6951 · 3 months ago
Great as always ;) Keep up your extraterrestrial rigour!!!
@stephantual · 3 months ago
Totally! ET for life! 👽
@electronicmusicartcollective · 3 months ago
YES, merci for this very powerful workflow!
@stephantual · 3 months ago
You are welcome! 👍👽
@AndyHTu · 3 months ago
Wow, this is incredible
@motgarbob7551 · 1 month ago
This is amazing, thank you
@Chad-xd3vr · 3 months ago
Very impressive intro, well done
@stephantual · 3 months ago
Thank you! Already working on the next one; it's pretty intense GPU-wise, so I'll have a few episodes on more traditional server-side stuff with clusters and all :) 👽
@Chad-xd3vr · 3 months ago
@stephantual It's still a numbers game. Is there any way to direct it more like AnimateDiff?
@697_ · 1 month ago
Amazing video, I learned a lot. I have a problem though: I followed everything, but I am missing some files and get an error when loading the graph: "The following node types were not found: IPAdapterApply, IPAdapterApplyEncoded, ComfyPets. Nodes that have failed to load will show as red on the graph." I don't know which nodes to replace them with?
@neokortexproductions3311 · 3 months ago
Thanks Stephan! How do you know all of this information? Is this your line of work, or are you just someone interested in AI?
@stephantual · 3 months ago
Good question - basically I was semi-retired for 5 years taking care of my mother, who suffered from frontotemporal dementia. Spending many years near or in a care home, I noticed how many things could have been improved for patients using AI - including, for example, generating new forms of cognitive tests (like the MMSE). This explains why there's a channel on YT with my face on it trying to raise awareness around Alzheimer's etc. I have a coding background, so I used Python+GFPGAN, and when Comfy came out 'properly' around Jan last year I started toying around with it; it gives so much flexibility. From there, really, it's been a loooong trial-and-error type thing :) But I find that to be a good way to learn! Cheers! 👽👍
@neokortexproductions3311 · 3 months ago
@stephantual Very impressive! And that is very commendable of you to take care of your mother in need. You're right about how AI can change the world, and how it can help those who will eventually need support, or improve the current modalities of the health industry. We appreciate all your help in the community!
@elislifestyle4605 · 3 months ago
How do you feel about LTX Studio? I liked your thoughts on Sora.
@stephantual · 3 months ago
Never used it! I imagine we'll see a lot of competition in the space as more and more workflow-to-SaaS services pop up - very exciting! 👽
@Martin-bx1et · 3 months ago
I am not able to find three of the nodes: GetNode, SetNode and SUPIR_Upscale. They show up as missing when loading the workflow, but aren't listed as missing in the Manager. Any thoughts?
@stephantual · 3 months ago
Get/Set are (AFAIK) standard Comfy issue, but SUPIR is installed from GitHub (I don't use the Manager, because it makes you lose control over individual node branches). I have a video on how to install it at ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-Q9y-7Nwj2ic.html - also don't forget to install requirements.txt via pip. The good news is that once you've done one, they all get installed the same way. 👽👽
@Martin-bx1et · 3 months ago
@stephantual Thanks Stephan, that video helped. I found SetNode and GetNode in 'KJNodes for ComfyUI' (also from kijai), so maybe they would have been installed if I had followed the steps in that video in the first place. Love your videos - they stretch me, but in a good way!
@juliandekeijzer · 3 months ago
Was hoping to get this to work, but my Video Combine does not load the video formats that are actually in the video_formats folder. Instead it gives me image/gif and image/webp. I foresee more trouble ahead since I am on a Mac M1. Any ideas what I could do to get the right video formats loaded?
@stephantual · 3 months ago
That's weird - VHS Video Combine should offer all the formats it has available in the dropdown, regardless of your platform. That said, I don't have a Mac to test it. If it's reproducible, maybe post it on github.com/Kosinkadink/ComfyUI-VideoHelperSuite/issues ? Cheers!
@netstereo · 3 months ago
Hi @Stephan. My PC runs on 8 GB VRAM and 16 GB RAM. Can I run through this, especially the upscale? If not, can you give some tips on how to make text2vid in ComfyUI with my limited specs? I have never run a workflow with AnimateDiff.
@stephantual · 3 months ago
T2V will run with very little VRAM - I think 6 GB would be just fine on either these nodes or the OG ones. The V2V in Modelscope should also be fine. What takes the most VRAM: a) SUPIR - so use a model upscale instead; b) UltimateSD Upscale if you pass it 2K frames (so limit it and tile as much as possible); c) surprisingly, FILM VFI (replace it with RIFE 49). It's like everything else with Comfy: the more pixels or the larger the latents, the more VRAM it needs :) 👽
@netstereo · 3 months ago
@stephantual Thank you, sir.
@aivrar · 2 months ago
@stephantual Hey man, great tut, thank you! Can we run it with 12 GB VRAM and the low-VRAM command-line arg? Thank you again, I enjoyed this.
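As a back-of-the-envelope illustration of the advice above (more pixels and larger latents mean more VRAM), here is a toy Python sketch. The 4-channel, 1/8-spatial-scale, fp16 latent layout matches SD1.5-style models, but the figures cover the latents only and ignore model weights and activations, so treat them as rough proportions rather than real usage numbers.

```python
# Rough latent-memory estimate for a video batch.
# Assumes SD1.5-style latents: 4 channels at 1/8 spatial scale, fp16 (2 bytes).
# Illustrative only -- real VRAM usage adds model weights, activations, etc.

def latent_megabytes(frames, width, height, channels=4, bytes_per_value=2):
    """Memory for one batch of video latents, in MB."""
    values = frames * channels * (width // 8) * (height // 8)
    return values * bytes_per_value / (1024 * 1024)

# Doubling resolution quadruples latent memory:
small = latent_megabytes(24, 512, 512)    # 24 frames at 512x512
large = latent_megabytes(24, 1024, 1024)  # same clip upscaled 2x
print(round(small, 2), round(large, 2), round(large / small, 1))
# prints: 0.75 3.0 4.0
```

The quadratic growth is why the upscale stages (not the T2V pass) dominate VRAM, and why tiling helps.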
@697_ · 1 month ago
18:13 Where can we get this other workflow?
@benjaminaustnesnarum3900 · 1 month ago
ComfyUI-0246 breaks my Comfy for some reason. With it installed, no nodes will load at all; it's just a blank canvas.
@bigmichiel · 3 months ago
Interesting video. I'm following along, but haven't got the same results. With all settings and models equal, it should be exactly the same, right? I've double-checked all settings, including the prompt. After the first 12 rendered frames, it switches scene/camera. It seems to happen with all seeds, so I'm guessing I'm overlooking a setting or something. I'm using a batch size of 24 in my Empty Latent Image and a frame rate of 24 in Video Combine. Edit: Did some more testing. When rendering 48 frames at once, it switches after 24 frames. When rendering 16, it doesn't switch at all. Narrowing it down some more, it seems that if I try a batch_size of 18 or above, it will split the clip halfway through the video.
@stephantual · 3 months ago
With absolutely everything identical, it would still be *slightly* different - as per the comment in the video, ComfyUI is non-deterministic even with --deterministic. There's a LOT of heated debate about this; see my video about it at ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-4buE_NM1MAs.html ... I'm staying neutral in that debate 😅
@bigmichiel · 3 months ago
@stephantual I've recently watched a video about samplers and (non-)determinism, and as I understood it, Euler should be a deterministic one. I've (partially) watched the video you linked, and I can see your setup method and results. That's a good experiment to add to my backlog to test for myself. Thanks for the tip.
@rluzentales · 3 months ago
@bigmichiel I get the same issue as you, where any batch_size over 16 frames will switch scenes. Have you had any luck with a solution?
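For context on the scene switches discussed in this thread: AnimateDiff's motion module only attends within a fixed-length window (16 frames for most SD1.5 motion models), which is why clips longer than that can visibly "cut". AnimateDiff-Evolved's context options (linked in the description) sample overlapping windows so the seams can be blended. A simplified, hypothetical scheduler sketch - the window length and overlap values here are illustrative, not the node's exact defaults:

```python
# Simplified overlapping-context scheduler, in the spirit of
# AnimateDiff-Evolved's context options. Values are illustrative.

def context_windows(num_frames, context_length=16, overlap=4):
    """Yield overlapping frame-index windows covering the whole clip."""
    stride = context_length - overlap
    windows = []
    start = 0
    while start < num_frames:
        end = min(start + context_length, num_frames)
        windows.append(list(range(start, end)))
        if end == num_frames:
            break
        start += stride
    return windows

# A 24-frame clip is covered by two windows sharing frames 12-15,
# so the motion module never sees a hard boundary:
for w in context_windows(24):
    print(w[0], "-", w[-1])
# prints: 0 - 15
#         12 - 23
```

Without this overlap (a plain batch of, say, 24 frames), everything past frame 16 falls outside the first attention window, matching the mid-clip scene switch described above.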
@kleber1983 · 3 months ago
Your workflow gives me this error: "Error occurred when executing KSampler (Efficient): mat1 and mat2 shapes cannot be multiplied (2464x1024 and 768x320)". Any idea how to fix this? Thanks.
@stephantual · 3 months ago
I'm guessing you have an SD15 or SDXL checkpoint loaded trying to leverage an incompatible set of CNs. The way I set up the flow in the download works fine, but if you switch model versions, make sure to adapt your CNs accordingly. The three I listed for SDXL Lightning work fine; the download links are on the comfyworkflow pages. Cheers! 👽
@kleber1983 · 3 months ago
@stephantual Yes, I'm aware of this issue, and I went over the whole workflow trying to find out whether I'm using an XL model by mistake, but to no avail. I figured out though that if I disconnect the model loader from the MODELSCOPE T2V LOADER, everything works fine (with crappy quality, but it works). The problem is that I'm pretty sure I'm using SD1.5 models - I even tried one that I created myself way before the XL models even existed! I can't figure out what the problem could be; any idea would be much appreciated. Thanks. P.S. Not using any ControlNet.
@stephantual · 3 months ago
@kleber1983 OK, fair enough - join the Discord, post your edited copy on the megathread for support, and I'll have a look for you :) tinyurl.com/URSIUM. Cheers! 👽
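For anyone hitting the same error: the "(2464x1024 and 768x320)" message is a dimension mismatch inside cross-attention - the text conditioning is 1024-wide while an SD1.5 UNet projection expects 768-wide input. A minimal pure-Python illustration (no torch; `check_conditioning` is a hypothetical helper for clarity, not a ComfyUI function):

```python
# Cross-attention effectively multiplies conditioning of shape
# (tokens, embed_dim) by a projection of shape (expected_dim, inner_dim);
# the matmul only works when embed_dim == expected_dim. The 768 vs 1024
# figures match SD1.5 vs SD2.x/Modelscope-style text encoders.

def check_conditioning(cond_shape, proj_shape):
    """Mimic the shape check torch performs before a matmul."""
    tokens, embed_dim = cond_shape
    expected_dim, inner_dim = proj_shape
    if embed_dim != expected_dim:
        raise ValueError(
            f"mat1 and mat2 shapes cannot be multiplied "
            f"({tokens}x{embed_dim} and {expected_dim}x{inner_dim})"
        )
    return (tokens, inner_dim)

print(check_conditioning((2464, 768), (768, 320)))   # SD1.5 cond, SD1.5 layer
try:
    check_conditioning((2464, 1024), (768, 320))     # 1024-wide cond, SD1.5 layer
except ValueError as e:
    print(e)
# prints: (2464, 320)
#         mat1 and mat2 shapes cannot be multiplied (2464x1024 and 768x320)
```

So the practical fix is always the same: make sure every model, CLIP, and ControlNet feeding the failing sampler belongs to the same model family.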
@Fomincev · 2 months ago
Not enough VRAM on my 4080 12 GB. Any advice please?
@stephantual · 2 months ago
It's likely SUPIR. Set the UNet to 8-bit precision, use a tiled sampler (they now have four), or just Lanczos upscale. AD-LCM is doing all the work re: temporal consistency anyway.
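A quick sketch of why the tiled-sampler suggestion above helps with VRAM: the upscaler samples fixed-size overlapping tiles instead of the whole frame, so peak memory is bounded by the tile size rather than the output resolution. The calculator below is illustrative only - the tile and overlap defaults are assumptions, not SUPIR's actual settings:

```python
# Tiled sampling trades VRAM for time: tiles overlap so seams can be
# blended. This computes the resulting tile grid for a given output size.
import math

def tile_grid(width, height, tile=512, overlap=64):
    """Number of tiles along each axis, stepping by (tile - overlap)."""
    stride = tile - overlap
    cols = max(1, math.ceil((width - overlap) / stride))
    rows = max(1, math.ceil((height - overlap) / stride))
    return cols, rows

# A 2048x2048 upscale sampled as 512px tiles with 64px overlap:
print(tile_grid(2048, 2048))
# prints: (5, 5)
```

Twenty-five small samples take longer than one big one, but each only needs the VRAM of a 512px generation, which is why tiling (or a plain Lanczos upscale) rescues 12 GB cards.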
@user-dj3rd4my5k · 3 months ago
Stephan...Are you some type of Immortal from the 7th heaven?
@stephantual · 3 months ago
Well, I did get genetically mutated on the 👽 mothership, so there's that. On the negative side, I'm not a huge fan of the triple tentacle they replaced my left arm with. I feel pretty self-conscious about it 🐙🐙🐙
@user-jx6wi6sw1f · 3 months ago
Your workflow is really hard to figure out how to use, even after watching your videos. Can you explain what each part of the workflow is responsible for, and how the various switches in the switcher should be combined to prevent errors?
@a.akacic · 3 months ago
bbl.. _boots up ponyxl_
@stephantual · 3 months ago
Oh! 😂😂 Yeah, I had to put an NSFW tag in the negative prompt, because it will inherit the properties of whatever model you push in. 👽
@aidigitalmediaagency · 2 months ago
You are the fkn ComfyUI God. Wow, speechless. 🥂