
ComfyUI AnimateDiff Prompt Travel: ControlNets and Video to Video!!! 

c0nsumption
Subscribe 4.2K
26K views

This is a fast introduction to @Inner-Reflections-AI's workflow for AnimateDiff-powered video-to-video with the use of ControlNet.
You can download the ControlNet models here:
huggingface.co...
The workflow file can be downloaded from here:
drive.google.c...
The model (checkpoint) used for this tutorial series is here:
civitai.com/mo...
The VAE used can be downloaded from:
huggingface.co...
The motion_modules and motion_loras can be found on the original AnimateDiff repo where you will be offered different sources to download them from:
github.com/guo...
Or here's a quick link to civitai:
civitai.com/mo...
civitai.com/mo...
Socials:
x.com/c0nsumpt...
/ consumeem

Published: 23 Sep 2024

Comments: 184
@yoyo2k149
@yoyo2k149 11 months ago
Tested on an AMD RX 6800 XT (Ubuntu 22.04 + ROCm 5.7). It works flawlessly and stays close to 12 GB of VRAM. Really helpful, thanks a lot.
@c0nsumption
@c0nsumption 11 months ago
Awesome. Will pin this for others. Mind giving a short guide on the r/animatediff subreddit? :)
@miaoa7414
@miaoa7414 11 months ago
@@c0nsumption When loading the graph, the following node types were not found: BatchPromptSchedule. Nodes that have failed to load will show as red on the graph. 😭
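The BatchPromptSchedule node is provided by the FizzNodes custom node pack, so a red node here usually means that pack isn't installed. A minimal sketch of a manual install, assuming the commonly published repo location (ComfyUI-Manager can do the same from the UI):

```sh
# Clone the FizzNodes pack into custom_nodes, then restart ComfyUI
cd ComfyUI/custom_nodes
git clone https://github.com/FizzleDorf/ComfyUI_FizzNodes
```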
@yoyo2k149
@yoyo2k149 11 months ago
@@c0nsumption I will try to post a small guide before the end of the weekend. :)
@Inner-Reflections-AI
@Inner-Reflections-AI 11 months ago
Nicely Done!
@c0nsumption
@c0nsumption 11 months ago
Everyone, this is the original creator of this workflow. Amazing artist/creative. Please follow them! 🙏🏽
@EM7-k8j
@EM7-k8j 11 months ago
Thank you so much for sharing all of this! It's a pity I came across it right before bed and have to wait until tomorrow to try it.
@wholeness
@wholeness 11 months ago
Bro we on this journey together. Keep goin!
@Andro-Meta
@Andro-Meta 11 months ago
Converting the pre-text input, and seeing how to do that, completely blew my mind and opened doors to understanding what I could do. Thank you.
@ronnykhalil
@ronnykhalil 11 months ago
yea baby (edit: this is straight up the most valuable 10 minutes I've watched on RU-vid in a while, exactly the signal I needed amidst all the noise regarding Comfy and AnimateDiff). You explained it really well and clearly. Thank ye kindly!
@Copperpot5
@Copperpot5 11 months ago
Nice job on these of late. In general I have a hard time watching video tutorials with people on screen talking, but you're hitting all the right notes on these so far. I haven't wanted to bother with Comfy, but I have definitely admired the generations some have been sharing. Thanks for making well-timed, friendly tutorials. Stick with it and you'll definitely build a good, active channel. Thanks!
@c0nsumption
@c0nsumption 11 months ago
Thanks for the positivity hey 👏🏽
@SkyOrtizCreative
@SkyOrtizCreative 11 months ago
Love your vids bro!!! I know it takes a lot of work to make these; really appreciate your efforts. 🙌
@c0nsumption
@c0nsumption 11 months ago
Thanks for understanding 🧍🏽‍♂️ Legit takes so much time 😣 lol
@victorhansson3410
@victorhansson3410 11 months ago
damn, glad i saw your channel recommended on reddit. fantastic video - calm, concise and well made!
@c0nsumption
@c0nsumption 11 months ago
Thanks dude 🙏🏽 Happy to help elevate and educate the community
@colaaluk
@colaaluk 6 months ago
great video
@JaredVBrown
@JaredVBrown 7 months ago
Very helpful and approachable tutorial. Thanks!
@UON
@UON 11 months ago
Exciting! I hope this helps me figure out how to do a much longer vid2vid without running out of vram
@c0nsumption
@c0nsumption 11 months ago
I mention a note on VRAM: you can lower the image size to a smaller resolution and then upscale later. How much VRAM do you have? Have you considered using RunPod? They have a preset ComfyUI template.
@calvinherbst304
@calvinherbst304 7 months ago
Thank you. Excellent tutorial :) Keep them coming, subbed!
@58gpr
@58gpr 11 months ago
I was waiting for this one! Thanks mate & keep 'em coming :)
@c0nsumption
@c0nsumption 11 months ago
No worries 😉 Figured it’d be a quick way to introduce ControlNets but still give a lot of y’all what you’re waiting for 🧍🏽‍♂️
@keagoaki
@keagoaki 9 months ago
Straight to the point and clear, nice to follow, and no music is perfect: I can choose my own background if needed. Thanks a lot, you just made me a fortune haha
@aminshallwani9369
@aminshallwani9369 11 months ago
Thanks for the video, very helpful. Well done😍
@leretah
@leretah 11 months ago
awesome, thank you. I really appreciate it
@c0nsumption
@c0nsumption 11 months ago
No worries. More on the way. Just super busy with work sorry 🙏🏽
@edkenndy
@edkenndy 11 months ago
Awesome! Thanks for sharing the resources.
@c0nsumption
@c0nsumption 11 months ago
Trying to get everyone up to speed on all the amazing workflows available 🙏🏽
@yuradanilov5244
@yuradanilov5244 10 months ago
thanks for the tutorial, man! 🙌
@LearningVikas
@LearningVikas 6 months ago
Thanks, it finally worked ❤❤
@TheJPinder
@TheJPinder 7 months ago
good stuff
@ekke7995
@ekke7995 10 months ago
this is it!!
@digidope
@digidope 11 months ago
Thanks! Straight to the point!
@c0nsumption
@c0nsumption 11 months ago
Yes indeed. Hard to keep it that way with such complex topics but I’m trying!
@Ekopop
@Ekopop 9 months ago
that my friend is a very nice video, thanks a lot, I'll follow your stuff
@haydnmann
@haydnmann 11 months ago
this is sick, nice work dude. sub'd
@BrandonFoy
@BrandonFoy 11 months ago
Whoa! This is awesome, thanks for sharing your workflow. I haven’t used ComfyUI - just been in A1111. Can you recommend tutorials for Comfy? Or any you’ve made that’ll be a solid start to start learning this method? Thank you!!
@c0nsumption
@c0nsumption 11 months ago
This one by me is a great way to get started; it's part of the playlist this current video is in: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-SGivydaBj2w.htmlsi=MDwuANfnq6W_Wzul Also, this actually isn't my workflow, it's the work of @Inner-Reflections-AI here on RU-vid! I did make some modifications, though, to make things a bit easier :)
@BrandonFoy
@BrandonFoy 11 months ago
@@c0nsumption oh man, thank you so much!!!!! 🙌🏾🙌🏾🙌🏾
@BrandonFoy
@BrandonFoy 11 months ago
@@c0nsumption yeah, this is exactly what I’m looking for!! Awesome thanks again!
@banzai316
@banzai316 11 months ago
Good work! Thanks! 👏
@aoi_andorid
@aoi_andorid 10 months ago
This video will help many creators. Please set up a place where we can pay for your coffee.
@c0nsumption
@c0nsumption 10 months ago
🥹 Will set up soon. I love y’all. Thanks for all the love 🙏🏽 I set up a patreon, will be sharing soon. Also considering setting up subscriptions on X
@danielvgl
@danielvgl 6 months ago
Great!!!
@ywueeee
@ywueeee 11 months ago
cool vid, you might be the best AnimateDiff channel now, what's coming next?
@c0nsumption
@c0nsumption 11 months ago
IPAdapter, ControlNet Keyframes, Frame Interpolation, Refiner and Upscaling, amongst others! Also Hotshot-XL tutorials. Thanks btw, I appreciate ya.
@ywueeee
@ywueeee 11 months ago
@@c0nsumption 3- or 5-image interpolation, as in with start and end frames, please
@mikberg1824
@mikberg1824 9 months ago
Really good tutorial, thank you!
@francaleu7777
@francaleu7777 11 months ago
Perfect tutorial! Thanks a lot!
@MrPlasmo
@MrPlasmo 11 months ago
Very helpful as always, thanks. Is there a way to make a "preview" video frame node so that you can view the progress of the render before it is completed? That way you could cancel the render if it looks terrible or isn't what you want, without wasting render time. This was one of the nice things about Deforum that saved me a lot of time.
@lovisodin8658
@lovisodin8658 11 months ago
Just use a fixed seed, and in the "Load Video (Upload)" node change "select_every_nth", to 20 for example if you want a 6-image preview
@victorvaltchev42
@victorvaltchev42 11 months ago
What was the size of the video in the end? Because you showed 1024x576 in the beginning. Is that the resolution at the end as well? Also, how do you load other formats of video? I only have webp and gif.
@c0nsumption
@c0nsumption 11 months ago
Yes, that's what dictates the output resolution. Have upscaling coming up soon but have two jobs so very limited time!
@victorvaltchev42
@victorvaltchev42 11 months ago
@@c0nsumption Great content man! Thanks for the answer! I was a long-time Automatic1111 user, but after the past weeks with the advances of AnimateDiff in ComfyUI I'm definitely switching!
@leandrogoethals6599
@leandrogoethals6599 5 months ago
Nice tutorial, have you found a way to upload a 3-minute video in one piece into the VHS Load Video node?
@Elliryk_
@Elliryk_ 11 months ago
Great Video my friend!! Elliryk 😉
@c0nsumption
@c0nsumption 11 months ago
Ahhhhhhhhhh shiiiii 🧍🏽‍♂️ Enjoy the video my guy. Excited to see what you cook up 🍳🥘⏲️
@l1far
@l1far 11 months ago
I use RunDiffusion and can't load your workflow JSON :( Can you upload the pic too? Maybe that can fix it.
@nelson5298
@nelson5298 9 months ago
Thanks for sharing, I really learned a lot. Quick question... how do I change the model's clothes and keep the new clothes performing consistently? I type in "sweater", but some frames will change the sweater into a tank top...
@killbadmashia9225
@killbadmashia9225 1 month ago
I get an error at the KSampler processing node! "AttributeError: 'NoneType' object has no attribute 'shape'"
@Jovis.x
@Jovis.x 20 days ago
KSampler error: mat1 and mat2 shapes cannot be multiplied (2464x2048 and 768x320)
@DefinitelyNotMike
@DefinitelyNotMike 10 months ago
This is so fucking cool and it worked with no issues! Thanks!
@voytakaleta
@voytakaleta 11 months ago
Awesome! I have one question: how can I install / connect ffmpeg to ComfyUI? I get this error: "[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled". Thank you very much!
@JMcGrath
@JMcGrath 11 months ago
I have the same issue
@voytakaleta
@voytakaleta 11 months ago
@@JMcGrath ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-qSlxv68Xpkw.html
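The warning above generally just means the ffmpeg binary isn't on the system PATH where the video nodes look for it. A minimal sketch of the usual fix, assuming a standard package manager (the linked video may show a different route):

```sh
# Put ffmpeg on PATH, then restart ComfyUI. Pick the line for your OS.
sudo apt install ffmpeg        # Debian/Ubuntu
brew install ffmpeg            # macOS (Homebrew)
winget install Gyan.FFmpeg     # Windows 10/11
```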
@Spajra-music
@Spajra-music 11 months ago
followed this all the way through and at the end my video output was just black. any suggestions?
@bowaic9467
@bowaic9467 2 months ago
I don't know how to fix this problem. 'ControlNet' object has no attribute 'latent_format'
@Spajra-music
@Spajra-music 11 months ago
crushing bro
@El__ANTI
@El__ANTI 10 months ago
Error occurred when executing CheckpointLoaderSimpleWithNoiseSelect ...
@Csarmedia
@Csarmedia 11 months ago
the ebsynth of comfyui
@c0nsumption
@c0nsumption 11 months ago
Honestly better than EBSynth, because it works on every frame. The only reason the look changes here is prompt travel; otherwise the first scene would have stayed 👍🏽
@norvsta
@norvsta 11 months ago
@@c0nsumption so cool. I faffed around for a coupla days trying to install Win 10 just to run ebsynth, now I don't have to bother. Thanks for the tut 🙌
@samshan9321
@samshan9321 10 months ago
really helpful tutorial, thx
@yuxiang3147
@yuxiang3147 11 months ago
Awesome video! Do you know how you can combine openpose and depth and lineart together to improve the results?
@c0nsumption
@c0nsumption 11 months ago
Yeah, I’ll make a follow up video for multiple ControlNets
@yuxiang3147
@yuxiang3147 11 months ago
@@c0nsumption Nice! Looking forward to it, you are doing awesome stuff man keep it up!
@c0nsumption
@c0nsumption 11 months ago
@@yuxiang3147 thanks for the positivity 🙏🏽
@alishkaBey
@alishkaBey 9 months ago
Great tutorial bro! Could you make a video about morphing videos with IPAdapters?
@kaleabspica8437
@kaleabspica8437 6 months ago
What do I have to do if I want to change the look of it? Since yours is closer to anime style, I want to make it realism or sci-fi, etc.
@OffTheHorizon
@OffTheHorizon 5 months ago
I'm using KSampler, but it takes 9 minutes for 1 of the 25 samples, which is obviously extremely slow. I'm working on a MacBook M1 Max; do you have any tips on making it quicker?
@RenoRivsan
@RenoRivsan 7 months ago
Can you show how to remove AnimateDiff from this workflow... I don't want my video to change style
@mulleralmeida4844
@mulleralmeida4844 5 months ago
Starting to learn ComfyUI, when I click on Queue Prompt, my computer takes a long time to process the KSampler node. I'm using a MacBook Pro 14 M2 PRO, is it normal for it to take so long?
@vtchiew5937
@vtchiew5937 11 months ago
Thanks! Got it working after a few tries, but I realize the prompts are not really working (at least I don't see them "travelling"); it seems the whole prompt set is taken into consideration at once instead. Do you have similar issues? I see that the default workflow has 4 prompts, and your generated video at least traveled from green lush to wintery storm, whereas mine always started with wintery storm and remained like that throughout the video.
@c0nsumption
@c0nsumption 11 months ago
Depends on various factors: keyframe distance, seed, CFG, sampler, inputs, etc. That's the artistic process my friend, fiddle with it all. This was a quick output to get everyone involved. I'm just really busy testing all the new tech, working, and trying to formulate constructive tutorials for everyone to tag along.
@vtchiew5937
@vtchiew5937 11 months ago
@@c0nsumption thanks for the reply bro, been fiddling with it since then, great tutorial~
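For readers wondering what "travelling" prompts look like in this workflow: BatchPromptSchedule takes a keyframed prompt list in its text widget and blends between entries over the frame range, with the pre_text/app_text inputs prepended and appended to every keyframe. A hypothetical schedule (frame indices and prompts are illustrative only, not the ones from the video):

```
"0"  : "lush green forest, summer foliage, sunbeams",
"48" : "autumn forest, falling leaves, golden hour",
"96" : "wintery storm, heavy snowfall, bare trees"
```

If the output sits on one prompt the whole time, one thing worth checking is that max_frames on the node matches the number of input frames.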
@victorvaltchev42
@victorvaltchev42 11 months ago
Top!
@bowaic9467
@bowaic9467 2 months ago
Do you know what's happening with this error?

Error occurred when executing CheckpointLoaderSimpleWithNoiseSelect: 'model.diffusion_model.input_blocks.0.0.weight'

File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\nodes_extras.py", line 52, in load_checkpoint
    out = load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
File "D:\AI\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 511, in load_checkpoint_guess_config
    model_config = model_detection.model_config_from_unet(sd, diffusion_model_prefix)
File "D:\AI\ComfyUI_windows_portable\ComfyUI\comfy\model_detection.py", line 239, in model_config_from_unet
    unet_config = detect_unet_config(state_dict, unet_key_prefix)
File "D:\AI\ComfyUI_windows_portable\ComfyUI\comfy\model_detection.py", line 120, in detect_unet_config
    model_channels = state_dict['{}input_blocks.0.0.weight'.format(key_prefix)].shape[0]
@philspitlerSF
@philspitlerSF 5 months ago
I don't see a link to download the workflow
@zweiche
@zweiche 11 months ago
I really appreciate this guide; it will help me a lot! However, I have one problem maybe you can help me with. I have done everything right: I see the frames from the video and I see the ControlNet output with lines. However, after KSampler my GIF and image outputs are all black screens. What do you think my problem could be?
@JimDiMeo
@JimDiMeo 11 months ago
Hey man - love the tutorials!! Where do you add different video creation formats - I only have gif and webp - Thx
@c0nsumption
@c0nsumption 11 months ago
🤔 should be more. Search for the VHS Video Combine node in your ComfyUI and try that.
@JimDiMeo
@JimDiMeo 11 months ago
@@c0nsumption Yes! Found that last night. Thx for the reply though.
@risewithgrace
@risewithgrace 11 months ago
Thanks! I downloaded this workflow but the output only has formats for image/gif, or image/webp, even though I am inputting video. There is no video/h264 setting in the dropdown. Any idea how I can add that?
@c0nsumption
@c0nsumption 11 months ago
Replace the output node with “VHS Video Combine” node. You can double click in the interface and search for it.
@benjaminbardouparis
@benjaminbardouparis 11 months ago
Wow. Huge thanks for this! Is it possible to use an SDXL model for generating a painting style? I'd like to use this one and I don't know if it's possible with your workflow. Btw, many thanks!!
@c0nsumption
@c0nsumption 11 months ago
You can use hotshotxl: civitai.com/articles/2601/guide-comfyui-sdxl-animation-guide-using-hotshot-xl-an-inner-reflections-guide
@benjaminbardouparis
@benjaminbardouparis 11 months ago
Thanks!
@eraniopetruska5701
@eraniopetruska5701 10 months ago
Hi! Did you manage to get it running? @@benjaminbardouparis
@Csarmedia
@Csarmedia 11 months ago
The workflow file is giving me an error: TypeError: Cannot read properties of undefined (reading '0')
@ucyuzaltms9324
@ucyuzaltms9324 11 months ago
i love the output
@itsjaysenofficial
@itsjaysenofficial 10 months ago
Will it work on a MacBook Pro M1 with 16 GB of RAM??
@DimiArt
@DimiArt 6 months ago
Weird, I'm getting preview images from the upscaler node and the lineart images from the ControlNet, but I'm not getting any actual output results.
@DimiArt
@DimiArt 6 months ago
ok i realized my checkpoint and my VAE were set to the ones in the downloaded workflow and i had to set them to the ones i actually had downloaded instead. My bad
@kaleabspica8437
@kaleabspica8437 6 months ago
Do you know how to change the look of it?
@DimiArt
@DimiArt 6 months ago
@@kaleabspica8437 change the look of what
@GamingDaveUK
@GamingDaveUK 11 months ago
Nice, may have to try this after work. Is it the same process if you want to use more up-to-date models? (Can't go back to 1.5 after using SDXL lol)
@c0nsumption
@c0nsumption 11 months ago
I've tested the HotshotXL workflow. Currently SD 1.5 is doing a lot better. But InnerReflections is creating some magnificent pieces using it and is supposedly about to share his workflow 🧍🏽‍♂️
@AI-nsanity
@AI-nsanity 10 months ago
I don't have the option for mp4 output, do you have any idea why?
@c0nsumption
@c0nsumption 10 months ago
Change output node to VHS Video Combine. I believe that solves it
@speaktruthtopower3222
@speaktruthtopower3222 11 months ago
Is there a way to point to different directories so we don't have to re-download models, LoRAs and other files?
@c0nsumption
@c0nsumption 11 months ago
I use ComfyUI as my base across repos, so not sure. But try here: github.com/comfyanonymous/ComfyUI/discussions/72
@speaktruthtopower3222
@speaktruthtopower3222 11 months ago
@@c0nsumption I figured it out: just change the root directory and point it to your SD install in the "extra_model_paths.yaml" file.
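For anyone else hunting for that file: ComfyUI ships an extra_model_paths.yaml.example in its root folder; renaming it to extra_model_paths.yaml and pointing base_path at an existing install lets both UIs share one model folder. A minimal sketch, with placeholder paths:

```yaml
# extra_model_paths.yaml -- reuse models from an existing A1111-style install
a111:
    base_path: D:/stable-diffusion-webui/    # placeholder: your install path
    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: models/Lora
    controlnet: models/ControlNet
```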
@antonradacic2374
@antonradacic2374 11 months ago
I've set everything up, but for some reason I get an error at the KSampler step: "Error occurred when executing KSampler: 'ModuleList' object has no attribute '1'"
@c0nsumption
@c0nsumption 11 months ago
DM me on Twitter the actual error message and a screenshot of the nodes. Too vague to answer. Either that or post on r/animatediff subreddit
@SuperDao
@SuperDao 10 months ago
Can you make a tutorial on how to upscale the render ?
@fillill-111
@fillill-111 11 months ago
Thank you for this tutorial! I'm using the Colab version and I get totally black result pictures and video; could you give me a hint how I can fix it? Thx. But most of the time I get this issue: "...SD model must be either SD1.5-based for AnimateDiff or SDXL-based for HotShotXL". Need help... =\
@c0nsumption
@c0nsumption 11 months ago
Are you using an SDXL model or SD1.5? Other models don’t work for animatediff/hotshot. Can you lmk what model you are using and I’ll do some research
@luclaura1308
@luclaura1308 11 months ago
How would you go about adding a LoRA (not a motion one) to this workflow? I tried adding a Load LoRA node after Load Checkpoint, but I'm getting black images.
@c0nsumption
@c0nsumption 11 months ago
This tutorial: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-ElmyIsvMblE.htmlsi=Kk_dWXxGELq-Kemy
@luclaura1308
@luclaura1308 11 months ago
@@c0nsumption Thanks!
@Beedji
@Beedji 11 months ago
Hey man, great tutorial! I have an error message that pops up, however; it says "Control type ControlNet may not support required features for sliding context window; use Control objects from Kosinkadink/Advanced-ControlNet nodes.", which is weird since I have Kosinkadink's nodes installed. Have you experienced this error as well?
@Beedji
@Beedji 11 months ago
Ok, I think I've found the problem. I wasn't using the same VAE as you (I was using an SD1.5 pruned one), and now that I installed the same one as you (Berrysmix) it seems to work. No idea what difference this makes, but we'll see! haha
@aaronv2photography
@aaronv2photography 11 months ago
You made a video (I think) about unlimited animatediff length animations. How would we incorporate that into this workflow so we can go past the 120 frame limit?
@c0nsumption
@c0nsumption 11 months ago
I would imagine you just make sure you add in more than 120 frames and increase the max frames on the 'BatchPromptSchedule' node past 120. If you don't include enough frames, I'm assuming the generation will just continue the prompts from the point of the missed frames, but who knows 🤷🏽‍♂️ Test it out, it will probably make some cool stuff
@koalanation
@koalanation 11 months ago
Great video! Just so you know: the models on huggingface are free to download, no need to open any account
@c0nsumption
@c0nsumption 11 months ago
Some require sign-in, especially upon initial release. It's all down to what the developers dictate when posting. Like when SDXL dropped, you had to have a Hugging Face account to download.
@nilshonegger
@nilshonegger 11 months ago
Thank you so much for sharing your workflow! Is there a way to bypass the VAE nodes in order to use it with Models that don't require a VAE (such as Dreamshaper, EpicRealism)?
@c0nsumption
@c0nsumption 11 months ago
Plug the VAE output from your checkpoint loader node into any slot that requires a VAE
@frustasistumbleguys4900
@frustasistumbleguys4900 10 months ago
Hey, why do I get noise with artifacts on my output? I followed you exactly
@c0nsumption
@c0nsumption 10 months ago
DM me over X or instagram. Send me an example image.
@terencechen5857
@terencechen5857 11 months ago
Have you tried this workflow + IPAdapter? It will increase memory significantly
@c0nsumption
@c0nsumption 11 months ago
Yeah, it'll pull around 17 GB of VRAM. I have a RunPod tutorial coming for those lacking hardware. Took a lot of debugging and studying, but I've ironed out the bugs and got it figured out. Then I can drop all the remaining workflows and tutorials 🙏🏽 This way, if anyone's lacking hardware I can redirect them to RunPod, where they pay as they go and for good cards, rather than Google Colab, which imo really isn't worth it.
@terencechen5857
@terencechen5857 11 months ago
It's more than 17 GB in my case, depending on how many frames are to be generated; however, looking forward to seeing your update, thanks @@c0nsumption
@terencechen5857
@terencechen5857 11 months ago
I did some updates (ComfyUI, custom nodes like IPAdapter, etc.) and the VRAM usage is down to 11 GB at a resolution of 576x1024 😂@@c0nsumption
@lanvinpierre
@lanvinpierre 11 months ago
can you do cli prompt in comfyui? great tutorial btw!
@c0nsumption
@c0nsumption 11 months ago
Sorry, confused about what you’re asking. Are you asking if you can do prompt travel?
@lanvinpierre
@lanvinpierre 11 months ago
The one where you used 3 different images to help with the animation, "frame 0 0001, frame 8 0002". I'm not sure what it's called, but can that be done through ComfyUI, or should it be done like in your other tutorial? @@c0nsumption
@looneyideas
@looneyideas 11 months ago
Can you use RunPod or does it have to be local?
@c0nsumption
@c0nsumption 11 months ago
Runpod has a ComfyUI template
@jiananlin
@jiananlin 11 months ago
How do you apply more than one ControlNet?
@hatakeventus
@hatakeventus 11 months ago
does this work with AMD RX 6700??
@VJSharpeyes
@VJSharpeyes 11 months ago
The 'realistic lineart' node is always missing when loading your workflow file. Any tips on what I could have missed? I am warned about "LineArtPreprocessor" missing, and then in the install manager I only see Fannovel16's, which is already installed.
@VJSharpeyes
@VJSharpeyes 11 months ago
Oh hang on. There is an abandoned repo that looks like it contains it.
@jorgecucalonf
@jorgecucalonf 11 months ago
Same issue here. Console gives me this: (IMPORT FAILED): C:\AI\ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux
@jorgecucalonf
@jorgecucalonf 11 months ago
Managed to get it working by reverting the comfyui_controlnet_aux folder to an older commit. Otherwise we must wait for the owner of the repository to update it with a fix.
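The rollback described here is plain git; a sketch, with the commit hash left as a placeholder to pick from the log:

```sh
cd ComfyUI/custom_nodes/comfyui_controlnet_aux
git log --oneline           # find a commit from before the breakage
git checkout <commit-hash>  # placeholder: paste a real hash from the log
```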
@jorgecucalonf
@jorgecucalonf 11 months ago
That was quick. It's fixed now :D
@c0nsumption
@c0nsumption 11 months ago
Good job getting it working. If I have some spare time today or this week I’ll try to research.
@ehsankholghi
@ehsankholghi 7 months ago
I upgraded to a 3090 Ti with 24 GB. How much CPU RAM do I need to do video-to-video SD? I have 32 GB
@c0nsumption
@c0nsumption 7 months ago
Should be fine with that. Don't upgrade your RAM till you hit your bottleneck. If you're doing really, really long sequences it'll bottleneck, but even then you can just split them up into smaller chunks
@ehsankholghi
@ehsankholghi 7 months ago
@@c0nsumption Thanks so much. Is it possible to make a video with like 1000 frames (1000 PNGs) with your workflow? I got this error after 1.5 hours of render time: numpy.core._exceptions._ArrayMemoryError: Unable to allocate 6.43 GiB for an array with shape (976, 1024, 576, 3) and data type float32
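The error message itself shows where the memory goes: the shape is (frames, height, width, channels) in float32, so whichever node builds that array needs memory linear in the frame count, which is why splitting long runs into smaller chunks helps. Checking the numbers from the message:

```python
# Size of a float32 array with shape (976, 1024, 576, 3): 4 bytes per element
frames, height, width, channels = 976, 1024, 576, 3
size_gib = frames * height * width * channels * 4 / 2**30
print(f"{size_gib:.2f} GiB")  # -> 6.43 GiB, matching the error message
```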
@leretah
@leretah 11 months ago
Yesterday everything was OK, and today I have this error:

Error occurred when executing KSampler: unsupported operand type(s) for //: 'int' and 'NoneType'

File "C:\Users\lenin\Downloads\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)

Please help me; learning this is really frustrating at times, but I love it!!!
@c0nsumption
@c0nsumption 11 months ago
Sounds like you have the wrong data going into your KSampler somewhere. Try reloading the workflow from scratch. Consider posting your issue in the r/animatediff subreddit
@Oscilatii
@Oscilatii 11 months ago
Hello! I used your tutorial and workflow, but dunno why, my video is crap :) The background is modified and is cool, but my face is still like the original video with some modified colors. If I want to make my face a robot, for example, it just won't work... With openpose instead of lineart I got great results, but it's missing the mouth movement when I speak. If I use the same prompt in img2img, the results are amazing
@c0nsumption
@c0nsumption 11 months ago
You can adjust the ControlNet weight, try different ControlNets, or try mixing them. I'll drop a multi-ControlNet video soon
@Oscilatii
@Oscilatii 11 months ago
@@c0nsumption Thanks for your answer. One of my problems was that I used a realistic model :) Now everything is OK. Thx again for this tutorial, it really helped me
@AIPixelFusion
@AIPixelFusion 11 months ago
How are you only using 11GB of VRAM? Mine goes above 24GB and has to use non-GPU RAM...
@c0nsumption
@c0nsumption 11 months ago
How much VRAM do you have? How many frames are you using? What is the size of your frames? What size are you upscaling them to? How long is your generation? What do you have running in the background on your computer? Going above 24 GB of VRAM has to be for a reason.
@AIPixelFusion
@AIPixelFusion 11 months ago
@@c0nsumption I have: 24 GB VRAM, a 30-frame video, frame size 720x1280 (should I be lowering it to 576x1024?), upscaler values 576x1024 (are these ignored if smaller than the video frame size?)
@c0nsumption
@c0nsumption 11 months ago
@@AIPixelFusion 🤔 What the hell. Can you send me a photo of your node network over X? I don't understand how you're using that much VRAM if your upscaler is at 576 by 1024. How long is your actual input video / amount of frames? Did you make sure to cap them like I did? (Where I limited the amount of frames it would process.)
@Syzygyyy
@Syzygyyy 11 months ago
same issue @@c0nsumption
@ywueeee
@ywueeee 11 months ago
bro it's been a week, where's some new vids, eagerly waiting
@c0nsumption
@c0nsumption 11 months ago
lol 😂 Been working on a RunPod setup video for people who don't have compute power. Was pretty difficult to figure it all out, but I got it. Posting in the next 30 minutes to an hour. Workflow vids now coming since I got that out of the way 🧍🏽‍♂️
@ywueeee
@ywueeee 11 months ago
@@c0nsumption Hope new workflows don't always involve RunPod from now on; would love to always get it working locally
@arkelss4
@arkelss4 11 months ago
Can this also work with Automatic1111?
@c0nsumption
@c0nsumption 11 months ago
No idea, but most likely not. Best to start learning the newer tools and growing out of Auto1111. The developer experience isn't the greatest on Auto, so most development on state-of-the-art tech is happening on ComfyUI and other repos.
@dnvman
@dnvman 11 months ago
hey, nice video 🫶 where do I get the ttN text node?
@c0nsumption
@c0nsumption 11 months ago
This video shows the process: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-ElmyIsvMblE.htmlsi=ej88H8_35b1N2cb9
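The ttN text node comes from the tinyterraNodes custom node pack (a commenter further down confirms that installing tinyterra made it appear). A sketch of a manual install, assuming the commonly published repo location:

```sh
# tinyterraNodes provides the ttN text node; restart ComfyUI afterwards
cd ComfyUI/custom_nodes
git clone https://github.com/TinyTerra/ComfyUI_tinyterraNodes
```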
@2amto3am
@2amto3am 11 months ago
Can we do image to image??
@c0nsumption
@c0nsumption 11 months ago
This is image to image; it's just converting the video for you. If you want, just use the node from the beginning of the video. Am I reading your question correctly? 🤔
@aoi_andorid
@aoi_andorid 10 months ago
Is anyone using AI to generate workflows for ComfyUI? Please let me know if you know of any useful links.
@c0nsumption
@c0nsumption 10 months ago
I don't understand the question. ComfyUI is literally AI-powered software
@aoi_andorid
@aoi_andorid 10 months ago
I thought that if GPT could recognize and learn from a large number of JSON files and images showing workflows, it would be possible to generate workflows in natural language! (I used DeepL for the translation, so I apologize if I was rude in my wording ;) @@c0nsumption
@skycladsquirrel
@skycladsquirrel 11 months ago
Awesome tutorial. I'm using the Controlnet set for the next one. Here's my latest video: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-xqRcZ2RGS6s.html
@nft_bilder_art2098
@nft_bilder_art2098 11 months ago
Please tell me why I get this error when I launch ComfyUI...

D:\comfuUI\ComfyUI>python main.py
** ComfyUI start up time: 2023-10-17 05:30:32.177484
Prestartup times for custom nodes:
0.0 seconds: D:\comfuUI\ComfyUI\custom_nodes\ComfyUI-Manager
Traceback (most recent call last):
File "D:\comfuUI\ComfyUI\main.py", line 69, in <module>
    import comfy.utils
File "D:\comfuUI\ComfyUI\comfy\utils.py", line 1, in <module>
    import torch
ModuleNotFoundError: No module named 'torch'
@nft_bilder_art2098
@nft_bilder_art2098 11 months ago
Before this there was no such error at startup
@nft_bilder_art2098
@nft_bilder_art2098 11 months ago
Maybe I'm launching it wrong somehow? Thank you in advance for your cooperation!
@nft_bilder_art2098
@nft_bilder_art2098 11 months ago
all okay, I watched your last video, I figured it out, thank you very much
@c0nsumption
@c0nsumption 11 months ago
Love that you internally said "I'm figuring this out, dammit!" lol. Good job 👍🏽
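For anyone hitting the same ModuleNotFoundError: launching main.py with a Python environment that lacks PyTorch produces exactly this; the portable build avoids it by launching through its bundled .bat scripts, which use the embedded Python. A minimal sketch of installing PyTorch into the active environment, assuming an NVIDIA GPU and the CUDA 11.8 wheels:

```sh
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```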
@pauliuscreative
@pauliuscreative 11 months ago
My original input video was 7 seconds and the output video I got is slower, at 12 seconds. Do you know why?
@c0nsumption
@c0nsumption 11 months ago
Check your output frame rate
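The usual cause of a 7 s input becoming a 12 s output is frame-rate bookkeeping: the combine node writes the processed frames at whatever frame_rate it is set to, regardless of the source fps. A hypothetical example with made-up numbers:

```python
# Hypothetical: 7 s input at 30 fps, keeping every 2nd frame,
# written out by the combine node at 8 fps.
in_seconds, in_fps, every_nth, out_fps = 7, 30, 2, 8
frames_kept = in_seconds * in_fps // every_nth  # 105 frames survive selection
out_seconds = frames_kept / out_fps             # 13.1 s, longer than the input
print(frames_kept, out_seconds)
```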
@saiya3725
@saiya3725 11 months ago
Hey, when I drag from the pre_text input I'm not getting the ttN text node option. What am I missing?
@saiya3725
@saiya3725 11 months ago
I installed tinyterra and got it
@c0nsumption
@c0nsumption 11 months ago
@@saiya3725 👍🏽 Good job figuring it out