How Do?
Tutorials on Generative AI, game dev, and more.
Stable Cascade in ComfyUI Made Simple
6:57
4 months ago
Local, Free-Range, AI Chat
9:36
6 months ago
Install ComfyUI from Scratch
14:58
6 months ago
Colorize and Restore Old Images
3:00
6 months ago
Reimagine Any Image in ComfyUI
10:25
7 months ago
How Do Stable Video Diffusion?
13:25
7 months ago
Comments
@soljr9175 12 days ago
Your workflow link doesn't work. It would have been nice if you included it on Hugging Face.
@kevint.8553 18 days ago
I successfully installed the Manager, but I don't see the Manager options in the UI.
@lukeovermind 20 days ago
Thanks! Having both a simple and an advanced face detailer is clever. Going to try it. Got a sub from me, keep going!
@bobwinberry 1 month ago
Thanks for your videos! They worked great, but now (due to updates?) this workflow no longer works; it seems to be lacking the BNK_Unsampler. Is there a workaround for this? I've tried, but aside from stumbling around, this is way over my head. Thanks for any help you might have, and thanks again for the videos. Well done!
@FiXANoNada 1 month ago
Finally, a guide that I can comprehend and follow, and even play around with. You are so kind to list all the resources in the description in a well-organized manner. Instant sub from me.
@meadow-maker 1 month ago
You don't explain how to set the node up?
@nawafalhinai1643 1 month ago
Where should I put all the files from the links?
@IMedzon 1 month ago
Useful video, thanks!
@jbnrusnya_should_be_punished 1 month ago
Interesting, but the 2nd method does not work for me. No matter the resolution, I always get this error: Error occurred when executing FaceDetailer: The size of tensor a (0) must match the size of tensor b (256) at non-singleton dimension 1. File "C:\Users\Alex\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute: output_data, output_ui = get_output_data(obj, input_data_all)
@CornPMV 1 month ago
One question: what can I do if I have several people in my picture, e.g. in the background? Can I somehow influence FaceDetailer to only refine the main person in the middle?
@maxehrlich 27 days ago
Probably crop that section, run the fix, and composite the image back in.
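The crop-and-composite idea above can be sketched in plain Python. This is a toy: a 2-D grid of numbers stands in for the image, and `crop`, `paste`, and the "+1" step are hypothetical stand-ins for real image nodes (in ComfyUI you would do this with image crop/composite nodes or PIL, with the detailer run on the cropped region):

```python
# Minimal sketch of "crop, fix, composite back" on a plain 2-D pixel grid.
# In practice this would be an image crop node -> FaceDetailer -> composite.

def crop(grid, x0, y0, x1, y1):
    """Return the sub-grid covering rows y0..y1 and columns x0..x1."""
    return [row[x0:x1] for row in grid[y0:y1]]

def paste(grid, patch, x0, y0):
    """Write `patch` back into `grid` with its top-left corner at (x0, y0)."""
    for dy, row in enumerate(patch):
        grid[y0 + dy][x0:x0 + len(row)] = row
    return grid

image = [[0] * 8 for _ in range(8)]              # 8x8 "image" of zeros
face = crop(image, 2, 2, 6, 6)                    # take the region with the face
face = [[px + 1 for px in row] for row in face]   # stand-in for the detailer pass
image = paste(image, face, 2, 2)                  # composite the fixed crop back in
```

The upside of this approach over running the detailer on the whole frame is that only the region you chose is touched; everything outside the box is untouched pixels from the original.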
@zhongxun2005 2 months ago
Thank you for sharing! Subscribed :) I have a question about the AIO Aux Preprocessor in the 2nd SDXL workflow. I don't see the LineartStandardPreprocessor option; the closest one is LineartPreprocessor, but it throws the error "Error occurred when executing AIO_Preprocessor: LineartDetector.from_pretrained() got an unexpected keyword argument 'cache_dir'"
@zhongxun2005 2 months ago
Never mind, I resolved it. I replaced both with "[Inference.Core] AIO Aux Preprocessor", which has the option. Hope this helps others.
@PavewayIII-gbu24 2 months ago
Great tutorial, thank you.
2 months ago
This is a wonderfully good job! I just found it and it works amazingly well! Do you have a workflow that does the same thing for img2img?
@goactivemedia 2 months ago
When I run this I get: The operator 'aten::upsample_bicubic2d.out' is not currently implemented for the MPS device?
@Mranshumansinghr 2 months ago
Much better explanation of Cascade in ComfyUI. Thank you. Will try this today. The b-to-c-and-then-a flow is a bit confusing and only works sometimes. This is much simpler and requires fewer files.
@aliyilmaz852 2 months ago
Amazing share! Thanks again. I am old and have lots of b/w photos; will give it a try. And if I can, I will try to swap the faces with current ones :) Maybe you can teach us how to swap faces; would definitely appreciate it!
@PIQK.A1 2 months ago
How to face-detail vid2vid?
@cheezeebred 2 months ago
I'm missing the BNK_Unsampler and can't find it via Google search. What am I doing wrong? Can't find it in the Manager either.
@lumina36 3 months ago
I'm amazed that no one has thought of combining Stable Forge with both Krita and Cascade; it would actually solve a lot of problems.
@SumNumber 3 months ago
This is cool, but it is just about impossible to see how you connected all these nodes together, so it did not help me at all. :O)
@HowDoTutorials 3 months ago
Yeah I’ve been working on making things a little easier to parse going forward. There’s a link to the workflow in the description if you want to load it up and poke around a bit.
@aliyilmaz852 3 months ago
Thanks for the great explanation; hope you do more videos like this.
@focus678 3 months ago
What GPU are you using?
@HowDoTutorials 3 months ago
I'm using a 3090 which is probably something I should mention going forward so people can set their expectations properly. 😅
@onurc.6944 3 months ago
When it comes to the SVD decoder, the connection is lost :(
@HowDoTutorials 3 months ago
Sorry to hear it's giving you trouble. Here are a couple things to try: 1. Make sure you're using the correct decoder model for your SVD model. (e.g. If using the "xt" model be sure you're using the "xt" decoder) 2. You may be running out of memory. Try lowering the `video_frames` parameter. You might also try using the non-xt model and decoder.
@onurc.6944 3 months ago
Thanks for your help :) I can work without image_decoder @HowDoTutorials
@RuinDweller 3 months ago
After I discovered ComfyUI, my life changed forever. It has been a dream of mine for 5 years now to be able to run models and manipulate their latent spaces locally. ...But then I discovered just how hard it is for a noob like me to get a lot of these workflows working at all, even after downloading and installing all of the models required, in the proper versions, with all of the nodes loaded and running together normally. This was one of about 3 that actually worked for me, and it is BY FAR my favorite one. It was downloaded as a "color restorer" and it works beautifully for that purpose, but I was so excited to see it featured in this video, because it already works for me! Now I can unlock its full potential, and it turns out all I needed were the proper prompts! THANK YOU so much for making these workflows and these video tutorials; I can't tell you how much you've helped me! If you ever decide to update any of this to utilize SDXL, I am so on that...
@HowDoTutorials 3 months ago
I loved reading this comment and I'm so happy I could help make this tech a bit more accessible. Here's a version of the "Reimagine" workflow updated for SDXL: comfyworkflows.com/workflows/4fc27d23-faf3-4997-a387-2dd81ed9bcd1 You'll also need these additional controlnets for SDXL: huggingface.co/stabilityai/control-lora/tree/main/control-LoRAs-rank128 Have fun and don't hesitate to reach out here if you run into any issues!
@RuinDweller 3 months ago
@@HowDoTutorials I thought I had already responded to this, but apparently I didn't! Anyway THANK YOU for posting the link to that workflow! It's running, but I can't get it to colorize any more, which was my main use for it. :( Oh well, it can still edit B/W images, and then I can colorize them in the other workflow, but I would love to be able to do both things in one. I can colorize things, but not people. I've tried every conceivable prompt. :(
@HowDoTutorials 3 months ago
@@RuinDweller I've been having trouble getting it to work as well. Seems there's something about SDXL that doesn't play with that use case quite as well. I'll keep at it and let you know if I figure something out.
@jroc6745 3 months ago
This looks great, thanks for sharing. How can this be altered for img2img?
@HowDoTutorials 3 months ago
Here's a modified workflow: comfyworkflows.com/workflows/cd47fbe6-68cc-4f40-8646-dfc62d32eeb4
@mikrodizels 3 months ago
That FaceDetailer looks amazing, I like creating images with multiple people in them, so faces are the bane of my existence
@amorgan5844 3 months ago
It's the most discouraging part of making AI art.
@greypsyche5255 3 months ago
Try hands.
@MultiSunix 3 months ago
This is great and helpful, thank you!
@teenudahiya01 3 months ago
Hi, can you help me solve this error: "module diffusers has no attribute StableCascadeUnet"? I installed Cascade in Stable Diffusion, but I got this error after installing all the models on Windows 11.
@HowDoTutorials 3 months ago
It sounds like your diffusers package may be out of date. If you haven’t already, try updating ComfyUI. If you have the Windows portable install you can go into ComfyUI_windows_portable/update folder and run `update_comfyui_and_python_dependencies.bat`.
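If a full ComfyUI update is not an option, upgrading just the diffusers package inside the portable install's embedded Python should also pick up `StableCascadeUnet`. A sketch, assuming the default Windows portable layout (adjust the paths to your install):

```shell
cd ComfyUI_windows_portable
rem Use the embedded interpreter so the upgrade lands in ComfyUI's own
rem environment rather than any system-wide Python.
python_embeded\python.exe -m pip install --upgrade diffusers
```

Restart ComfyUI afterward so the updated package is actually loaded.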
@97BuckeyeGuy 3 months ago
You have an interesting cadence to your speech. Is this a real voice or AI?
@HowDoTutorials 3 months ago
A bit of both. I record the narration with my real voice, edit out the spaces and ums (mostly), and then pass it through ElevenLabs speech to speech.
@97BuckeyeGuy 3 months ago
@@HowDoTutorials That explains why I kept going back and forth with my opinion on this. Thank you 👍🏼
@lukeovermind 20 days ago
@HowDoTutorials That's very clever. It's a very soothing voice.
@jocg9168 3 months ago
Great workflow for fixes. I'm wondering about proper scenes where characters are not looking at the camera (3/4 view, looking at a phone, using a tablet), rather than creepily staring at the camera. Am I the only one who gets bad results on those types of images? I will definitely try this new fix. Thanks for the tip.
@JonDankworth 3 months ago
Stable Cascade takes too long, only to create images that are not truly better.
@HowDoTutorials 3 months ago
I agree for the most part. There are a few things it can do better than other models without special nodes, such as text and higher resolutions, but in general I think its strengths won’t really show until some fine tunes come out. That said, given its current licensing and the upcoming SD3 release, that may not matter much either.
@AngryApple 3 months ago
Would a Lightning model be a plug-and-play replacement for this, just because of the different license?
@HowDoTutorials 3 months ago
I've tested the JuggernautXL lightning model and it works great without any modification to the workflow. Some models may work better with different schedulers, cfg, etc., but in general they should work fine.
@AngryApple 3 months ago
@HowDoTutorials I will try it, thanks.
@JefHarrisnation 3 months ago
This was a huge help, especially showing where the models go. Running smoothly and producing some very nice results.
@kamruzzamanuzzal3764 3 months ago
So that's how you correctly use turbo models. Until now I used 20 steps with turbo models and just 1 pass; it seems using 2 passes with 5 steps each is much, much better. What about using Deep Shrink alongside it?
@HowDoTutorials 3 months ago
I just played around with it a bit and it doesn’t seem to have much of an effect on this workflow, likely because of the minimal upscaling and lower denoise value, but thanks for bringing that node to my attention! I can definitely see a lot of other uses for it. EDIT: I realized I was using it incorrectly by trying to inject it into the second pass. Once I figured out how to use it properly, I could definitely see the potential. It's hard to tell whether the Kohya by itself is better than the two pass or not, but Kohya into a second pass is pretty great. I noticed that reducing CFG and steps for the second pass is helpful to reduce the "overbaked" look.
@rovi-farmiigranhermanodela8693 3 months ago
What about all those videos where they use inpainting tools to edit pictures or to apply "filters"? Which AI can do that?
@HowDoTutorials 3 months ago
You can do that with ComfyUI too, though in-painting can be done a bit more easily with AUTOMATIC1111. I don’t have a video covering in-painting yet, but this method can give you something like the “filters” you mentioned: Reimagine Any Image in ComfyUI ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-CRURtIltf58.html
@AkoZoom 3 months ago
Very easy step-by-step tutorial, thank you! But my RTX 3060 12GB takes nearly 2 minutes for the 4 images, and the last one (which has an H special) is also different (?)
@HowDoTutorials 3 months ago
You may want to try using the lite models or adjusting the resolution down to 1024x1024 to improve generation speed. You may also have better luck using the new models specifically for ComfyUI. Here's an updated tutorial: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-GOnMXejA8Fc.html
@AkoZoom 3 months ago
@HowDoTutorials Oh yep, thank you! So the models no longer go in the UNet folder but in the regular checkpoints folder.
@kamruzzamanuzzal3764 4 months ago
Question: what happened to Stable Cascade stage A (the VAE)? I don't see it. Edit: OK, got the answer; another person already asked it before. Anyway, subscribed, because not many people are experimenting with Stable Cascade and sharing their findings like you.
@WalidDingsdale 4 months ago
I really haven't figured out the applicability of Cascade yet; thanks for sharing all the same.
@HowDoTutorials 4 months ago
I’ve noticed its biggest strengths are composition and text while still allowing variety in output. There are some great fine tunes for SDXL out there that offer better composition for certain styles, but can be more limited in their breadth. Honestly though, I think the main upside of Stable Cascade is not the current checkpoint, but the method and how it allows for creating fine tunes at a reduced cost.
@andrewqUA 4 months ago
Where is the VAE "stage_a"? Or is it not necessary?
@HowDoTutorials 4 months ago
Not necessary as a separate model for this method. It’s been baked in as the VAE of the stage b checkpoint for the ComfyUI-specific models.
@TinusvdMerwe 4 months ago
Fantastic. I appreciate the time taken to explain some concepts in detail, and the generally easy, unhurried tone.
@Vectorr66 4 months ago
Are you on Discord?
@HowDoTutorials 4 months ago
Not currently, but it's probably about time for me to make an account and get on there. 😅
@Vectorr66 4 months ago
I do wish you could make the noodles less noticeable, ha.
@HowDoTutorials 4 months ago
Usually I'll adjust it for myself to make things look cleaner, but it makes it harder to see what the connections are so I switch it to ultimate noodle mode for videos. You can change it by clicking the gear in the menu to the right and switching the Link Render mode.
@Vectorr66 4 months ago
I do agree about the overbaked look.
@Vectorr66 4 months ago
Thanks!
@Metalman750BC 4 months ago
Excellent! Great explanation.
@KarlitoStudio 4 months ago
Thanks... is there a workflow to fix faces and hands with Cascade? 🤗
@HowDoTutorials 4 months ago
I've been working on one for faces that I'll be sharing in an upcoming video. In the meantime, explore this node (github.com/mav-rik/facerestore_cf) in combination with HiRes fix. For hands, you might try HandRefiner (github.com/wenquanlu/HandRefiner), which is included with the controlnet_aux preprocessors for ComfyUI (github.com/Fannovel16/comfyui_controlnet_aux).
@Vectorr66 4 months ago
I see there is an update; is there a video coming for that? Do we still need this workflow for Comfy? Sorry, I am new to Comfy and just curious. Thanks!
@HowDoTutorials 4 months ago
I’m actually editing the video right now! It’ll cover the new method as well as some techniques I’ve discovered. Should be up by this evening. 😁
@Vectorr66 4 months ago
@HowDoTutorials thanks!
@entertainmentchannel9632 4 months ago
I get this error (AttributeError, node 16 KSamplerAdvanced): ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1688, in __getattr__: raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'") AttributeError: 'ModuleList' object has no attribute '1'
@HowDoTutorials 4 months ago
It's likely you are attempting to use an SDXL checkpoint with SD1.5 control nets. To fix, either switch to a sd-1.5 based checkpoint or use control net models for SDXL. You can find links to the SDXL controlnets here: huggingface.co/docs/diffusers/v0.20.0/en/api/pipelines/controlnet_sdxl
@Luxcium 4 months ago
I would like to have seen the connections between the nodes. It was a great video, thanks.
@HowDoTutorials 4 months ago
I appreciate the feedback! Someone else mentioned that as well and I’ll definitely keep it in mind going forward. Here’s the link to download the workflow if you want to explore it a bit: comfyworkflows.com/workflows/15b50c1e-f6f7-447b-b46d-f233c4848cbc
@Luxcium 4 months ago
@HowDoTutorials I don't need a link (which is nice though). I hope you get more free time to make more content; it is interesting. I personally love the idea of flow charts, and I would love someone with your skills to explain all the nodes, the logic behind them, and how the connections can be made in different ways to build more workflows. Getting insights from people like you is really interesting. I understand the possibility of time constraints, but I think you should be aware that the stuff you make is really useful and valuable.
@HowDoTutorials 4 months ago
Thank you for the encouragement @Luxcium. I really enjoy making this content and am grateful people are finding it useful. It is indeed a struggle to find time to make videos at the moment, but I’m hoping to get closer to a weekly release schedule soon. I appreciate your support and will keep your suggestions for content in mind.
@snordtjr 4 months ago
Very cool, will try this out.