Thanks for your videos! They worked great, but now (due to updates?) this workflow no longer works; it seems to be lacking the BNK_Unsampler. Is there a workaround for this? I've tried, but aside from stumbling around, this is way over my head. Thanks for any help you might have, and thanks again for the videos - well done!
Finally, a guide that I can comprehend and follow, and even play around with. You are so kind to list all the resources in the description in a well-organized manner. Instant sub from me.
Interesting, but the 2nd method does not work for me. No matter the resolution, I always get this error: `Error occurred when executing FaceDetailer: The size of tensor a (0) must match the size of tensor b (256) at non-singleton dimension 1` - File "C:\Users\Alex\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute: output_data, output_ui = get_output_data(obj, input_data_all)
One question: what can I do if I have several people in my picture, e.g. in the background? Can I somehow influence FaceDetailer to only refine the main person in the middle?
Thank you for sharing! Subscribed :) I have a question about the AIO Aux Preprocessor in the 2nd SDXL workflow. I don't see a LineartStandardPreprocessor option; the closest one is LineartPreprocessor, but it throws the error "Error occurred when executing AIO_Preprocessor: LineartDetector.from_pretrained() got an unexpected keyword argument 'cache_dir'"
Much better explanation of Cascade in ComfyUI. Thank you. Will try this today. The b-to-c-and-then-a approach is a bit confusing and only works sometimes. This is much simpler and requires fewer files.
Amazing share! Thanks again. I am old and have lots of b/w photos, so I will give it a try. And if I can, I will try to swap the faces with current ones :) Maybe you can teach us how to swap faces; I would definitely appreciate it!
Yeah I’ve been working on making things a little easier to parse going forward. There’s a link to the workflow in the description if you want to load it up and poke around a bit.
Sorry to hear it's giving you trouble. Here are a couple things to try: 1. Make sure you're using the correct decoder model for your SVD model. (e.g. If using the "xt" model be sure you're using the "xt" decoder) 2. You may be running out of memory. Try lowering the `video_frames` parameter. You might also try using the non-xt model and decoder.
After I discovered ComfyUI, my life changed forever. It has been a dream of mine for 5 years now to be able to run models and manipulate their latent spaces locally. ...But then I discovered just how hard it is for a noob like me to get a lot of these workflows working at all, even after downloading and installing all of the required models in the proper versions, with all of the nodes loaded and running together normally. This was one of about 3 that actually worked for me, and it is BY FAR my favorite. I downloaded it as a "color restorer" and it works beautifully for that purpose, but I was so excited to see it featured in this video because it already works for me! Now I can unlock its full potential, and it turns out all I needed were the proper prompts! THANK YOU so much for making these workflows and these video tutorials; I can't tell you how much you've helped me! If you ever decide to update any of this to utilize SDXL, I am so on that...
I loved reading this comment and I'm so happy I could help make this tech a bit more accessible. Here's a version of the "Reimagine" workflow updated for SDXL: comfyworkflows.com/workflows/4fc27d23-faf3-4997-a387-2dd81ed9bcd1 You'll also need these additional controlnets for SDXL: huggingface.co/stabilityai/control-lora/tree/main/control-LoRAs-rank128 Have fun and don't hesitate to reach out here if you run into any issues!
@@HowDoTutorials I thought I had already responded to this, but apparently I didn't! Anyway, THANK YOU for posting the link to that workflow! It's running, but I can't get it to colorize anymore, which was my main use for it. :( Oh well, it can still edit B/W images, and then I can colorize them in the other workflow, but I would love to be able to do both things in one. I can colorize things, but not people. I've tried every conceivable prompt. :(
@@RuinDweller I've been having trouble getting it to work as well. Seems there's something about SDXL that doesn't play with that use case quite as well. I'll keep at it and let you know if I figure something out.
Hi, can you help me solve this error: "module diffusers has no attribute StableCascadeUnet"? I installed Cascade in Stable Diffusion, but I got this error after installing all the models on Windows 11.
It sounds like your diffusers package may be out of date. If you haven't already, try updating ComfyUI. If you have the Windows portable install, you can go into the ComfyUI_windows_portable/update folder and run `update_comfyui_and_python_dependencies.bat`.
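For anyone following along, here's a rough sketch of both update routes, assuming the standard portable folder layout (adjust the paths if your install differs):

```shell
REM From inside the ComfyUI_windows_portable folder (assumed standard layout):
REM run the bundled updater, which refreshes ComfyUI and its Python dependencies
update\update_comfyui_and_python_dependencies.bat

REM Or, if you only want to upgrade diffusers, use the embedded Python directly
python_embeded\python.exe -m pip install --upgrade diffusers
```

After updating, fully restart ComfyUI (not just a browser refresh) so the new package gets picked up.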
Great workflow for the fix. With proper scenes where characters are not actually looking at the camera - 3/4 view, looking at a phone, using a tablet, or something similar, rather than creepily staring at the camera - I'm wondering if I'm the only one who gets bad results on those types of images. But I will definitely try this new fix. Thanks for the tip.
I agree for the most part. There are a few things it can do better than other models without special nodes, such as text and higher resolutions, but in general I think its strengths won’t really show until some fine tunes come out. That said, given its current licensing and the upcoming SD3 release, that may not matter much either.
I've tested the JuggernautXL lightning model and it works great without any modification to the workflow. Some models may work better with different schedulers, cfg, etc., but in general they should work fine.
So that's how you correctly use turbo models! Until now I used 20 steps with turbo models and just one pass; it seems using two passes with 5 steps each is much, much better. What about using Deep Shrink alongside it?
I just played around with it a bit and it doesn’t seem to have much of an effect on this workflow, likely because of the minimal upscaling and lower denoise value, but thanks for bringing that node to my attention! I can definitely see a lot of other uses for it. EDIT: I realized I was using it incorrectly by trying to inject it into the second pass. Once I figured out how to use it properly, I could definitely see the potential. It's hard to tell whether the Kohya by itself is better than the two pass or not, but Kohya into a second pass is pretty great. I noticed that reducing CFG and steps for the second pass is helpful to reduce the "overbaked" look.
You can do that with ComfyUI too, though in-painting can be done a bit more easily with AUTOMATIC1111. I don’t have a video covering in-painting yet, but this method can give you something like the “filters” you mentioned: Reimagine Any Image in ComfyUI ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-CRURtIltf58.html
Very easy step-by-step tutorial! Thank you! But my RTX 3060 12GB takes nearly 2 minutes for the 4 images, and the last one (which has a special H) also looks different (?)
You may want to try using the lite models or adjusting the resolution down to 1024x1024 to improve generation speed. You may also have better luck using the new models specifically for ComfyUI. Here's an updated tutorial: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-GOnMXejA8Fc.html
Question: what happened to Stable Cascade stage A (the VAE)? I don't see it. Edit: OK, got the answer; another person already asked it. Anyway, subscribed, because not many people are experimenting with Stable Cascade and sharing their findings like you.
I’ve noticed its biggest strengths are composition and text while still allowing variety in output. There are some great fine tunes for SDXL out there that offer better composition for certain styles, but can be more limited in their breadth. Honestly though, I think the main upside of Stable Cascade is not the current checkpoint, but the method and how it allows for creating fine tunes at a reduced cost.
Usually I'll adjust it for myself to make things look cleaner, but that makes it harder to see what the connections are, so I switch to ultimate noodle mode for videos. You can change it by clicking the gear in the menu on the right and switching the Link Render mode.
I've been working on one for faces that I'll be sharing in an upcoming video. In the meantime, explore this node (github.com/mav-rik/facerestore_cf) in combination with HiRes fix. For hands, you might try HandRefiner (github.com/wenquanlu/HandRefiner), which is included with the controlnet_aux preprocessors for ComfyUI (github.com/Fannovel16/comfyui_controlnet_aux).
I see there is an update - is there a video coming for that? Do we still need this workflow that's on here for Comfy? Sorry, I'm new to Comfy and just curious. Thanks!
I get this error (AttributeError 16 KSamplerAdvanced): ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1688, in __getattr__ raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'") AttributeError: 'ModuleList' object has no attribute '1'
It's likely you're attempting to use an SDXL checkpoint with SD1.5 ControlNets. To fix it, either switch to an SD1.5-based checkpoint or use ControlNet models made for SDXL. You can find links to the SDXL ControlNets here: huggingface.co/docs/diffusers/v0.20.0/en/api/pipelines/controlnet_sdxl
I appreciate the feedback! Someone else mentioned that as well and I’ll definitely keep it in mind going forward. Here’s the link to download the workflow if you want to explore it a bit: comfyworkflows.com/workflows/15b50c1e-f6f7-447b-b46d-f233c4848cbc
@@HowDoTutorials I don't need a link (which is nice, though). I hope you get more free time to make more content; it's interesting, and I personally love the idea of flow charts a lot. I would love for someone with your skills to explain all the nodes, the logic behind them, and how the connections can be made in different ways to build more workflows. Getting insights from people like you is really interesting. I understand the time constraints, but I think you should be aware that the stuff you make is really useful and valuable.
Thank you for the encouragement @Luxcium. I really enjoy making this content and am grateful people are finding it useful. It is indeed a struggle to find time to make videos at the moment, but I’m hoping to get closer to a weekly release schedule soon. I appreciate your support and will keep your suggestions for content in mind.